Steve Carlip: Dimensional Reduction at Small Scale

marcus
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/1-Carlip.pdf

Steve Carlip gave the first talk of a weeklong conference on Planck Scale physics that just ended July 4 (yesterday). The pdf of his slides is online.

We here at PF have been discussing, off and on since 2005, the odd coincidence that several very different approaches to quantizing GR give a spacetime of less than 4D at small scale. As you zoom in and measure things like areas and volumes at smaller and smaller scales, you find in these various quantized-geometry models that the geometry behaves as if it had a fractional dimension less than 4, going continuously down through 3.9 and 3.8 and 3.7... and finally approaching 2D.

Dimensionality does not have to be a whole number, like exactly 2 or exactly 3. There are easy ways to measure the dimensionality of whatever space you are in: for example by comparing radius with volume to see how the volume grows, or by running a random-walk diffusion and seeing how fast the diffusion spreads. These easy ways of measuring dimension experimentally can give non-integer answers, and there are many well-known examples of spaces you can construct that have non-whole-number dimension. Renate Loll had a SciAm article about this, with nice illustrations, explaining why this could be how it works at the Planck scale. The link is in my signature if anyone wants it.
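To make those "easy ways to measure" concrete, here is a toy sketch (my own illustration in Python, not anything from Loll's or Carlip's slides): count how many lattice points fall inside a ball as the radius grows, and read the dimension off the slope of log(volume) against log(radius). On a fractal-like space the same recipe returns a non-integer number.

```python
# Toy dimension measurement: volume-vs-radius scaling on an ordinary integer
# lattice.  The fitted slope of log V(r) against log r approaches the nominal
# dimension as the radius grows (with finite-size corrections at small r).
import itertools
import numpy as np

def ball_volume(radius, dim):
    """Count integer lattice points within `radius` of the origin in `dim` dimensions."""
    r = int(radius)
    return sum(1 for p in itertools.product(range(-r, r + 1), repeat=dim)
               if sum(x * x for x in p) <= radius ** 2)

def estimate_dimension(dim, radii=(4, 6, 8, 10, 12)):
    """Least-squares slope of log V(r) vs log r."""
    log_r = np.log(radii)
    log_v = np.log([ball_volume(r, dim) for r in radii])
    slope, _ = np.polyfit(log_r, log_v, 1)
    return slope

for d in (2, 3, 4):
    print(f"nominal dimension {d}: measured ~ {estimate_dimension(d):.2f}")
```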

So from our point of view it's great that Steve Carlip is focusing some attention on this strange coincidence. Why should the very different approaches of Renate Loll, Martin Reuter, Petr Horava, and even Loop QG all arrive at this bizarre spontaneous dimensional reduction at small scale? (That is the title of Carlip's talk.)

Carlip is prominent and a widely recognized expert, so IMHO it is nice that he is thinking about this.

Here is the whole schedule of the Planck Scale conference, which now has online PDFs of many of the talks from the whole week:
http://www.ift.uni.wroc.pl/~planckscale/index.html?page=timetable
 


Renate Loll also discussed this coincidence among different QG approaches regarding spontaneous dimensional reduction from 4D down to 2D.
See her slide #9, two slides from the end.
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/4-Loll.pdf
She gives arXiv references to papers by herself, by Reuter, by Horava, by Modesto, and the one by Benedetti, etc., so you have a more complete set of links to refer to than you get with Steve Carlip. She has been writing about this since 2005 and has the whole thing in focus and in sharp perspective. The interesting thing is that Steve Carlip is a completely different brain now looking at the same coincidences, and he will likely make something different out of them.

I'm beginning to think this Planck Scale conference at Wroclaw (still called "Breslau" by many people, despite the Polish spelling) was a great conference. Perhaps it will turn out to have been the best one of summer 2009. Try sampling some of the talks and see what you think.
 


An interval is defined by the events at its ends. There is no reason for us to claim there are intervals that do not have events at their ends, because an interval without events is immeasurable and has no meaning except classically.

Where there are no events, there are no intervals; and where there are no intervals, there is no dimension (for dimensions are sets of orthogonal, or at least non-parallel, intervals); and where there is no dimension, there is no space-time. For this reason we see space and time (i.e. classical concepts) as inapplicable to systems that are isolated from us. Thus a closed system like an entangled pair of particles is welcome to disobey causality just as readily as are a couple of local particles, because in both cases there is no space-time between them. Likewise, time executes infinitely fast within a quantum computer, because it is a closed system and there are no intervals between it and the rest of the universe to keep its time rate synchronized with ours.

It is a grave mistake to think of quantum mechanical events as occurring "in space-time". Rather, it is those events that define space-time, and there is no space-time where there are no events.

The fewer the events, the fewer orthogonal (and non-parallel) intervals, and thus the fewer dimensions.
 


marcus said:
Renate Loll also discussed this coincidence among different QG approaches regarding spontaneous dimensional reduction from 4D down to 2D.
See her slide #9, two slides from the end.
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/4-Loll.pdf
She gives arXiv references to papers by herself, by Reuter, by Horava, by Modesto, and the one by Benedetti, etc., so you have a more complete set of links to refer to than you get with Steve Carlip. She has been writing about this since 2005 and has the whole thing in focus and in sharp perspective. The interesting thing is that Steve Carlip is a completely different brain now looking at the same coincidences, and he will likely make something different out of them.

I'm beginning to think this Planck Scale conference at Wroclaw (still called "Breslau" by many people, despite the Polish spelling) was a great conference. Perhaps it will turn out to have been the best one of summer 2009. Try sampling some of the talks and see what you think.


Interesting.

By the way Marcus, is there a reason why you think the idea from string theory that the universe may have more than four dimensions is silly, but you seem to like the idea that the universe at a microscopic scale may have fewer than four dimensions? I personally think that we should be open-minded to all possibilities and use physics (and maths) to guide us, not personal prejudices. But you seem closed-minded to one possibility while being open-minded to the other. Is it just because you don't like anything that comes from string theory and like everything that comes from LQG, or is there a more objective reason?

Thanks for your feedback. Your work in bringing up interesting papers and talks is sincerely appreciated!
 


What power of loupe do we need to see the Planck scale and LQG?
 


Bob_for_short said:
What power of loupe do we need to see the Planck scale and LQG?
That's a good question and I like the way you phrase it! The aim is to observe Planck scale effects. There are two instruments recently placed in orbit which many believe may be able to measure Planck scale effects.

These are Fermi (formerly called GLAST, the Gamma-ray Large Area Space Telescope) and Planck (the successor to the Wilkinson Microwave Anisotropy Probe).

The Fermi collaboration has already reported an observation that shows the desired sensitivity. To get a firm result one will need many tens of such results. One has to record gamma-ray bursts arriving from several different distances.

The Fermi collaboration reported at a conference in January 2009 and published in the journal Science soon after, I think in March. They reported that with 95% confidence the quantum-gravity mass MQG is at least 0.1 MPlanck.

The Science article is pay-per-view but Charles Dermer delivered this powerpoint talk
http://glast2.pi.infn.it/SpBureau/g...each-contributions/talk.2008-11-10.5889935356
at Long Beach at the January meeting of the American Astronomical Society (AAS). It summarizes observational constraints on the QG mass from other groups and discusses the recent Fermi results.

The main point is we already have the instruments to do some of the testing that is possible. They just need time to accumulate more data. The result so far that MQG > 0.1 MPlanck is not very helpful. But by observing many more bursts they may be able to say
either MQG > 10 MPlanck or MQG < 10 MPlanck. Either result would be a big help. The first would pretty much kill the DSR (deformed special relativity) hypothesis and the second would strongly favor DSR. If you haven't been following the discussion of DSR, Carlo Rovelli posted a short paper about it in August 2008, and there are earlier longer papers by many other people in the LQG community. Rovelli seems to be a DSR skeptic, but he has this one paper which I think wouldn't be such a bad introduction.

Several of us discussed that 6 page Rovelli paper here:
https://www.physicsforums.com/showthread.php?p=2227272

Amelino-Camelia gave a seminar talk at Perimeter this spring, interpreting the Fermi result. He is a QG phenomenologist who has made QG testing his specialty and written a lot about it; he gets to chair the parallel session on that line of research at conferences. He was happy with the Chuck Dermer paper reporting a slight delay in the arrival time of some high-energy gamma rays, but he emphasized the need to observe a number of bursts at different distances to make sure the effect is distance-dependent. Anyone who is really curious about this can watch the video.

The url for the video is in the first post of this thread:
https://www.physicsforums.com/showthread.php?t=321649

======================
Various papers have been written about QG signatures in the microwave background temperature and polarization maps. Right now I don't know of firm predictions, that is, of some theory that will live or die depending on detail found in those maps. The Planck spacecraft just arrived out at the L2 Lagrange point and began observing, and I'm watching to see whether people in the QG community can make use of the improved resolution of the map.
 


marcus said:
The result so far that MQG > 0.1 MPlanck is not very helpful. But by observing many more bursts they may be able to say
either MQG > 10 MPlanck or MQG < 10 MPlanck.

This is not correct. Smolin argued for a distribution function for the delay of the GRB photons, so merely placing the cutoff at the first photons detected may not be the correct approach.
 


MTd2 said:
This is not correct. Smolin argued for a distribution function for the delay of the GRB photons, so merely placing the cutoff at the first photons detected may not be the correct approach.

I'm not sure what you mean. Who is not correct? As I recall, the parameter MQG has been in general use for some time. Smolin (2003) used it, and so did the John Ellis + MAGIC paper (2007). Charles Dermer used it in his January 2009 report for the Fermi collaboration, which then used it in its Science article.

It is not a cutoff and has nothing to do with a cutoff as far as I know. Just a real simple handle on quantum-geometry dispersion. You may know all this and think it is basic (it is basic) and maybe you are talking about something else more sophisticated. I want to keep it simple.

Papers typically look at two QG masses, one appearing in a first-order dispersion relation and the other in a second-order one, so you see alternative formulas with both MQG1 and MQG2. When they don't make the distinction, they are talking about the first-order one.

I don't like the notation personally. I think they should use the symbol EQG because it is really an energy they are talking about, expressed either in GeV or in terms of the Planck energy EPlanck.

EQG is the Planck-scale energy by which the dispersion is hypothetically suppressed.

The hypothesis to be tested is that a photon of energy E travels not at c but at

(1 - E/EQG) c.

More complicated behavior could be conjectured and tested; maybe there is no first-order dependence but there is some second-order effect of E on the speed. But this linear hypothesis is simple, and the observational astronomers can test it and possibly rule it out (if it is wrong) fairly quickly.
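To get a feel for the numbers, here is a back-of-the-envelope sketch (mine, not the Fermi collaboration's analysis; it ignores the redshift integral that a real GRB study includes): under the linear hypothesis, the extra travel time of a photon of energy E over a distance D is roughly (E/EQG)(D/c).

```python
# Rough arrival-time delay implied by v = (1 - E/E_QG) c, for illustration only.
E_PLANCK_GEV = 1.22e19          # Planck energy in GeV
C = 3.0e8                       # speed of light, m/s
MPC = 3.086e22                  # metres per megaparsec

def delay_seconds(photon_energy_gev, distance_mpc, e_qg_gev=E_PLANCK_GEV):
    """Delay of a high-energy photon relative to a low-energy one over the
    given distance, under the first-order dispersion hypothesis."""
    travel_time = distance_mpc * MPC / C
    return (photon_energy_gev / e_qg_gev) * travel_time

# A 30 GeV photon from a burst roughly 2000 Mpc away, with E_QG = E_Planck:
print(f"{delay_seconds(30.0, 2000.0):.2f} s")   # of order half a second
```

So for photons in the tens-of-GeV range from bursts gigaparsecs away, the predicted delay is a sizeable fraction of a second, which is why timing GRB light curves can reach an MQG near the Planck mass at all.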

The way you would rule it out is to raise the lower limit on MQG, or on EQG as I would prefer (my two cents).
Intuitively, if there is a Planck-scale DSR effect, then MQG is on the order of the Planck mass. So if you can show that dispersion is more strongly suppressed than that, if you can show that the parameter is, say, > 10 times Planck, that would effectively rule out DSR (or at least rule out any simple first-order theory).

To be very certain perhaps one should collect data until one can confidently say that the parameter is > 100 times Planck. But I personally would be happy to reject DSR with only a 95% confidence result saying > 10 times Planck.

On the other hand, if DSR is not wrong, then groups like Fermi will continue to observe and will come up with some result like the parameter is < 10 Planck. Then it will be bracketed in some interval like [0.1 Planck, 10 Planck].
Then the dam on speculation will break and the cow will float down the river.
Because we will know that there is a linear dispersion coefficient actually in that range.
Everybody including John Ellis, with his twinkly eyes and Tolkien beard, will go on television to offer an explanation. John Ellis has already offered an explanation in advance of any clear result of this sort. And straight quantum gravitists will be heard as well. It is an exciting thought, but we have to wait and see until there are many light curves of many distant GRBs.

The Smolin (2003) paper was titled something like "How far are we from a theory of quantum gravity?"
The John Ellis + MAGIC paper was from around 2007. Here is the link:
http://arxiv.org/abs/0708.2889
"Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope"
 


I am not referring to either. I am referring to the fuzzy dispersion, p. 22, section 5.1:

http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.3731v3.pdf

Note eqs. 20 and 21.

If you suppose that the time delay is the mean of a Gaussian, it does not make sense to talk of a first- or second-order approximation. You have to look at the infinite sum, that is, the Gaussian.
 
  • #10


I see! But that is one small special section of the Amelino-Smolin paper, on "fuzzy dispersion". For much of the rest of the paper they are talking about the simple MQG that I am used to.
They cite the Fermi collaboration's paper in Science, and quote the result
MQG > 0.1 MPlanck.
It's true they look ahead to more complicated types of dispersion and to effects that more advanced instruments beyond Fermi might observe. They even consider superluminal dispersion (not just the slight slowing down when a photon has enough energy to wrinkle the geometry it is traveling through).
They want to open the QG dispersion question up and look at second-order and non-simple possibilities, which is what scientists are supposed to do. We expect it.
But our PF member Bob asked a question about the LOUPE you need to see Planck-scale wrinkles! The tone of his question is keep-it-basic. I don't want to go where Amelino and Smolin go in that 2009 paper. I want to say: here is an example of the magnifying glass you see wrinkles with. The loupe (the jeweler's squint-eye microscope) is this spacecraft called Fermi, which is now in orbit observing gamma-ray bursts.

MTd2, let's get back to the topic of spontaneous dimensional reduction. I would like to hear some of your (and others) ideas about this.
I suspect we have people here at PF who don't immediately see the difference between a background independent theory (B.I. in the sense that LoopQG people speak of it) where you don't tell space what dimensionality to have and a fixed geometric background theory where you set up space ahead of time to have such and such dimensionalities.

In some theories (background-dependent, B.D.) you put dimensionality in by hand at the beginning.
In other (B.I.) theories the dimensionality can be anything and it has the status of a quantum observable, or measurement. And it may be found, as you study the theory, that to your amazement the dimensionality changes with scale, and gradually gets lower as the scale gets smaller. This behavior was not asked for and came as a surprise.

Significantly, I think, it showed up first in the two approaches that are the most minimalist attempts to quantize General Relativity: Reuter following Steven Weinberg's program of finding a UV fixed point in the renormalization flow (the asymptotic safety approach), and Loll letting the pentachorons assemble themselves in a swarm governed by Regge's General Relativity without coordinates. Dimensional reduction appeared first in the approaches that went for quick success with the bare minimum of extra machinery, assumptions, and new structure. Both had their first papers appear in 1998.

And Horava, the former string theorist, came in ten years later with another minimalist approach which turned out to get the same dimensional reduction. So what I am thinking is that there must be something about GR itself. Maybe it is somehow built into the nature of GR that if you try to quantize it in the simplest possible way you can think of, without any strings/branes/extra dimensions or anything else you dream up, if you are completely unimaginative and go directly for the immediate goal, then maybe you are destined to find dimensional reduction at very small scale. (If your theory is B.I., with no fixed background geometry which would preclude the reduction from happening.) Maybe this says something about GR that we didn't think of before. This is just a vague hunch. What do you think?

===========EDIT: IN REPLY TO FOLLOWING POST========
Apeiron,
these reflections in your post are some of the most interesting ideas (to me) that I have heard recently about what could be the cause of this curious agreement among several very different methods of approach to Planck-scale geometry (or whatever is the ground at the roots of geometry). I don't have an alternative conjecture to offer; in fact I am quite intrigued by what you say here and want to mull it over for a while.

I will answer your post #11 here, since I can still edit this one, rather than making a new post just to say this.
 
  • #11


marcus said:
If you are completely unimaginative and go directly for the immediate goal, then maybe you are destined to find dimensional reduction at very small scale. (If your theory is B.I., with no fixed background geometry which would preclude the reduction from happening.) Maybe this says something about GR that we didn't think of before. This is just a vague hunch. What do you think?

Can you explain just what it is in the approaches that leads to this dimensional reduction?

In the CDT story, is it that as path lengths shrink towards Planck scale, the "diffusion" gets disrupted by quantum jitter? So instead of an organised 4D diffusion, dimensionality gets so disrupted there are only linear jumps in now unspecified, uncontexted, directions?

Loll writes:
"In our case, the diffusion process is defined in terms of a discrete random walker between neighbouring four simplices, where in each discrete time step there is an equal probability for the walker to hop to one of its five neighbouring four-simplices."

So it does seem to be about the choice being disrupted. A random walker goes from four options (or 3+1) down to 1+1.
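For anyone who wants to see how a return-probability measurement becomes a dimension, here is a minimal toy version (mine, using a trivial walk on a regular lattice rather than a CDT triangulation): the spectral dimension is read off from how fast the walker's probability of being back at its starting point falls with diffusion time, d_s = -2 dlnP/dln(sigma).

```python
# Toy spectral-dimension estimate.  The walker steps +-1 in each of `dim`
# coordinates independently, so its d-dimensional return probability is just
# the 1D return probability raised to the d-th power; the log-log slope of
# that probability against the number of steps gives d_s.
import numpy as np
from math import comb

def return_prob_1d(n):
    """Probability that a 1D +-1 random walk is back at the origin after 2n steps."""
    return comb(2 * n, n) / 4.0 ** n

def spectral_dimension(dim, n_small=50, n_large=200):
    p_small = return_prob_1d(n_small) ** dim
    p_large = return_prob_1d(n_large) ** dim
    slope = (np.log(p_large) - np.log(p_small)) / (np.log(n_large) - np.log(n_small))
    return -2.0 * slope

for d in (2, 3, 4):
    print(f"walk in {d} dimensions: spectral dimension ~ {spectral_dimension(d):.2f}")
```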

As to general implications for GR/QM modelling, I think it fits with a systems logic in which it is global constraints that produce local degrees of freedom. So the loss of degrees of freedom - the disappearance of two of the 4D - is a symptom of the asymptotic erosion of a top-down acting weight of constraint.

GR in a sense is the global macrostate of the system, and it attempts to impose 4D structure on its locales. But as the local limit is reached, there is an exponential loss of resolving power due to QM fluctuations. Pretty soon, all that is left are fleeting impulses towards dimensional organisation - the discrete 2D paths.

Loll's group would seem to draw a different conclusion, as she says that, continuing on through the Planck scale, she would expect the same 2D fractal structure to persist crisply down to infinite smallness. I would expect instead - based on a systems perspective - that the 2D structure would dissolve completely into a vague QM foam of sorts.

But still, the question is why does the reduction to 2D occur in her model? Is it about QM fluctuations overwhelming the dimensional structure, fragmenting directionality into atomistic 2D paths?

(For those unfamiliar with downward causation as a concept, Paul Davies did a good summary from the physicist's point of view where he cautiously concludes:)

In such a framework, downward causation remains a shadowy notion, on the fringe of physics, descriptive rather than predictive. My suggestion is to take downward causation seriously as a causal category, but it comes at the expense of introducing either explicit top-down physical forces or changing the fundamental categories of causation from that of local forces to a higher-level concept such as information.

http://www.ctnsstars.org/conferences/papers/The%20physics%20of%20downward%20causation.pdf
 
  • #12


I've gotten the impression that my points of view are usually a little alien to marcus's reasoning, but FWIW I'll add an opinion relating to this...

I also think the common traits of various programs are interesting, and possibly a sign of something yet to come, but I still think we are quite some distance away from understanding it!

I gave some personal opinions on the CDT logic before in one of marcus's threads: https://www.physicsforums.com/showthread.php?t=206690&page=3

I think it's interesting, but to understand why it's like this I think we need to somehow think in new ways. From what I recall of the CDT papers I read, their reasoning is what marcus calls minimalist. They start with the current model and do some minor technical tricks. But from the point of view of rational reasoning here, I think the problem is that the current models are not really a solid starting point. They are an expectation of nature, based on our history.

I think, similarly, that spontaneous dimensional reduction or creation might be thought of as representation optimizations, mixing the encoded expectations with new data. This somehow mixes "spatial dimensions" with time-history sequences, and also ultimately retransformed histories.

If you assume that these histories and patterns are physically encoded in a material observer, this alone puts a constraint on the combinatorics here. The proper perspective should make sure there are no divergences. It's when no constraints on the context are given that absurd things happen. Loll argued for the choice of gluing rules on the grounds that otherwise things would diverge, etc., but if you take the observer to be the computer, there simply is no physical computer around to actually realize a "divergence calculation" anyway. I don't think the question should even have to appear.

So, if you take the view that spatial structure is simply a preferred structure in the observer's microstructure (i.e. matter encodes the spacetime properties of its environment), then clearly the time histories of the measurement record are mixed by the spacetime properties as the observer's internal evolution takes place (i.e. internal processes).

I think this is a possible context in which to understand emergent dimensionality (reduction as well as creation). The context could then be an evolutionary selection for the observer's/matter's microstructure and processing rules, so as to simply persist.

In reasoning where the Einstein action is given and there is an unconstrained context for the theory, I think it's difficult to "see" the logic, since things that should be evolving and responding are taken as frozen.

So I personally think that in order to understand the unification of spacetime and matter, it's about as sinful to assume a background (action) as it is to assume a background space. I think it's interesting what Ted Jacobson suggested: that Einstein's equations might simply be seen as a state (a state of the action, that is), but that a more general case is still out there.

/Fredrik
 
  • #13


What puzzles me more is that Loll makes a big deal about the "magic" of the result. The reason for the reduction should be easy to state even if it is "emergent" from the equations.

As I interpret the whole approach, CDT first takes the found global state of spacetime - its GR-modelled 4D structure complete with "causality" (a thermodynamic arrow of time) and dark energy expansion. Then this global system state is atomised - broken into little triangulations that, like hologram fragments, encode the shape of the whole. Then a seed is grown like a crystal in a soupy QM mixture, a culture medium. And the seed regrows the spacetime from which it was derived. So no real surprise there?

Then a second part of the story is to insert a Planck-scale random walker into this world.

While the walking is at a scale well above QM fluctuations, the walker is moving in 4D. So a choice to jump from one simplex to another is also, with equal crispness, a choice not to jump in any of the other three directions. But then as scale is shrunk and QM fluctuations rise (I'm presuming), now the jump in some direction is no longer also a clear failure to have moved in the other directions. So dimensionality falls. The choice is no longer oriented as orthogonal to three other choices. The exact relationship of the jump to other potential jumps is instead just vague.

Is this then a good model of the Planckscale?

As Loll says, the traditional QM foam is too wild. CDT has a tamer story of a sea of unoriented, or very weakly oriented, actions. A lot of small definite steps - a leap to somewhere. A 1 bit action that defines a single dimension. Yet now there are no 0 bit non-actions to define an orientation to the other dimensions.

So from your point of view perhaps (and mine), a CDT type story may track the loss of context, the erosion of observerhood. In 4D realm, it is significant that a particle went a step in the x-axis and failed to step towards the y or z axis. The GR universe notices these things. But shrink the scale and this clarity of orientation of events is what gets foamy and vague.
 
  • #14


The way I interpret Loll's reasoning is that there is not supposed to be a solid, plausible, convincing reason. They simply try what they think is a conservative attempt to reinterpret the old path-integral approach, but with the additional detail of manually putting in a microstructure on the space of sampled spacetimes (implicit in their reasoning about gluing rules, etc.).

And they just note an interesting result, and suggest that the result itself is indirectly an argument for its "validity".

I think it's great to try stuff, but I do not find their reasoning convincing either. That doesn't, however, take away from the interesting results, but it does call into question whether the program is sufficiently ambitious and fundamental. It is still somewhat semiclassical to me.

/Fredrik
 
  • #15


> Then a second part of the story is to insert a Planck-scale random walker into this world.

IMO, I'm not sure they are describing what a small random walker would see; they try to describe what a hypothetical massive external observer would see when observing a supposed random walker doing a random walk in a subsystem of the larger context under the loupe. I.e. an external observer, observing the actions and interplay of the supposed random walker.

The problem is that when you use such a picture to again describe two interacting systems, you are not using the proper perspective.

What I am after is: when they consider the "probability of return" after a particular extent of evolution, who is calculating that probability, and more importantly, what are the consequences for the calculating device (observer) when feedback from actions based on this probability comes from the environment? The ensemble escape is IMO a static mathematical picture, not an evolving physical picture.

I think we need to question the physical basis even of statistics and probability here, in context, to make sense out of these path integrals. I guess I think there is actually an element of "reality" to the wavefunction, but a relative one. As soon as you introduce ensembles as hypothetical repeats of measurements, I feel we are really leaving reality. In real experimental statistics, the ensembles are still real memory records; the manifestation of the statistics is the memory record, encoding the statistics. Without such a physical representation the information just doesn't exist, IMO.

But I think this STILL says something deep about GR, which is what marcus was after. Something about the dynamical, relational nature of reality; but to understand WHY this is so, I personally think we need to understand WHY the GR action looks like it does. No matter how far CDT took us, I would still be left with a question mark on my forehead.

/Fredrik
 
  • #16


Hey, sorry to interrupt the deep conversation. But one can put forward a pretty boring argument as to why gravity seems to be two-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs. In RG language it makes sense that d=2 at the UV fixed point. The reason I suspect we can't renormalize the theory in a perturbative way is that in that case the dimensionality is fixed, or rather one expands around a 4d background in such a rigid way that the expansion is still 4d. Reuter suspects that it is the background-independent formulation of gravity that allows for the fixed point.

http://arxiv.org/abs/0903.2971


As for a deeper reason why this reduction happens, maybe we need to look no further than (a) why is G dimensionless in d=2? (i.e. a study of why GR takes the form it does) and (b) why is it important to have dimensionless couplings to renormalize QFTs?
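For anyone who wants that dimension counting spelled out (standard bookkeeping in natural units, hbar = c = 1, not tied to any particular paper above):

$$S_{EH} = \frac{1}{16\pi G}\int d^{d}x\,\sqrt{-g}\,R,
\qquad [d^{d}x] = M^{-d},\quad [R] = M^{2}
\;\Longrightarrow\; [G] = M^{2-d},$$

so Newton's constant is dimensionless exactly at d = 2, the same power-counting property that makes a coupling perturbatively renormalizable.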


I view both CDT and RG approaches to QG as attempts to push QFT and GR as far as they can go to help us understand the quantum nature of spacetime. They are humble attempts that hope to give us some insights into a more fundamental theory.
 
  • #17


Finbar said:
Hey, sorry to interrupt the deep conversation. But one can put forward a pretty boring argument as to why gravity seems to be two-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs. In RG language it makes sense that d=2 at the UV fixed point. The reason I suspect we can't renormalize the theory in a perturbative way is that in that case the dimensionality is fixed, or rather one expands around a 4d background in such a rigid way that the expansion is still 4d. Reuter suspects that it is the background-independent formulation of gravity that allows for the fixed point.

http://arxiv.org/abs/0903.2971

As for a deeper reason why this reduction happens, maybe we need to look no further than (a) why is G dimensionless in d=2? (i.e. a study of why GR takes the form it does) and (b) why is it important to have dimensionless couplings to renormalize QFTs? I view both CDT and RG approaches to QG as attempts to push QFT and GR as far as they can go to help us understand the quantum nature of spacetime. They are humble attempts that hope to give us some insights into a more fundamental theory.

To me that argument is not boring, Finbar. It makes good sense. It does not explain why or how space could experience this dimensional reduction at micro scale.

But it explains why a perturbative approach that is locked to a rigid 4D background geometry might not ever work.

It explains why, if you give more freedom to the geometry and make your approach background independent (or less dependent, at least), then your approach might find for you the UV fixed point where renormalization is natural and possible (the theory becomes predictive after a finite number of parameters are determined experimentally).

And it gives an intuitive guess as to why dimensionality MUST reduce as this UV fixed point is approached, that is, at small scale, as you zoom in and look at space with a microscope.

So your un-boring argument is helpful: it says why this must happen. But it is still not fully satisfying, because one still wonders what this strange microscopic geometry, or lack of geometry, could look like, and what causes nature to be made that way.

It's as if you are with someone who seems to have perfectly smooth skin, and you notice that she is sweating. So you say, "Well, her skin must have holes in it so that these droplets of water can come out." This person must not be solid but must actually be porous! You infer this, but you have not yet taken a magnifying glass and looked closely at the skin to see what the pores actually look like, and you have not told us why, when the skin grows, it always forms these pores in such a particular way.

You told us that dimensionality must reduce, because the field theory of geometry/gravity really is renormalizable, as we all suspect. But you did not show us a microscope picture of what it looks like zoomed in, or explain why nature could turn out to be not so smooth and solid as we thought.
 
  • #18


Finbar said:
one can put forward a pretty boring argument as to why gravity seems to be two-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs.

Are you saying here that the reason why a dimensional reduction to 2D is "a good thing" for the CDT approach is that a 2D realm would give us the strength of gravity required at the Planck scale? So take away dimensions and the force of gravity no longer dilutes with distance?

But that would not explain why the model itself achieves a reduction to 2D. Or was I just wrong that quantum fluctuations overwhelming the random walker is the essential mechanism of the model?
 
  • #19


Fra said:

I think we need to question the physical basis even of statistics and probability here, in context, to make sense out of these path integrals. I guess I think there is actually an element of "reality" to the wavefunction, but a relative one. As soon as you introduce ensembles as hypothetical repeats of measurements, I feel we are really leaving reality. In real experimental statistics, the ensembles are still real memory records; the manifestation of the statistics is the memory record, encoding the statistics. Without such a physical representation the information just doesn't exist, IMO.

/Fredrik

Everything may be flux (or process) when it comes to reality, but there must still be some kind of stability or structure to have observation - a system founded on meanings/relationships/interactions.

You ought to check out Salthe on hierarchy theory to see how this can be achieved in a "holographic principle" style way.

So an observer exists at a spatiotemporal scale of being. The observer looks upwards and sees a larger scale that is changing so slowly it looks frozen, static, permanent. The observer looks down to the micro-scale and now sees a realm that is moving so fast, equilibrating its actions so rapidly, that it becomes a solid blur - a different kind of permanence.

This is the key insight of hierarchy theory. If you allow a process to freely exist over all scales, it must then end up with this kind of observer-based structure. There will be a view up and a view down. And both will be solid event horizons that have effects which "locate" the observer.
 
  • #20


apeiron said:
Are you saying here that the reason why a dimensional reduction to 2D is "a good thing" for the CDT approach is that a 2D realm would give us the strength of gravity required at the Planck scale? So take away dimensions and the force of gravity no longer dilutes with distance?

But that would not explain why the model itself achieves a reduction to 2D. Or was I just wrong that quantum fluctuations overwhelming the random walker is the essential mechanism of the model?

I have a better understanding of the RG approach than of CDT. But both the CDT and RG approaches are valid at all scales, including the Planck scale. My point is not that it is a "good thing" but that it is essential for the theory to work at the Planck scale. In CDT you are putting in 4-simplices (4-dimensional triangles) and using the 4-dimensional Einstein-Hilbert (actually Regge) action, but this does not guarantee that a d=4 geometry comes out or that you can even compute the path integral (in this case a sum over triangulations). For the theory to be reasonable it must produce d=4 on large scales to agree with the macroscopic world. Now there are many choices one must make before computing a path integral over triangulations, such as what topologies to allow and how to set the bare couplings, etc. My guess is that in RG language this corresponds to choices of how to fix the gauge and which RG trajectory to put oneself on. Different choices can lead to completely different geometries on both large and small scales (as the original Euclidean DT showed). So one must choose wisely and hope that the geometry one gets out (a) is d=4 on large scales and (b) has a well-defined continuum limit. My point is that getting a good continuum limit would probably require the small scale to look two-dimensional; otherwise I would expect the theory to blow up (or possibly the geometry to implode).

Think of it this way: if we want to describe whatever it is that is actually happening at the Planck scale in the language of d=4 GR and QFT, we have to do it in such a way that the effective dimension is d=2, or else the theory blows up and can't describe anything. In turn this tells us that if we can push this GR/QFT theory to the Planck scale (if there's a UV fixed point), then it's very possible that the actual Planck-scale physics resembles this d=2 reduction. We could then perhaps turn the whole thing the other way around and say that Planck-scale physics must look like this d=2 GR/QFT theory in order to produce a d=4 GR/QFT theory on larger scales.

Of course this is all speculation. But the key question here is when, that is, at what energy scale, do we stop using GR and QFT and start using something else?
 
  • #21


So it is still a mystery why the reduction occurs as a result of the mathematical operation? Surely not.

I can see the value of having an operation that maps the 4D GR view onto the limit view, the QM Planck scale. It allows a smooth transition from one to the other. Before, there was a problem in the abruptness of the change. With CDT, there is a continuous transformation. Perhaps. If we can actually track the transformation happening.

To me, it seems most likely that we are talking about a phase-transition type of situation. So a smooth change - like the reduction of dimensionality in this model - could actually be an abrupt one in practice.

I am thinking here of Ising models and Kauffman auto-catalytic nets, that sort of thing.

So we could model GR as like the global magnetic field of a magnet. A prevailing dimensional organisation that is impervious to local fluctuations (of the iron dipoles).

But heat up the magnet and the fluctuations grow to overwhelm the global order. Dimensionality becomes fractured (rather than strictly fractal). Every dipole points in some direction, but not either in alignment, or orthogonal to that alignment, in any definite sense.

So from this it could be said that GR is the ordered state, QM the disordered. And we then want a careful model of the transition from one to the other.

But no one here seems to be able to spell out how CDT achieves its particular result. And therefore whether it is a phase transition style model or something completely different.
 
  • #22


apeiron said:
Everything may be flux (or process) when it comes to reality, but there must still be some kind of stability or structure to have observation - a system founded on meanings/relationships/interactions.

You ought to check out Salthe on hierarchy theory to see how this can be achieved in a "holographic principle" style way.

So an observer exists at a spatiotemporal scale of being. The observer looks upwards and sees a larger scale that is changing so slowly it looks frozen, static, permanent. The observer looks down to the micro-scale and now sees a realm that is moving so fast, equilibrating its actions so rapidly, that it becomes a solid blur - a different kind of permanence.

This is the key insight of hierarchy theory. If you allow a process to freely exist over all scales, it must then end up with this kind of observer-based structure. There will be a view up and a view down. And both will be solid event horizons that have effects which "locate" the observer.

Apeiron, this is right. I do not argue with this.

But in my view, there is still a physical basis for each scale.

To me the key decompositions are Actions (observer -> environment) and Reactions (environment -> observer). The Actions of an observer are, I think, constructed AS IF this most stable scale were universal. The action is at best of a probabilistic type. The reaction is, however, less predictable (undecidable). The reactions are what deform the action, by evolution. The action is formed by evolution, as it interacts and is subject to reactions.

I think the ACTION here is encoded in the observer. I.e. the complexity of the action, i.e. the information needed to encode the action, is constrained by the observer's "most stable view". The observer acts as if this were really fixed, but only in the differential sense, since during global changes the entire action structure may deform. It doesn't have to deform, but it can. The action is at equilibrium when it does not deform.

And I also think that this deformation can partially be seen as phase transitions, where the different phases (microstructure) encode different actions.

I think there is no fundamental stability or determinism, instead it's this process of evolution that produces effective stability and context.

/Fredrik
 
  • #23


I wrote this in a different context (loops and strings) but I think it may be valid here as well: the major problem of quantizing gravity is that we start with an effective theory (GR) and try to find the underlying microscopic degrees of freedom; currently it seems that they could be strings, loops (or better: spin networks), or something like that.

The question is which guiding principle guarantees that starting with GR as an "effective theory" and quantizing it by rules of thumb (quantizing always means using rules of thumb: choice of coordinates, Hamiltonian or path integral, ...) allows us to identify the microscopic degrees of freedom. Of course we will find some of their properties (e.g. that they "are two-dimensional"), but I am sure that all those approaches are limited in the sense that deriving microscopic degrees of freedom from macroscopic effective theories simply does not work!

Let's make a comparison with chiral perturbation theory:
- it is SU(2) symmetric (or if you wish SU(N) with N being the number of flavors)
- it is not renormalizable
- it incorporates to some extent principles like soft pions, current algebra, ...
- it has the "correct" low-energy effective action with the well-known pions as fundamental degrees of freedom
But it simply does not allow you to derive QCD, especially not the color degrees of freedom = SU(3) and its Hamiltonian.
Starting with QCD there are arguments for how to get to chiral perturbation theory, heavy baryons, etc., but even starting with QCD, integrating out degrees of freedom and deriving the effective theories mathematically has not been achieved so far.

So my claim is that the identification of the fundamental degrees of freedom requires some additional physical insight (in that case the color gauge symmetry) which cannot be derived from an effective theory.

Maybe we are in the same situation with QG: we know the IR regime pretty well, we have some candidate theories (or only effective theories) in the UV respecting some relevant principles, but we still do not "know" the fundamental degrees of freedom: the effects corresponding to deep inelastic scattering for spin networks (or something else) are missing!
 
  • #24


tom.stoer said:
So my claim is that the identification of the fundamental degrees of freedom requires some additional physical insight

I think so too.

This is why I personally think it's good to try to think outside of the current frameworks and even go back to analyze our reasoning and what our scientific method looks like. Because when you phrase questions within a given framework, sometimes some possible answers are excluded.

This is why I have personally put a lot of focus on the scientific inquiry process and measurement processes. The problem of scientific induction and the measurement problem have common traits. We must also question our own questions. A question isn't simply: here is the question, what is the answer? Sometimes it's equally interesting, and a good part of the answer, to ask why we are asking this particular question. What is the origin of questions? And how do our questions influence our behaviour and actions?

The same questions appear in physics, if you wonder: what are matter and space, and why is the action of matter and spacetime this or that? If a material object contains "information" about its own environment, what does the learning process of this matter in its environment look like? And how is that fundamentally different (if at all) from human scientific processes?

/Fredrik
 
  • #25


tom.stoer said:
I am sure that all those approaches are limited in the sense that deriving microscopic degrees of freedom from macroscopic effective theories simply does not work!

But why do you presume this? I am expecting the exact opposite to be the case from my background in systems science approaches. Or even just condensed matter physics.

In a systems approach, local degrees of freedom would be created by global constraints. The system has downward causality. It exerts a stabilising pressure on all its locations. It suppresses all local interactions that it can observe - that is, it equilibrates micro-differences to create a prevailing macrostate. But then - the important point - anything that the global state cannot suppress is free to occur. Indeed it must occur. The unsuppressed action becomes the local degree of freedom.

A crude analogy is an engine piston. An explosion of gas sends metal flying. But the constraint exerted by the engine cylinder forces all action to take place in one direction.

When all else has been prevented, then that which remains is what occurs. And the more that gets prevented, the more meaningful and definite becomes any remaining action, the more fundamental in the sense of being a degree of freedom that the system can not eradicate.

This is the logic of decoherence. Or sum over histories. By dissipating all the possible paths, all the many superpositions, the universe is then left with exactly the bit it could not average away.

Mostly the universe is a good dissipator of QM potential. Mostly the universe is cold, flat and empty.
 
  • #26


Maybe it's a matter of interpretation here, but Tom asked for physical insight. I guess this can come in different forms. I don't think we need "insights" like "matter must be built from strings". In that respect I'm with Apeiron.

However, there is also the kind of insight that refers to the process of inference. In that sense, I wouldn't say we can "derive" the microscopic domain from the macro one in the deductive sense. But we can induce a guess that we can thrive on. By assuming there is a universal deductive rule from macro to micro, I think we make a mistake. But there might still be a rational path of inference, which just happens to be key to the GAME that comes with the emergent stability we do observe.

I think the missing physical insight should be exactly how this process works, rather than coming up with a microstructure out of the blue. Conceptually might be the first step, to gain intuition, but then also mathematically: what kind of mathematical formalism is best used to describe this?

I think this problem then becomes inseparable from the general problem of scientific induction.
- What is science, and what is knowledge, what is a scientific process?
- What is physics, what are physical states, and what are physical processes?

Replace the labels and the problems are strikingly similar.

/Fredrik
 
  • #27


Would a soliton count as a physical insight then? Standing waves are exactly the kind of thing that motivate the view I'm taking here.

Very interesting that you say we cannot deduce from macro but may induce. This is also a systems science point. Peirce expanded on it very fruitfully.

The reason is that both the macro and the micro must emerge together in interaction. So neither can be "deduced" from the other (although retrospectively we can see how they each came to entail the other via a process of induction, or rather abduction).
 
  • #28


Again Apeiron, I think we are reasonably close in our views.

apeiron said:
Would a soliton count as a physical insight then? Standing waves are exactly the kind of thing that motivate the view I'm taking here.

Yes, loosely something like that. The coherence of a system is self-stabilised.

But the remaining questions are:

a) waves of what, in what? And what quantitative predictions and formalism does this suggest? The standing wave, mathematically, is just a function over some space, or index.

b) what is the logic of the emergent actions that yield this soliton-like stuff? And how can these rudimentary and abstract ideas be used to reconstruct normal physics - spacetime and matter?

I'm struggling with this, but I sure don't have any ready answers.

Before we can distinguish a wave, we must distinguish the index(or space) where the wave exists, but we also need to distinguish the state of the wave.

Maybe once we agree loosely on the direction here - Peircean-style stuff - what remains is partly a creative technical challenge: to find the technical framework that realizes this intuitive vision.

So far I'm working on a reconstruction of information models, where there are no statistical ensembles or prior continuum probabilities at all. I guess the reconstruction corresponds to how an inside observer would describe the universe's origin. Indexes, which are the discrete prototype of continuum spaces, are emergent distinguishable states of the observer's coherent degrees of freedom. These coherent degrees of freedom "just are"; the observer doesn't know where they came from. However, the only logic for seeing the origin is to understand how these degrees of freedom can grow and shrink. And by starting with the simplest possible observer, ponder how he can choose to play a game that makes him grow. Generation of mass, and thus mass of indexes, might generate space. The exploit is that the action possible for a simple observer is similarly simple :) How many ways can you combine 3 states, for example? So I am aiming for a discrete formalism. In the limit of the system's complexity growing, there will be an "effective continuum model". But the only way, I think, to UNDERSTAND the continuum model is to understand how it emerges from the discrete limit.

But it's still a massive problem. This is why I'm always curious to read up on what others are doing. I want a mathematical formalism, implemented as per these guidelines. I haven't found it yet.

/Fredrik
 
  • #29


So is this all about somehow making QG renormalizable?

In my opinion, QED renormalizability plays a bad role - it creates the impression that renormalizations are a "good solution" to mathematical and conceptual difficulties. "Cowboy attacks" on managing the divergences in QG fail, and all these strings and superstrings, loops, and dimension reductions are just attempts to get something meaningful, at least mathematically.

At the same time, there is another approach that contains really natural (physical) regularizers or cutoffs and thus is free from divergences. I would like to read your opinions, if any, in my "Independent research" thread (not here).
 
  • #30


Marcus, great discussion...thanks for the post..

a lot of this is new to me so I'm still puzzling over some basics.

I had a similar thought, I think, as Apeiron, who posted:

And the seed regrows the spacetime from which it was derived. So no real surprise there?

whereas I am puzzling (via the Loll Scientific American article Marcus referenced):

Mix a batch of four-dimensional simplices glued together in a computer model with external independent inputs of time arrows (CDT), plus a cosmological constant, plus a QM foam tamed via CDT, and the result is a four-dimensional de Sitter shape...

so I keep wondering: how close is this model to anything resembling quantum conditions? Is this a huge advance or a really tiny baby step, barely a beginning?

Yes, similar examples of self-organization and self-assembly exist, the authors note, but those have an already established space, time, cosmological constant, etc. in existence as background before they initiate... why would we suppose quantum emergence has all those present, when the authors say unfettered quantum foam typically results in crumpled-up dimensions?

One answer could be that "everything that is not prohibited is required" but is that all we have here??

PS: Just got to love the idea of fractal dimensions at sub Planck scales..fascinating!
 
  • #31


Naty1 said:
...
Mix a batch of four-dimensional simplices glued together in a computer model with external independent inputs of time arrows (CDT), plus a cosmological constant, plus a QM foam tamed via CDT, and the result is a four-dimensional de Sitter shape...

Naty, I was glad to see your reaction. Mine is similar in a good many respects. Your short list of ingredients doesn't mention one: the idea of then letting the size of the simplices go to zero. (I think that's understood in your post, but it can still use a mention.)

It is like a Feynman path integral where you might only average over the piecewise-linear paths, the polygonal paths made of short linear segments.
Then you let the lengths of the segments all go to zero.

There is a kind of practical leap of faith there (even in the original Feynman path integral), because the segmented paths are admittedly a very small subset of the set of all paths. You have to trust that they are sufficiently representative, like a "skeleton crew" of the whole set, because the whole set of paths is too big to average over. The set of segmented paths is small and simple enough that you can put a probability measure, or amplitudes, or whatever you need, on it.

And then you trust that when you let the segment size go to zero, the skeleton average will come close to the whole average, which you can't define and compute with so well mathematically.
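Here is a one-dimensional toy version of that leap of faith (my own sketch, an ordinary quantum-mechanics example, nothing CDT-specific): average over piecewise-linear paths of a Euclidean harmonic oscillator on a time lattice, and watch the answer settle toward the known continuum value as the segments shrink.

```python
# Skeleton path integral for the Euclidean harmonic oscillator (m = omega =
# hbar = 1).  The exact ground-state <x^2> is 0.5; the Metropolis estimate on
# the time lattice should drift toward it as the segment length a = beta/N
# gets smaller.
import numpy as np

rng = np.random.default_rng(0)

def lattice_x2(beta=10.0, n_segments=16, sweeps=4000, step=0.8):
    """Estimate <x^2> for the discretized action
    S = sum_i [ (x_{i+1}-x_i)^2 / (2a) + a * x_i^2 / 2 ]  (periodic time)."""
    a = beta / n_segments
    x = np.zeros(n_segments)
    samples = []
    for sweep in range(sweeps):
        for i in range(n_segments):
            ip, im = (i + 1) % n_segments, (i - 1) % n_segments
            x_new = x[i] + step * rng.uniform(-1, 1)
            dS = ((x[ip] - x_new) ** 2 + (x_new - x[im]) ** 2
                  - (x[ip] - x[i]) ** 2 - (x[i] - x[im]) ** 2) / (2 * a) \
                 + a * (x_new ** 2 - x[i] ** 2) / 2
            if dS < 0 or rng.uniform() < np.exp(-dS):
                x[i] = x_new
        if sweep > sweeps // 5:          # drop thermalization sweeps
            samples.append(np.mean(x ** 2))
    return np.mean(samples)

for n in (8, 16, 32, 64):                # finer and finer segmentation
    print(f"{n:3d} segments: <x^2> ~ {lattice_x2(n_segments=n):.3f}   (exact 0.500)")
```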

With Loll's method it doesn't matter very much what shape of blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using blocks of other shapes.

So they are not fantasizing that space is "made" of simplices, or that there is any "minimal length" present in fundamental geometric reality, whatever that is (there may be limits on what you can measure, but presumably that's a different matter).

This is all kind of implied in what you and others were saying, so I am being a bit tiresome in spelling it out. But it seems to me that philosophically the idea is kind of elusive. It doesn't say that the fundamental degrees of freedom are nailed down to be a certain kind of Lego block. It says that a swarm of shrinking-to-zero Lego blocks provides a good description of the dynamic geometry, a good skeleton path integral, that captures some important features of how space might work at small scale.

And amazingly enough, de Sitter space does emerge out of it, just as one would want, in the matterless case. The de Sitter model is what our own universe's geometry is tending towards as matter thins out and nothing is left but dark energy. And it is also how you represent inflation (just a different dark energy field, but also no ordinary matter).

No amount of talk can conceal that we are not saying what space is made of. We are trying to understand a process. Spacetime is a process by which one state of geometry evolves into another state of geometry. We want a propagator that gives transition amplitudes.

There may, under high magnification, be no wires, no cogwheels, no vibrating gidgets; there may only be a process that controls how geometry evolves. It feels to me a little like slamming into a brick wall, philosophically. What if that is all there is?
 
  • #32


apeiron said:
... I am expecting the exact opposite to be the case from my background in systems science approaches. ...

In a systems approach, local degrees of freedom would be created by global constraints.

I don't want to add too many explanations; I simply would like to stress the example of chiral perturbation theory: there is no way to derive QCD (with quarks and gluons as fundamental degrees of freedom) from chiral perturbation theory (the effective low-energy theory of pions). You need additional physical insight and new principles (here: color gauge theory, and physical effects like deep inelastic scattering) to conclude that QCD is the right way to go. It's more than doing calculations.

I think with QG we may be in the same situation. We know GR as an IR effective theory and we are now searching for more fundamental entities. Unfortunately, quantizing GR directly, without additional physical ingredients, is not the right way to go. New principles or entities (strings? loops? spin networks? holography?) that are not present in GR are required.

Fra is looking for something even more radical.
 
  • #33


fleem said:
It is a grave mistake to think of quantum mechanical events as occurring "in space-time". Rather, it is those events that define space-time, and there is no space-time where there are no events.

It is consistent with my understanding. As I showed in one of my publications, the "classical" phenomena are the inclusive QM pictures (with many, many events summed up).
 
  • #34


Marcus posts:
With Loll's method it doesn't matter very much what shape blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using other block shapes.

Glad you mentioned that; I meant to ask and forgot, and was unaware of other shapes... that's a good indicator. Another thing I forgot to post is that I believe somewhere the authors said their results were not very sensitive to parameter changes, and I LIKED that, if my recollection is correct. Fine tuning in a situation like this just does not seem right unless we know the process by which nature does it.

Also, glad you posted:
It doesn't say that the fundamental degrees of freedom are nailed down to be a certain kind of lego block. ...No amount of talk can conceal that we are not saying what space is made of. We are trying to understand a process.

that helps clarify my own understanding... I know from other forms of computer analysis and modeling that if you don't understand the inputs, the underlying logic of the processing, and the sensitivity of the outputs to input changes, understanding the outputs is almost hopeless.
 
Last edited:
  • #35


marcus said:
With Loll's method it doesn't matter very much what shape blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using other block shapes.

So they are not fantasizing that space is "made" of simplices, or that there is any "minimal length" present in fundamental geometric reality, whatever that is. (There may be limits on what you can measure, but presumably that's a different matter.)

The significance of the triangles would be, I presume, that they give an easy way to average over local curvatures. The angles of a triangle add to pi in flat space, to less than pi in hyperbolic space, and to more than pi in hyperspherical space.

If the geometry is actually shrunk to a point, the curvature becomes "invisible" - it could be anything. The only clue would be what it looked like before the shrinking took place. So you might be able to use any block shape. But it is pi as a measure of curvature which is the essential bit of maths here?

And thus what they are "fantasising" perhaps is that spacetime is made of an average over curvatures. At least I hope so because that is the core idea I like.

Curvature is "good" as it is a physical intuition that embodies both dimension (length/direction) and energy (an action, acceleration, tension, distinction). So flatness is cold. Curved is hot. This is pretty much the language GR speaks, right?

Again, making the parallel with Ising models, you could say that at the GR scale, spacetime curvature is all smoothly connected, like a magnetic field. Every point has a curvature, and all the curvatures are aligned, to make a closed surface (with now we discover, a slight cosmological constant positive curvature).

But as we descend to a QM scale, fluctuations break up the smooth closed curvature. Curvature becomes increasingly unoriented. It does not point towards the rest of the world but off out into some more random direction. Smooth and coherent dimensionality breaks up into a foam of "2D" dimensional impulses. Like heated dipoles spinning freely, having at best fleeting (fractal/chaotic) alignments with nearest neighbours.

Anyway, the CDT story is not really about shrinking triangles but about doing quantum averages over flexi GR curvatures?
 
  • #36


Re: the Scientific American article:
What happened to mass in their output? Were Loll and collaborators disappointed none popped out? Or did I miss something?

Sounds like their inputs resulted in only spacetime and gravity. I wonder what they would put into their model to get some emergent mass out of it? And would that suggest the origins of mass? Because only de Sitter spacetime was an output, could this model suggest that the overlay of spontaneous symmetry breaking, the Higgs mechanism (an ad hoc add-on to the standard model), really is an inappropriate "plug-in" by theorists?
 
Last edited:
  • #37


apeiron said:
The significance of the triangles would be, I presume, that they give an easy way to average over local curvatures. The angles of a triangle add to pi in flat space, to less than pi in hyperbolic space, and to more than pi in hyperspherical space.
...

You give the basic insight: a combinatorial or "counting" grasp of the basic feature of geometry (curvature). It probably goes back to Tullio Regge, who in 1960 showed how to do "General Relativity without coordinates".

If the action is simply to be the integral of curvature, and if all the triangles are identical and one finds the curvature at a point by counting the number that meet there, then to find the average one needs only compare the number of triangles with the number of points.

Now to extend this idea up one dimension to D = 3 one will be looking at the curvature around a D = 1 edge, and one will count the number of tetrahedra that meet around that edge. (The edge is sometimes called the "bone" or the "hinge" in this context. The curvature lives on the bone.)

So the overall average can be found, in the D = 3 case, simply by comparing the total number of 3-simps with the total number of 1-simps. If there are fewer 3-simps than you think there should be, over all, then it is the positive curve "hyperspherical" case, as you said earlier.

Since you know Greek (as a fan of Anax. of Milet. and his primal unbounded indefiniteness, apeiron) you may know that the analog of a tetrahedron is a pentachoron. Loll has sometimes used this term for the 4-simplex block that builds spacetime. Hedron is a flat side and choron is a 3D "room". A 3-simplex is bounded by 4 flat sides (hedra) and a 4-simplex is bounded by 5 rooms (chora). I think pentachoron is a nice word and a good brother to the tetrahedron.

Anyway, in the D = 4 case the D - 2 = 2 simplices are the "bones" or the "hinges" around which curvature is measured. One counts how many pentachors join around one triangle.

And to get the integral, for the Einstein-Hilbert-Regge action, one just has to count the total number of pentachors and compare to the total number of triangles. If there are not as many pentachors as you expected, then some positive curvature must have crept into the geometry. Seeped in, infiltrated.
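
Just to show how little machinery that counting needs, here is a toy version (mine, not taken from any CDT paper) of the simplest D = 2 case with identical equilateral triangles: the deficit angle at a vertex is 2*pi minus pi/3 times the number of triangles meeting there, and the total curvature is found by nothing more than counting.

```python
# Regge-style "curvature by counting" in D = 2 with identical equilateral triangles.
import math

def total_deficit(triangles_per_vertex):
    """Sum of deficit angles; the list gives how many triangles meet at each vertex."""
    return sum(2 * math.pi - n * math.pi / 3 for n in triangles_per_vertex)

# Surface of a tetrahedron: 4 triangles, 4 vertices, 3 triangles at each vertex.
print(total_deficit([3, 3, 3, 3]) / math.pi)   # -> 4.0, i.e. total curvature 4*pi (a 2-sphere)

# Icosahedron: 20 triangles, 12 vertices, 5 triangles at each vertex.
print(total_deficit([5] * 12) / math.pi)       # -> 4.0 again, as Gauss-Bonnet requires

# A flat patch: 6 equilateral triangles around a vertex give zero deficit.
print(2 * math.pi - 6 * math.pi / 3)           # -> 0.0
```

Fewer triangles meeting at the vertices than the flat value of six means positive curvature has crept in, exactly the "compare the number of triangles with the number of points" bookkeeping above.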

It is nice to be able to do geometry with simple counting and without coordinates because, as Tullio Regge realized, in Einstein's 1915 approach you make extra work. First you have to set up coordinates. Then when you are done you have to get rid of them! By diffeomorphism invariance, or general covariance, any two solutions which are the same by a diffeo are "equivalent" and represent the same physical reality. So you have to take "equivalence classes". The real physics is what is left after you squeeze out the huge redundancy which has been introduced by using coordinates.

Coordinates are "gauge" meaning physically meaningless redundant trash and Regge found a shortcut way to avoid using coordinates and still calculate overall curvature and conduct business and have a dynamic geometry.

He shares the same name as Cicero, the essay writer. Italians still pay respect to Cicero, apparently, by naming children after him.

So in the general D dimension case, the D-2 simplices are the "bones" and the curvature is hanging on the bones or riding on the bones or is imagined to be concentrated on the bones. And you count how many D-simplices meet at a particular D-2 simplex.

With Loll's approach the simplices are almost but not quite equilateral, there is an extra parameter that can elongate simplices in the time direction. But to begin understanding, it is good to imagine equilateral simplices. All the same size.

I have been enjoying everyone's posts, which are full of ideas. Right now I have no ideas myself about why there is this curious coincidence among the Loll, Reuter, Horava, Modesto, etc. methods. Can it be an elaborate practical joke or artifact, or can it actually be something that nature has been secretly saving to surprise us with, when we are ready? I think it was not something that Loll and Reuter were originally looking for. It just turned up, like a "who ordered this?" We'll see.
 
Last edited:
  • #38


Naty1 said:
Re: the Scientific American article:
What happened to mass in their output? ...

Take any opinion of mine about this with a grain of salt, Naty. I'm a non-expert non-authority. I think Loll's approach is a springboard to something better. I don't think she CAN introduce matter in a satisfactory way. But it is a beautiful springboard---it has inspired and will continue to inspire the development of ideas.

I don't know. Maybe you should not merely glue the blocks together, maybe you should twist them as you are gluing them. :eek:
or maybe you should allow a few of them to be glued to others which they are not adjacent to:bugeye:
or paint the building blocks different colors:zzz:
Nature is teasing and playing with us. The Loll approach is like a smile she flashed at us, but you don't know yet what that particular smile means. Yes. Somehow matter must be included in geometry.
 
  • #39


Hey, so coming back to the question of how this reduction from d=4 to d=2 is achieved, I'd like to give a heuristic argument from an RG/particle-physics point of view. It's not such a geometric point of view, but possibly it can give insights into the geometric approaches.

So we consider a single particle whose gravitational field we measure from a far distance. As such the force law is 1/r^2, and we can conclude that d=4 and Newton's constant is G = 6.673(10) x 10^-11 m^3 kg^-1 s^-2.

Now imagine we get closer to the particle, so that we are now measuring its field on a smaller scale. As such our certainty of the particle's position is increased; (delta)x is smaller. So, from the uncertainty principle, we are less certain of its momentum. At a small enough scale this uncertainty can mean that the number of particles also becomes uncertain. Hence on such scales we may "see" more than one particle (vacuum fluctuations, if you like).

But we must remember something important: we are still looking at the same physical system that was just one particle. As such, the force we measure from these multiple particles must be the same force measured from the single one at a large distance. Now if we measure the field from only one particle of this ensemble on a small scale, we must find that the field strength is not as large as we expected. In this way we say gravity is "anti-screening"; it gets weaker on smaller scales as we take quantum fluctuations into account.

In an RG setting we would then let Newton's constant run to account for this. The strength of gravity from a single particle is G(r)/r^2, but now G(r) is not constant. If on small scales, as r --> 0, G(r)/r^2 --> infinity, we would say that the theory breaks down in the UV and QG is non-renormalisable. If however G(r) ~ r^2 we find something quite different: we find that the field is constant!

Now how is this related to d=2? Well, it's just Gauss's law: d=4 implies 1/r^2 behavior, d=3 implies 1/r behavior, and d=2 implies 1/r^0 = constant behavior.

It's best to visualize this as field lines coming out of the particle. On large scales these field lines appear to spread out over a sphere, but as we go to smaller scales the field lines seem to be spreading out over a circle rather than a sphere (or better, some fractal that has a spatial dimension less than 3). And as we get to yet smaller scales the field lines seem not to spread out over any surface at all; instead there is just a single field line. At this scale you might want to "look around" a bit and measure the field from other particles. Indeed you see this same d=2-like behavior, but now you're "looking" in another direction. Thus you could conclude that spacetime is some kind of 2-dimensional foam whose form is dictated by the distribution of particles you see around you (quantum gravity?!?).
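
As a toy check of the heuristic (the interpolating form of G(r) below is just an illustrative guess of mine, not an actual RG result), one can take a Newton constant that switches off like r^2 below some crossover scale and read an effective spacetime dimension off the local power law of the force, using F ~ 1/r^(d-2):

```python
# Toy running Newton constant and the effective dimension read off from Gauss's law.
import numpy as np

G0, l = 1.0, 1.0                      # IR Newton constant and a crossover scale (arbitrary units)
r = np.logspace(-3, 3, 601)           # distances from far below to far above the crossover

G = G0 * r**2 / (r**2 + l**2)         # assumed interpolation: G ~ r^2 in the UV, G -> G0 in the IR
F = G / r**2                          # field strength of a single particle

# If F ~ r^-(d-2), the effective spacetime dimension is d = 2 - dlnF/dlnr.
slope = np.gradient(np.log(F), np.log(r))
d_eff = 2.0 - slope

for rv in (1e-3, 1.0, 1e3):
    i = np.argmin(np.abs(r - rv))
    print(f"r = {rv:6.0e}   d_eff = {d_eff[i]:.2f}")   # ~2 deep in the UV, ~4 far in the IR
```

Of course the 2 and the 4 come out only because the toy G(r) was built to interpolate that way; the non-trivial claim in the RG approaches is that the flow itself produces G ~ r^2 near the UV fixed point.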
 
  • #40


Just one question poppin' in: I wonder if holography was ever considered to be connected to the dimensional reduction of the approaches discussed here. Of course, holography also goes along with e.g. strings, which is an entirely different direction, especially concerning spacetime dimensionality. Plus, my understanding of holography is that it is a reduction from D=3+1 to D=2+1, rather than to D=1+1 as in CDT and the others. So I don't know if this even makes sense. Still, the concept of holography and the phenomenon of dimensional reduction have in common that the description of physics departs from 4 dimensions towards a lower number. So this is just something I wondered, being not very knowledgeable.
 
  • #41


Finbar said:
It's best to visualize this as field lines coming out of the particle. On large scales these field lines appear to spread out over a sphere, but as we go to smaller scales the field lines seem to be spreading out over a circle rather than a sphere (or better, some fractal that has a spatial dimension less than 3). And as we get to yet smaller scales the field lines seem not to spread out over any surface at all; instead there is just a single field line. At this scale you might want to "look around" a bit and measure the field from other particles. Indeed you see this same d=2-like behavior, but now you're "looking" in another direction. Thus you could conclude that spacetime is some kind of 2-dimensional foam whose form is dictated by the distribution of particles you see around you (quantum gravity?!?).

Thanks for this example. It sounds very much like the intuitive argument I was making - with a better technical grounding of course.

But that still leaves me wanting to know how the result pops out of the particular math simulation run by Loll and co. I can't follow the machinery of the maths. But I just presume that the people who use the maths can easily see why some feature should emerge.

I really don't know what to make of a situation where researchers - and informed commentators - seem to be saying we run the equations we concocted and out pops this crazy result. It's a kind of magic. We can't explain why.
 
  • #42


marcus said:
So in the general D dimension case, the D-2 simplices are the "bones" and the curvature is hanging on the bones or riding on the bones or is imagined to be concentrated on the bones. And you count how many D-simplices meet at a particular D-2 simplex.

Thanks Marcus. It is very useful to have the Regge approach explained so well. I can have another go at seeing if I can track the logic of CDT.
 
  • #43


marcus said:
Nature is teasing and playing with us. The Loll approach is like a smile she flashed at us, but you don't know yet what that particular smile means. Yes. Somehow matter must be included in geometry.

Suppose CDT actually indicates Asymptotic Safety, and suppose AS actually works; then wouldn't it be straightforward to include matter by just adding, e.g., the SM Lagrangian? Except I guess that electroweak theory doesn't have a continuum limit, so the theory would still not have a continuum limit, even though such a limit exists for gravity?

If CDT actually indicates Horava, with its fixed non-relativistic background, then could one use, say, Wen's way of getting relativistic QED and QCD to emerge from non-relativistic models (he doesn't know how to do chiral interactions)?
 
  • #44


Atyy, these are interesting ideas you are proposing, and I see how you are following out these trains of thought. I agree with a lot of the general tenor of what you are saying; it could turn out to be easy to include matter once one has an adequate quantum theory of geometry, or it might not, speculation either way.

Instead of disputing, Atyy, I just want to outline my attitude in contrast. I don't think any of these approaches implies the other. I think they have family resemblances. Pairs of them share some common features and characteristics.

But there are no mother-daughter pairs. No one is derived from any other. Or so I think.

And I see no reason to suppose that any of them will turn out to be "right" in the sense of being a final description of nature. That is not what we ask of them. What we want is progress towards a quantum geometry that gives General Rel at large scale, and that you can eventually predict astronomical observations with, put matter into, and test observationally with CMB / gamma-ray bursts / collapse events / cosmic-ray data and all that good stuff. And calculations could be done using several different ones of these approaches. I do not care which one morphs into an eventual "final theory", if any does. I want to see progress with whatever works.

And fortunately we do see progress, and we see new researchers coming in, and new funding getting pumped into Loop and allied research (CDT, AS, ... as you mentioned.)
That certainly includes the condensed-matter-inspired lines of research and the innovative stuff you have mentioned in other threads. It's a good time to be in non-string QG. I can't keep track of the broad field and give an accurate overview, there is so much going on. But anyway that's my attitude---pragmatic, incremental, not thinking ahead to ultimate conclusions.

Once there is a background independent quantum theory of geometry---that is in other words of the gravitational field---which is what matter fields live on---then the theory of matter will need to be completely rebuilt, I suppose. Because the old idea of space on which QFT was built would then, I imagine, be history.:biggrin:
 
  • #45


marcus said:
Maybe you should not merely glue the blocks together, maybe you should twist them as you are gluing them. :eek:

But isn't this a reasonably mainstream approach to inserting mass into the spacetime picture?

Knots, solitons, gauge symmetries, etc. You have a web of relations drawing itself flat - a self-organising GR fabric, the vacuum. And then knots or kinks get caught in the fabric as it cools.

In this thread and others, there is a lot of concern about how mass can be added to the picture. But it seems rather that the theory would be a model of the vacuum, and then secondary theories would handle mass as knots in the fabric.
 
  • #46


apeiron said:
...
I really don't know what to make of a situation where researchers - and informed commentators - seem to be saying we run the equations we concocted and out pops this crazy result. It's a kind of magic. We can't explain why.

I would agree that this is unsatisfactory. Perhaps the simplified picture here (partly my fault) makes the situation seem worse than it really is. I think Renate Loll could explain clearly to you why dimensional reduction happens in her approach (triangulations QG), but she might not wish to explain why it occurs in Reuter's approach (asymptotic safety QG) or in Horava's... Perhaps each can explain why this happens in his/her own form of QG, but cannot explain why it happens in the others'.

Dario Benedetti has attempted a more abstract explanation of dimensional reduction. So far there is only one paper on this, treating two toy-model cases. He is a Loll PhD (2007) who then went as a postdoc to Perimeter. He has worked both in Triangulations with Loll and in Asymptotic Safety with Saueressig (a coauthor of Reuter's). He is about as close to both approaches as anyone---having done research in both lines. He has published extensively. I don't understand this one paper of his about dimensional reduction. Maybe you can get something from it.
http://arxiv.org/abs/0811.1396
Fractal properties of quantum spacetime
Dario Benedetti
(Submitted on 10 Nov 2008)
"We show that in general a spacetime having a quantum group symmetry has also a scale dependent fractal dimension which deviates from its classical value at short scales, a phenomenon that resembles what is observed in some approaches to quantum gravity. In particular we analyze the cases of a quantum sphere and of kappa-Minkowski, the latter being relevant in the context of quantum gravity."
4 pages, 2 figures Phys.Rev.Lett.102:111303,2009
 
Last edited:
  • #47


I think this is an interesting discussion; it's good to see brief input and reflections from different directions.

apeiron said:
In this thread and others, there is a lot of concern about how mass can be added to the picture. But it seems rather that the theory would be a model of the vacuum, and then secondary theories would handle mass as knots in the fabric.

The conceptual point I have tried to suggest in past posts is that, in my mind, you cannot have a "picture" at all without mass! It just doesn't make sense. By the same token you need a brain to have an opinion, or you actually need a physical memory record to compute statistics.

This is tangent to the holographic picture, where you do not just need a screen/communication channel, you also need a sink/source behind the screen, and this is eventually saturated - then this unavoidably backreacts on the screen itself - it must change.

I.e., from an informational point of view, each piece of information has a kind of count, or requires information capacity. This also suggests that, if you infer an action from a state, the action also sort of has mass/complexity, and we get that actions acquire an inertia, which explains stability. It's the same simple logic as: if you have a long history of statistics, no single data point of any kind can flip you off the chart.

This physical basis of information is what I miss in most approaches. Some ideas are very good, but these things are still lacking.

Olaf Dreyer's idea is that the inside view implies that any measurements must be made with inside sticks and rods, but what I can't fully read out of this reasoning is whether he also acknowledges that we're constrained to inside memory records to store time-history data, and even to be able to distinguish time and dimensions.

It seems, just from that picture, that as the mass/complexity of the inside observer shrinks, there is an overhead in the "index structure" (space) that becomes more and more unstable, and it will eventually lose it. Pretty much like a phase transition.

Dreyer also pictures the origin of the universe as a phase transition where the observers live in the new, ordered phase. But the observers must emerge during the transition. If you then take the observer to be material, it's the simultaneous emergence of space and matter. But that still seems to make use of an external reference to the prior phase. I do not understand all his reasoning; it still seems there are several major conjectures along the way.

So to me, even if you consider in absurdum "pure gravity" or empty space, this very PICTURE *implies* a complex CONTEXT. No context - no picture _at all_. This is also why even the VOID has inertia (cosmological constant): because the context defining the void (i.e. the boundaries or communication channels) must have a sink/source.

Unless, of course, you think it's OK to attach all this to a mathematical reality that never needs justification.

/Fredrik
 
  • #48


By now some of us have had a chance to read Carlip's slides and see what exactly it is that HE has to say.
There are only 12 slides.
It is not all about dimensional reduction. That is one of the "hints".
He mentions several hints or clues:
==quote==
Accumulating bits of evidence
that quantum gravity simplifies at short distances
• Causal dynamical triangulations
• Exact renormalization group/asymptotic safety
• Loop quantum gravity area spectrum
• Anisotropic scaling models (Horava)
Are these hints telling us something important?
==endquote==

and he digs into classical Gen Rel---solutions like Kasner and Mixmaster---to see if there are behaviors that arise classically (things about lightcones and geodesics) that could relate to behavior revealed by the various quantum geometry approaches.

It could help to know something of Carlip's past research and present interests:
http://www.physics.ucdavis.edu/Text/Carlip.html
http://particle.physics.ucdavis.edu/hefti/members/doku.php?id=carlip

The "Planck Scale" conference organizers say that video of the lectures will be put online:
http://www.ift.uni.wroc.pl/~planckscale/index.html?page=home
Carlip's talk is one that I especially want to watch.

In case anyone missed it when we gave the link at first, here are Carlip's slides:
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/1-Carlip.pdf
 
Last edited:
  • #49


If I remember Weinberg's talk, he says the latest AS work suggests d=3, but CDT and the other stuff have d=2. I'm not sure, though, whether the "d" in the AS work is a spectral dimension; the CDT and Horava d's are spectral dimensions. I think Benedetti had d=3 in some cases in http://arxiv.org/abs/0811.1396 - again, not sure if all the d's are defined the same way. :confused:
 
Last edited:
  • #50


atyy said:
If I remember Weinberg's talk, he says the latest AS work suggests d=3, but CDT and the other stuff have d=2. I'm not sure, though, whether the "d" in the AS work is a spectral dimension; the CDT and Horava d's are spectral dimensions. I think Benedetti had d=3 in some cases in http://arxiv.org/abs/0811.1396 - again, not sure if all the d's are defined the same way. :confused:

Let's verify this! I remember it differently---that both AS and CDT agree---but we should check.
BTW Weinberg has something on arxiv about AS, which I posted link to here:
https://www.physicsforums.com/showthread.php?p=2272121#post2272121
There are references to both CDT and AS papers. It doesn't answer your question though.

I think if Weinberg said that in AS dimension -> 3 at small scale he was probably simply mistaken, because I've always heard that it -> 2 in both AS and CDT. I will have to listen to his talk again (the last 10 minutes) to be sure just what he said.

I can't say about Benedetti and about Modesto, there could have been some differences, with only partial similarity. But I have the strong impression that AS and CDT results are consistent. We'll check, I could be wrong.

BTW Carlip is kind of an expert. Look at his slides #4 and #5. He says AS and CDT agree on the spectral dimension being around 2 at small scale. That's how I remember it too.
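
For anyone wondering what "spectral dimension" means operationally: you let a random walker diffuse on the geometry and watch how the probability of returning to the start falls off with diffusion time, P(t) ~ t^(-d_s/2), so d_s = -2 dlnP/dlnt. Here is a tiny self-contained sketch (mine; it runs on an ordinary fixed d-dimensional lattice, so it just recovers the classical value d_s = d; the QG calculations do the same kind of diffusion on a fluctuating geometry and find d_s drifting toward 2 at short diffusion times):

```python
# Spectral dimension from the return probability of a random walk, P(t) ~ t^(-d_s/2).
import math

def log_return_probability(d, t):
    """ln P(t) for a walk whose d coordinates each hop +/-1 per step (t must be even)."""
    n = t // 2
    ln_p1 = math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - 2 * n * math.log(2)
    return d * ln_p1                          # the d coordinates are independent 1D walks

def spectral_dimension(d, t):
    """Finite-difference estimate of d_s = -2 dlnP/dlnt around diffusion time t."""
    dlnP = log_return_probability(d, t + 2) - log_return_probability(d, t)
    dlnt = math.log(t + 2) - math.log(t)
    return -2 * dlnP / dlnt

for d in (2, 3, 4):
    print(f"lattice dimension {d}:  estimated d_s = {spectral_dimension(d, 2000):.3f}")
```

On a fixed lattice nothing interesting happens, which is the point: the CDT/AS/Horava results apply the same diagnostic to quantum spacetime and get roughly 4 at large diffusion times and roughly 2 at small ones.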
 
Last edited: