
Steve Carlip on dimensional reduction (Loll, Reuter, Horava; small-scale fractality)

  1. Jul 5, 2009 #1

    marcus

    Science Advisor
    Gold Member
    Dearly Missed

    http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/1-Carlip.pdf

Steve Carlip gave the first talk of a weeklong conference on Planck scale physics that just ended July 4 (yesterday). The PDF of his slides is online.

We here at PF have been discussing, off and on since 2005, the odd coincidence that several very different approaches to quantizing GR give a spacetime of less than 4D at small scale. As you zoom in and measure things like areas and volumes at smaller and smaller scale, you find in these various quantized-geometry models that the geometry behaves as if it had a fractional dimension less than 4, going continuously down through 3.9, 3.8, 3.7... and finally approaching 2D.

Dimensionality does not have to be a whole number, like exactly 2 or exactly 3. There are easy ways to measure the dimensionality of whatever space you are in---like comparing radius with volume to see how the volume grows, or running a random-walk diffusion and seeing how fast the diffusion happens. These operational ways of measuring dimension can give non-integer answers, and there are many well-known examples of spaces you can construct that have non-whole-number dimension. Renate Loll had a SciAm article about this, with nice illustrations, saying why this could be how it works at Planck scale. The link is in my signature if anyone wants it.
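To make the "easy ways to measure dimension" concrete, here is a toy sketch in Python (mine, not anything from the papers): count how many points of a set lie within radius r of a chosen center and fit the exponent d in N(r) ~ r^d. A filled square gives back d = 2; a fractal like the Sierpinski gasket gives the non-whole-number log 3 / log 2 = 1.585.

[code]
# Toy sketch (mine, not from any of the papers): estimate the dimension of a
# point set from how the number of points within radius r grows, N(r) ~ r^d.
import numpy as np

def estimate_dimension(points, center, radii):
    """Fit d in N(r) ~ r^d by regressing log N(r) on log r."""
    dists = np.linalg.norm(points - center, axis=1)
    counts = np.array([(dists < r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

# A filled square lattice: volume grows like r^2, so d should come out near 2.
grid = np.array([(x, y) for x in range(200) for y in range(200)], dtype=float)
print("square:", estimate_dimension(grid, center=np.array([100.0, 100.0]),
                                    radii=np.array([10.0, 20.0, 40.0, 80.0])))

# A Sierpinski gasket built by the chaos game: d should come out near
# log 3 / log 2 = 1.585, a non-whole-number dimension.
rng = np.random.default_rng(0)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p, pts = np.array([0.3, 0.2]), []
for _ in range(200_000):
    p = (p + verts[rng.integers(3)]) / 2
    pts.append(p.copy())
# Measure the growth around the corner vertex (0, 0), which lies on the gasket.
print("gasket:", estimate_dimension(np.array(pts), center=np.array([0.0, 0.0]),
                                    radii=np.array([0.05, 0.1, 0.2, 0.4])))
[/code]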

So from our point of view it's great that Steve Carlip is focusing some attention on this strange coincidence. Why should the very different approaches of Renate Loll, Martin Reuter, Petr Horava, and even also Loop QG, all arrive at this bizarre spontaneous dimensional reduction at small scale? (That is the title of Carlip's talk.)

Carlip is prominent and a widely recognized expert, so IMHO it is nice that he is thinking about this.

Here is the whole schedule of the Planck Scale conference, which now has online PDFs of many of the talks from the whole week:
    http://www.ift.uni.wroc.pl/~planckscale/index.html?page=timetable
     
  3. Jul 5, 2009 #2

    marcus

Science Advisor
Gold Member
Dearly Missed

Renate Loll also discussed this coincidence---different QG approaches agreeing on spontaneous dimensional reduction from 4D down to 2D.
    See her slide #9, two slides from the end.
    http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/4-Loll.pdf
She gives arxiv references to papers by her, by Reuter, by Horava, by Modesto, and by Benedetti, etc. So you get a more complete set of links than you do with Steve Carlip. She has been writing about this since 2005 and has the whole thing in focus and in sharp perspective. The interesting thing is that Steve Carlip is a completely different brain now looking at the same coincidences, and likely making something different out of them.

I'm beginning to think this Planck Scale conference at Wroclaw (still called "Breslau" by many people despite the Polish spelling) was a great conference. Perhaps it will turn out to have been the best one of summer 2009. Try sampling some of the talks and see what you think.
     
    Last edited: Jul 5, 2009
  4. Jul 5, 2009 #3

    An interval is defined by the events at its ends. There is no reason for us to claim there are intervals that do not have events at their ends, because an interval without events is immeasurable and has no meaning except classically.

Where there are no events, there are no intervals, and where there are no intervals, there is no dimension (for dimensions are sets of orthogonal, or at least non-parallel, intervals), and where there is no dimension, there is no space-time. For this reason we see space and time (i.e. a classical concept) as inapplicable in systems that are isolated from us. Thus a closed system like an entangled pair of particles is welcome to disobey causality just as readily as a couple of local particles, because in both cases there is no space-time between them. Likewise, time executes infinitely fast within a quantum computer, because it is a closed system and there are no intervals between it and the rest of the universe to keep its time rate synchronized with the rest of the universe.

    It is a grave mistake to think of quantum mechanical events as occurring "in space-time". Rather, it is those events that define space-time, and there is no space-time where there are no events.

    The fewer the events, the fewer orthogonal (and non-parallel) intervals, and thus the fewer dimensions.
     
    Last edited: Jul 5, 2009
  5. Jul 5, 2009 #4

    nrqed

Science Advisor
Homework Helper
Gold Member


    Interesting.

By the way Marcus, is there a reason why you think the string-theory idea that the universe may have more than four dimensions is silly, while you seem to like the idea that the universe at a microscopic scale may have less than four dimensions? I personally think that we should be open-minded to all possibilities and use physics (and maths) to guide us, not personal prejudices. But you seem closed-minded to one possibility while being open-minded to the other. Is it just because you don't like anything that comes from string theory and like everything that comes from LQG, or is there a more objective reason?

Thanks for your feedback. Your work in bringing up interesting papers and talks is sincerely appreciated!
     
  6. Jul 5, 2009 #5

How powerful a loupe do we need to see the Planck scale and LQG?
     
  7. Jul 5, 2009 #6

    marcus

Science Advisor
Gold Member
Dearly Missed

    That's a good question and I like the way you phrase it! The aim is to observe Planck scale effects. There are two instruments recently placed in orbit which many believe may be able to measure Planck scale effects.

These are Fermi (formerly called GLAST, the Gamma-ray Large Area Space Telescope) and Planck (the successor to the Wilkinson Microwave Anisotropy Probe).

The Fermi collaboration has already reported an observation that shows the desired sensitivity. To get a firm result one will need many tens of such observations. One has to record gamma-ray bursts arriving from several different distances.

The Fermi collaboration reported at a conference in January 2009 and published in the journal Science soon after---I think it was in March. They reported that with 95% confidence the quantum-gravity mass MQG is at least 0.1 MPlanck.

The Science article is pay-per-view, but Charles Dermer delivered this PowerPoint talk
    http://glast2.pi.infn.it/SpBureau/g...each-contributions/talk.2008-11-10.5889935356
at Long Beach at the January meeting of the American Astronomical Society (AAS). It summarizes observational constraints on the QG mass from other groups and discusses the recent Fermi results.

The main point is that we already have the instruments to do some of the testing that is possible. They just need time to accumulate more data. The result so far, that MQG > 0.1 MPlanck, is not very restrictive. But by observing many more bursts they may be able to say
either MQG > 10 MPlanck or MQG < 10 MPlanck. Either result would be a big help. The first would pretty much kill the DSR (deformed special relativity) hypothesis, and the second would strongly favor DSR. If you haven't been following the discussion of DSR, Carlo Rovelli posted a short paper about it in August 2008, and there are earlier, longer papers by many other people in the LQG community. Rovelli seems to be a DSR skeptic, but that one paper of his wouldn't be a bad introduction.

Several of us discussed that 6-page Rovelli paper here:
    https://www.physicsforums.com/showthread.php?p=2227272

Amelino-Camelia gave a seminar talk at Perimeter this spring, interpreting the Fermi result. He is a QG phenomenologist who has made QG testing his specialty and has written a lot about it; he gets to chair the parallel session on that line of research at conferences. He was happy with the Chuck Dermer paper reporting a slight delay in the arrival time of some high-energy gammas, but he emphasized the need to observe a number of bursts at different distances, to make sure the effect is distance-dependent. Anyone who is really curious about this can watch the video.

    The url for the video is in the first post of this thread:
    https://www.physicsforums.com/showthread.php?t=321649

    ======================
Various papers have been written about a QG signature in the microwave background temperature and polarization maps. Right now I don't know of firm predictions---of some theory that will live or die depending on detail found in those maps. The Planck spacecraft just arrived out at the L2 Lagrange point and began observing, and I'm watching to see whether people in the QG community can utilize the improved resolution of the map.
     
    Last edited by a moderator: Apr 24, 2017
  8. Jul 6, 2009 #7

    MTd2

Gold Member

This is not correct. Smolin argued for a distribution function for the delays of the GRB photons, so merely placing the cutoff at the first photons detected may not be the correct path.
     
  9. Jul 6, 2009 #8

    marcus

Science Advisor
Gold Member
Dearly Missed

I'm not sure what you mean. Who is not correct? As I recall, the parameter MQG has been in general use for some time. Smolin (2003) used it, and so did the John Ellis + MAGIC paper (2007). Charles Dermer used it in his January 2009 report for the Fermi collaboration, which then used it in their Science journal article.

It is not a cutoff and has nothing to do with a cutoff as far as I know. It is just a real simple handle on quantum-geometry dispersion. You may know all this and think it is basic (it is basic), and maybe you are talking about something else more sophisticated. I want to keep it simple.

Papers typically look at two QG masses, one appearing in a first-order dispersion relation and the other in a second-order one, so you see alternate formulas with both MQG1 and MQG2. When they don't make the distinction, they are talking about the first order.

I don't like the notation personally. I think they should use the symbol EQG, because it is really an energy they are talking about, expressed either in GeV or in terms of the Planck energy EPlanck.

    EQG is the Planck scale factor by which dispersion is hypothetically suppressed.

The hypothesis to be tested is that the speed a photon (of energy E) travels is not c but

    (1 - E/EQG)c.

More complicated behavior could be conjectured and tested---maybe there is no first-order dependence, but there is some second-order effect of E on the speed. But this linear hypothesis is simple. The observational astronomers can test it and possibly rule it out (if it is wrong) fairly quickly.
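To give a feeling for the numbers, here is a back-of-envelope sketch (mine, not the Fermi collaboration's analysis, which properly integrates over cosmological expansion): under the linear hypothesis, a photon of energy E from a source at light-travel distance D arrives late by roughly dt ~ (E/EQG) D/c.

[code]
# Back-of-envelope: first-order QG delay dt ~ (E / E_QG) * (D / c).
# Illustrative numbers only; a real analysis integrates over redshift.
E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
C = 3.0e8                   # speed of light, m/s
GLY = 9.46e24               # one billion light years in meters

def delay_seconds(E_gev, E_qg_gev, distance_m):
    """First-order dispersion delay relative to a zero-energy photon."""
    return (E_gev / E_qg_gev) * (distance_m / C)

E_photon = 30.0             # a 30 GeV gamma, like the most energetic Fermi photons
D = 7 * GLY                 # a burst several billion light years away

for factor in (0.1, 1.0, 10.0):   # E_QG = 0.1, 1, 10 times Planck
    dt = delay_seconds(E_photon, factor * E_PLANCK_GEV, D)
    print(f"E_QG = {factor:>4} x Planck  ->  delay ~ {dt:.2f} s")
[/code]

So for a 30 GeV photon from a few billion light years away, the delay comes out on the order of a second if EQG is near Planck---big enough for Fermi's timing to resolve, which is why this test is feasible at all.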

The way you would rule it out is to raise the lower limit on MQG, or on EQG as I would prefer (my two cents).
Intuitively, if there is a Planck-scale DSR effect, then MQG is on the order of the Planck mass. So if you can show that dispersion is more strongly suppressed than that---if you can show that the parameter is, say, > 10 times Planck---that would effectively rule out DSR (or at least rule out any simple first-order theory).

    To be very certain perhaps one should collect data until one can confidently say that the parameter is > 100 times Planck. But I personally would be happy to reject DSR with only a 95% confidence result saying > 10 times Planck.

On the other hand, if DSR is not wrong, then groups like Fermi will continue to observe and will come up with some result like: the parameter is < 10 Planck. Then it will be bracketed in some interval like [0.1 Planck, 10 Planck].
Then the dam on speculation will break and the cow will float down the river.
Because we will know that there is a linear dispersion coefficient actually in that range.
Everybody, including John Ellis with his twinkly eyes and Tolkien beard, will go on television to offer an explanation. John Ellis has already offered an explanation in advance of any clear result of this sort. And straight quantum gravitists will be heard as well. It is an exciting thought, but we have to wait and see until there are many light curves of many distant GRBs.

The Smolin (2003) paper was titled something like "How far are we from a theory of quantum gravity?"
The John Ellis + MAGIC paper was from around 2007. Here is the link:
    http://arxiv.org/abs/0708.2889
    "Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope"
     
    Last edited: Jul 6, 2009
  10. Jul 6, 2009 #9

    MTd2

Gold Member

I am not referring to either. I am referring to the fuzzy dispersion, p. 22, section 5.1.

    http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.3731v3.pdf

Note eqs. 20 and 21.

If you suppose that the time delay is the mean of a Gaussian, it does not make sense to talk of a first- or second-order approximation. You have to look at the infinite sum, that is, the Gaussian.
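To illustrate the distinction with a toy (my sketch, not the paper's eqs. 20 and 21): in the systematic picture every photon of a given energy is shifted by the same delay, so one bright photon can in principle pin it down; in the fuzzy picture each photon's delay is a random draw whose spread grows with E/EQG, so the effect only shows up as extra broadening of the light curve and you need statistics.

[code]
# Toy contrast between systematic and "fuzzy" dispersion (my sketch, not the
# Amelino-Camelia--Smolin equations themselves).
import numpy as np

rng = np.random.default_rng(1)
base_delay = 0.5   # seconds: the first-order delay scale for this photon energy

# Systematic: every photon at this energy arrives late by the same amount.
systematic = np.full(1000, base_delay)

# Fuzzy: each photon's delay is a random draw; here a Gaussian whose width is
# set by the same E/E_QG scale, taken with zero mean shift just for contrast.
fuzzy = rng.normal(loc=0.0, scale=base_delay, size=1000)

print("systematic: mean %.3f s, spread %.3f s" % (systematic.mean(), systematic.std()))
print("fuzzy:      mean %.3f s, spread %.3f s" % (fuzzy.mean(), fuzzy.std()))
# One photon pins down the systematic case; the fuzzy case only shows up as
# broadening of the burst light curve, so many photons are needed.
[/code]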
     
  11. Jul 6, 2009 #10

    marcus

Science Advisor
Gold Member
Dearly Missed

I see! But that is one small special section of the Amelino-Smolin paper, on "fuzzy dispersion". For much of the rest of the paper they are talking about the simple MQG that I am used to.
They cite the Fermi collaboration's paper in Science, and quote the result
MQG > 0.1 MPlanck.
It's true they look ahead to more complicated types of dispersion, and to effects that more advanced instruments beyond Fermi might observe. They even consider superluminal dispersion (not just the slight slowing down when a photon has enough energy to wrinkle the geometry it is traveling through).
They want to open the QG dispersion question up and look at second-order and non-simple possibilities, which is what scientists are supposed to do. We expect it.
But our PF member Bob asked a question about the LOUPE you need to see Planck-scale wrinkles! The tone of his question is keep-it-basic. I don't want to go where Amelino-Smolin go in that 2009 paper. I want to say "here is an example of the magnifying glass that you see wrinkles with." The loupe (the jeweler's squint-eye microscope) is this spacecraft called Fermi, which is now in orbit observing gamma bursts.

MTd2, let's get back to the topic of spontaneous dimensional reduction. I would like to hear some of your (and others') ideas about this.
I suspect we have people here at PF who don't immediately see the difference between a background-independent theory (B.I. in the sense that LoopQG people speak of it), where you don't tell space what dimensionality to have, and a fixed-geometric-background theory, where you set up space ahead of time to have such-and-such dimensionality.

In some (B.D., background-dependent) theories you put dimensionality in by hand at the beginning.
In other (B.I.) theories the dimensionality can be anything; it has the status of a quantum observable, or measurement. And it may be found, as you study the theory, that to your amazement the dimensionality changes with scale, gradually getting lower as the scale gets smaller. This behavior was not asked for and came as a surprise.

Significantly, I think, it showed up first in the two approaches that are the most minimalist attempts to quantize General Relativity: Reuter following Steven Weinberg's program of finding a UV fixed point in the renormalization flow (the asymptotic safety approach), and Loll letting the pentachorons assemble themselves in a swarm governed by Regge's General Relativity without coordinates. Dimensional reduction appeared first in the approaches that went for a quick success with the bare minimum of extra machinery, assumptions, new structure. Both had their first papers appear in 1998.

And Horava, the former string theorist, came in ten years later with another minimalist approach, which turned out to give the same dimensional reduction. So what I am thinking is that there must be something about GR. Maybe it is somehow intrinsically built into the nature of GR that if you try to quantize it in the simplest possible way you can think of, without any strings/branes/extra dimensions or anything else you dream up---if you are completely unimaginative and go directly for the immediate goal---then maybe you are destined to find dimensional reduction at very small scale. (Provided your theory is B.I., with no fixed background geometry that would preclude the reduction from happening.) Maybe this says something about GR that we didn't think of before. This is just a vague hunch. What do you think?


    ===========EDIT: IN REPLY TO FOLLOWING POST========
    Apeiron,
these reflections in your post are some of the most interesting ideas (to me) that I have heard recently about what could be the cause of this curious agreement among several very different methods of approach to Planck-scale geometry (or whatever is the ground at the roots of geometry). I don't have an alternate conjecture to offer, and in fact I am quite intrigued by what you say here and want to mull it over for a while.

    I will answer your post #11 here, since I can still edit this one, rather than making a new post just to say this.
     
    Last edited: Jul 6, 2009
  12. Jul 6, 2009 #11

    apeiron

Gold Member

    Can you explain just what it is in the approaches that leads to this dimensional reduction?

In the CDT story, is it that as path lengths shrink towards the Planck scale, the "diffusion" gets disrupted by quantum jitter? So instead of an organised 4D diffusion, dimensionality gets so disrupted that there are only linear jumps in now unspecified, uncontexted directions?

    Loll writes:
    "In our case, the diffusion process is defined in terms of a discrete random walker between neighbouring four simplices, where in each discrete time step there is an equal probability for the walker to hop to one of its five neighbouring four-simplices."

    So it does seem to be about the choice being disrupted. A random walker goes from four options (or 3+1) down to 1+1.
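To make the diagnostic itself concrete, here is a minimal sketch (mine, run on a fixed square lattice---the easy part; CDT runs the same test on its fluctuating ensemble of triangulations, which is the hard part). The spectral dimension is read off from the walker's return probability: d_s = -2 d ln P(sigma) / d ln sigma.

[code]
# Minimal spectral-dimension estimate on FIXED hypercubic lattices (mine; not
# a CDT triangulation).  d_s = -2 * d ln P(sigma) / d ln sigma, where P(sigma)
# is the probability a random walker is back at its start after sigma steps.
import numpy as np

def return_probability(dim, sigma, walkers=500_000, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.zeros((walkers, dim), dtype=np.int64)
    rows = np.arange(walkers)
    for _ in range(sigma):
        axis = rng.integers(dim, size=walkers)           # which direction to hop
        step = rng.integers(0, 2, size=walkers) * 2 - 1  # +1 or -1 along it
        pos[rows, axis] += step
    return np.mean(np.all(pos == 0, axis=1))

for dim in (2, 4):
    p1, p2 = return_probability(dim, 40), return_probability(dim, 80)
    d_s = -2 * (np.log(p2) - np.log(p1)) / (np.log(80) - np.log(40))
    # Rough estimate; increase walkers for steadier statistics in 4D.
    print(f"fixed {dim}D lattice: estimated spectral dimension ~ {d_s:.1f}")
[/code]

On fixed lattices this just hands back the ordinary dimension; the CDT surprise is that the same observable, measured on the quantum geometry, slides from about 4 at large sigma to about 2 at small sigma.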

As to general implications for GR/QM modelling, I think it fits with a systems logic in which global constraints produce local degrees of freedom. So the loss of degrees of freedom - the disappearance of two of the four dimensions - is a symptom of the asymptotic erosion of a top-down acting weight of constraint.

GR in a sense is the global macrostate of the system, and it attempts to impose 4D structure on its locales. But as the local limit is approached, there is an exponential loss of resolving power due to QM fluctuations. Pretty soon, all that is left is fleeting impulses towards dimensional organisation - the discrete 2D paths.

Loll's group would seem to draw a different conclusion, as she then says that, continuing on through the Planck scale, she would expect the same 2D fractal universe to persist crisply to infinite smallness. I would expect instead - based on a systems perspective - that the 2D structure would dissolve completely into a vague QM foam of sorts.

    But still, the question is why does the reduction to 2D occur in her model? Is it about QM fluctuations overwhelming the dimensional structure, fragmenting directionality into atomistic 2D paths?

    (For those unfamiliar with downward causation as a concept, Paul Davies did a good summary from the physicist's point of view where he cautiously concludes:)

    In such a framework, downward causation remains a shadowy notion, on the fringe of physics, descriptive rather than predictive. My suggestion is to take downward causation seriously as a causal category, but it comes at the expense of introducing either explicit top-down physical forces or changing the fundamental categories of causation from that of local forces to a higher-level concept such as information.

http://www.ctnsstars.org/conferences/papers/The%20physics%20of%20downward%20causation.pdf
     
    Last edited by a moderator: May 4, 2017
  13. Jul 7, 2009 #12

    Fra


I've gotten the impression that my points of view are usually a little alien to marcus's reasoning, but I'll add, FWIW, an opinion relating to this...

I also think the common traits of the various programs are interesting, and possibly a sign of something yet to come, but I still think we are quite some distance away from understanding it!

I gave some personal opinions on the CDT logic before in one of marcus's threads: https://www.physicsforums.com/showthread.php?t=206690&page=3

I think it's interesting, but to understand why it's like this I think we need to somehow think in new ways. From what I recall of the CDT papers I read, their reasoning is what marcus calls minimalist. They start with the current model and do some minor technical tricks. But from the point of view of rational reasoning, I think the problem is that the current models are not really a solid starting point. They are an expectation of nature, based on our history.

I think, similarly, spontaneous dimensional reduction or creation might be thought of as representation optimizations, mixing the encoded expectations with new data. This somehow mixes "spatial dimensions" with time-history sequences, and ultimately also retransformed histories.

If you assume that these histories and patterns are physically encoded in a material observer, this alone puts a constraint on the combinatorics here. The proper perspective should make sure there are no divergences. It's when no constraints on the context are given that absurd things happen. Loll argued for the choice of gluing rules on the grounds that otherwise things would diverge, etc., but if you take the observer to be the computer, there simply is no physical computer around to actually realise a "divergence calculation" anyway. I don't think the question should even have to appear.

So, if you take the view that spatial structure is simply a preferred structure in the observer's microstructure (i.e. matter encodes the spacetime properties of its environment), then clearly the time histories of the measurement record are mixed by the spacetime properties as the observer's internal evolution takes place (i.e. internal processes).

I think this is a possible context for understanding emergent dimensionality (reduction as well as creation). The context could then be an evolutionary selection of the observer's/matter's microstructure and processing rules, so as to simply persist.

In reasonings where the Einstein action is given and there is an unconstrained context for the theory, I think it's difficult to "see" the logic, since things that should be evolving and responding are taken as frozen.

So I personally think that, in order to understand the unification of spacetime and matter, it's about as sinful to assume a background (action) as it is to assume a background space. I think it's interesting that Ted Jacobson suggested that Einstein's equations might simply be seen as a state (a state of the action, that is), but that a more general case is still out there.

    /Fredrik
     
  14. Jul 7, 2009 #13

    apeiron

Gold Member

    What puzzles me more is that Loll makes a big deal about the "magic" of the result. The reason for the reduction should be easy to state even if it is "emergent" from the equations.

As I interpret the whole approach, CDT first takes the found global state of spacetime - its GR-modelled 4D structure complete with "causality" (a thermodynamic arrow of time) and dark-energy expansion. Then this global system state is atomised - broken into little triangulations that, like hologram fragments, encode the shape of the whole. Then a seed is grown like a crystal in a soupy QM mixture, a culture medium. And the seed regrows the spacetime from which it was derived. So no real surprise there?

    Then a second part of the story is to insert a planck-scale random walker into this world.

While the walking is at a scale well above QM fluctuations, the walker is moving in 4D. So a choice to jump from one simplex to another is also, with equal crispness, a choice not to jump in any of the other three directions. But then, as the scale shrinks and QM fluctuations rise (I'm presuming), a jump in some direction is no longer also a clear failure to have moved in the other directions. So dimensionality falls. The choice is no longer oriented as orthogonal to three other choices. The exact relationship of the jump to other potential jumps is instead just vague.

    Is this then a good model of the planckscale?

    As Loll says, the traditional QM foam is too wild. CDT has a tamer story of a sea of unoriented, or very weakly oriented, actions. A lot of small definite steps - a leap to somewhere. A 1 bit action that defines a single dimension. Yet now there are no 0 bit non-actions to define an orientation to the other dimensions.

So from your point of view perhaps (and mine), a CDT-type story may track the loss of context, the erosion of observerhood. In the 4D realm, it is significant that a particle took a step along the x-axis and failed to step towards the y or z axis. The GR universe notices these things. But shrink the scale, and this clarity of orientation of events is what gets foamy and vague.
     
  15. Jul 7, 2009 #14

    Fra


The way I interpret Loll's reasoning is that there is not supposed to be a solid, plausible, convincing reason. They simply try what they think is a conservative attempt to reinterpret the old path-integral approach, but with the additional detail of manually putting a microstructure into the space of sampled spacetimes (implicit in their reasoning about gluing rules, etc.).

And they just note an interesting result, and suggest that the interesting result itself is indirectly an argument for the "validity".

I think it's great to try stuff, but I do not find their reasoning convincing either. That doesn't, however, remove the fact that the results are interesting, but it does question whether the program is sufficiently ambitious and fundamental. It still seems somewhat semiclassical to me.

    /Fredrik
     
  16. Jul 7, 2009 #15

    Fra


    > Then a second part of the story is to insert a planck-scale random walker into this world.

IMO, I'm not sure they are describing what a small random walker would see; they try to describe what a hypothetical massive external observer would see when watching a supposed random walker doing its walk in a subsystem of the larger context under the loupe. I.e. an external observer, observing the actions and interplay of the supposed random walker.

The problem is that when you then use such a picture to describe two interacting systems, you are not using the proper perspective.

What I am after is this: when they consider the "probability of return" after a particular extent of evolution, who is calculating that probability, and more importantly, what are the consequences for the calculating device (observer) when feedback from actions based on this probability comes from the environment? The ensemble escape is IMO a static mathematical picture, not an evolving physical picture.

I think we need to question the physical basis even of statistics and probability here, in context, to make sense of these path integrals. I guess I think there is actually an element of "reality" to the wavefunction, but a relative one. As soon as you introduce ensembles as hypothetical repeats of measurements, I feel we are really leaving reality. In real experimental statistics, the ensembles are still real memory records; the manifestation of the statistics is the memory record, encoding the statistics. Without such physical representation the information just doesn't exist, IMO.

But I think that this STILL says something deep about GR, which marcus was after. Something about the dynamical relational nature of reality. But to understand WHY this is so, I personally think we need to understand WHY the GR action looks like it does. No matter how far CDT takes us, I would still be left with a question mark on my forehead.

    /Fredrik
     
  17. Jul 7, 2009 #16

Hey, sorry to interrupt the deep conversation. But one can put forward a pretty boring argument as to why gravity seems to be 2-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs. In RG language it makes sense that d=2 at the UV fixed point. The reason, I suspect, that we can't renormalize the theory in a perturbative way is that in that case the dimensionality is fixed, or rather one expands around a 4d background in such a rigid way that the expansion is still 4d. Reuter suspects that it is the background-independent formulation of gravity that allows for the fixed point.

    http://arxiv.org/abs/0903.2971


As for a deeper reason why this reduction happens, maybe we need look no further than a) why is G dimensionless in d=2? i.e. a study of why GR takes the form it does, and b) why is it important to have dimensionless couplings to renormalize QFTs?
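(The counting behind a) is standard dimensional analysis. In units where hbar = c = 1 the Einstein-Hilbert action must be dimensionless, and the curvature scalar R carries mass dimension 2, so

[tex]S = \frac{1}{16\pi G}\int d^dx\,\sqrt{-g}\,R \quad\Rightarrow\quad [G] = (\mathrm{mass})^{2-d},[/tex]

which is dimensionless exactly at d=2 and has negative mass dimension for d>2, the classic signal of perturbative non-renormalizability.)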


I view both the CDT and RG approaches to QG as attempts to push QFT and GR as far as they can go, to help us understand the quantum nature of spacetime. They are humble attempts that will hopefully give us some insights into a more fundamental theory.
     
  18. Jul 7, 2009 #17

    marcus

Science Advisor
Gold Member
Dearly Missed

    To me that argument is not boring, Finbar. It makes good sense. It does not explain why or how space could experience this dimensional reduction at micro scale.

    But it explains why a perturbative approach that is locked to a rigid 4D background geometry might not ever work.

It explains why, if you give more freedom to the geometry and make your approach background independent (or at least less dependent), your approach might find for you the UV fixed point where renormalization is natural and possible (the theory becomes predictive after a finite number of parameters are determined experimentally).

And it gives an intuitive guess as to why dimensionality MUST reduce as this UV fixed point is approached---thus at small scale, as you zoom in and look at space with a microscope.

So your unboring argument is helpful: it says why this must happen. But it is still not fully satisfying, because one still wonders what this strange microscopic geometry, or lack of geometry, could look like, and what causes nature to be made that way.

It's as if you are with someone who seems to have perfectly smooth skin, and you notice that she is sweating. So you say "well, her skin must have holes in it so that these droplets of water can come out." This person must not be solid but must actually be porous! You infer this, but you did not yet take a magnifying glass and look closely at the skin to see what the pores actually look like, and you did not tell us why, when the skin grows, it always forms these pores in such a particular way.

You told us that dimensionality must reduce, because the field theory of geometry/gravity really is renormalizable, as we all suspect. But you did not show us a microscope picture of what it looks like zoomed in, and you did not explain why nature could turn out to be not so smooth and solid as we thought.
     
    Last edited: Jul 7, 2009
  19. Jul 7, 2009 #18

    apeiron

Gold Member

Are you saying here that the reason a dimensional reduction to 2D is "a good thing" for the CDT approach is that a 2D realm would give us the strength of gravity that would be required at the Planck scale? So take away dimensions and the force of gravity no longer dilutes with distance?

But that would not explain why the model itself achieves a reduction to 2D. Or was I just wrong that quantum fluctuations overwhelming the random walker is the essential mechanism of the model?
     
  20. Jul 7, 2009 #19

    apeiron

Gold Member

    Everything may be flux (or process) when it comes to reality, but there must still be some kind of stability or structure to have observation - a system founded on meanings/relationships/interactions.

    You ought to check out Salthe on hierarchy theory to see how this can be achieved in a "holographic principle" style way.

So an observer exists at a spatiotemporal scale of being. The observer looks upwards and sees a larger scale that is changing so slowly it looks frozen, static, permanent. The observer looks down to the micro-scale and now sees a realm that is moving so fast, equilibrating its actions so rapidly, that it becomes a solid blur - a different kind of permanence.

    This is the key insight of hierarchy theory. If you allow a process to freely exist over all scales, it must then end up with this kind of observer-based structure. There will be a view up and a view down. And both will be solid event horizons that have effects which "locate" the observer.
     
  21. Jul 7, 2009 #20

I have a better understanding of the RG approach than of CDT. But both the CDT and RG approaches are valid at all scales, including the Planck scale. My point is not that it is a "good thing" but that it is essential for the theory to work at the Planck scale. In CDT you are putting in 4-simplices (four-dimensional triangles) and using the four-dimensional Einstein-Hilbert (actually Regge) action, but this does not guarantee that a d=4 geometry comes out, or even that you can compute the path integral (in this case a sum over triangulations). For the theory to be reasonable it must produce d=4 on large scales, to agree with the macroscopic world. Now, there are many choices one must make before computing a path integral over triangulations, such as which topologies to allow and how to set the bare couplings, etc. My guess is that in RG language this corresponds to choices of how to fix the gauge and which RG trajectory to put oneself on. Different choices can lead to completely different geometries on both large and small scales (as the original Euclidean DT showed). So one must choose wisely and hope that the geometry one gets out a) is d=4 on large scales and b) has a well-defined continuum limit. My point is that getting a good continuum limit probably requires the small scale to look 2-dimensional; otherwise I would expect the theory to blow up (or possibly the geometry to implode).
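For reference, the sum over triangulations I mean has the schematic form used in the CDT papers, with T running over the allowed causal triangulations and C(T) the symmetry factor of T:

[tex]Z = \sum_{T}\frac{1}{C(T)}\,e^{iS_{\mathrm{Regge}}(T)} \quad\longrightarrow\quad Z_{\mathrm{eucl}} = \sum_{T}\frac{1}{C(T)}\,e^{-S_{\mathrm{Regge}}(T)},[/tex]

where the arrow is the Wick rotation that the causal slicing makes possible, and the Euclidean sum is then evaluated by Monte Carlo. All the physics hides in which T are allowed (the gluing rules) and in the bare couplings.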

Think of it this way: if we want to describe whatever it is that is actually happening at the Planck scale in the language of d=4 GR and QFT, we have to do it in such a way that the effective dimension is d=2, else the theory blows up and can't describe anything. In turn this tells us that if we can push this GR/QFT theory to the Planck scale (if there's a UV fixed point), then it's very possible that the actual Planck-scale physics resembles this d=2 reduction. We could then perhaps turn the whole thing the other way and say that Planck-scale physics must look like this d=2 GR/QFT theory in order to produce a d=4 GR/QFT theory on larger scales.

Of course this is all speculation. But the key question here is: when, that is, at what energy scale, do we stop using GR and QFT and start using something else?
     
  22. Jul 7, 2009 #21

    apeiron

Gold Member

    So it is still a mystery why the reduction occurs as a result of the mathematical operation? Surely not.

I can see the value of having an operation that maps the 4D GR view onto the limit view, the QM Planck scale. It allows a smooth transition from one to the other. Before, there was a problem in the abruptness of the change. With CDT, there is a continuous transformation. Perhaps. If we can actually track the transformation happening.

To me, it seems most likely that we are talking about a phase-transition type of situation. So a smooth change - like the reduction of dimensionality in this model - could actually be an abrupt one in practice.

    I am thinking here of Ising models and Kauffman auto-catalytic nets, that sort of thing.

So we could model GR as something like the global magnetic field of a magnet. A prevailing dimensional organisation that is impervious to local fluctuations (of the iron dipoles).

But heat up the magnet and the fluctuations grow to overwhelm the global order. Dimensionality becomes fractured (rather than strictly fractal). Every dipole points in some direction, but neither aligned with the old order, nor orthogonal to it, in any definite sense.

    So from this it could be said that GR is the ordered state, QM the disordered. And we then want a careful model of the transition from one to the other.
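Here is a minimal sketch of the kind of toy I mean (a 2D Ising model with a Metropolis loop; nothing CDT-specific, just the order/disorder analogy):

[code]
# Toy 2D Ising model with a Metropolis loop (nothing CDT-specific; just the
# analogy: ordered magnet ~ the GR-like phase, hot disordered magnet ~ foam).
import numpy as np

def magnetization(T, L=32, sweeps=400, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        # Energy cost of flipping spin (i, j), with periodic neighbours (J = 1).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return abs(spins.mean())

# The exact critical temperature of the 2D Ising model is T_c ~ 2.269.
for T in (1.5, 2.27, 3.5):
    print(f"T = {T}: |m| ~ {magnetization(T):.2f}")
[/code]

Below the critical temperature the global magnetisation sits near 1 (the ordered phase); above it, the same local rules give magnetisation near 0 (the disordered phase). The changeover between them is sharp, which is the sense in which a smooth-looking dimensional reduction could be abrupt in practice.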

But no one here seems to be able to spell out how CDT achieves its particular result, and therefore whether it is a phase-transition style model or something completely different.
     
  23. Jul 8, 2009 #22

    Fra


    Apeiron, this is right. I do not argue with this.

    But in my view, there is still a physical basis for each scale.

To me the key decompositions are Actions (observer -> environment) and Reactions (environment -> observer). The Actions of an observer are, I think, constructed AS IF this most stable scale were universal. The action is at best of probabilistic type. The reaction is, however, less predictable (undecidable). The reactions are what deform the action, by evolution. The action is formed by evolution, as it interacts and is subject to reactions.

I think the ACTION here is encoded in the observer. I.e. the complexity of the action, i.e. the information needed to encode it, is constrained by the observer's "most stable view". The observer acts as if this were really fixed. However, only in the differential sense, since during global changes the entire action structure may deform. It doesn't have to deform, but it can deform. The action is at equilibrium when it does not deform.

And I also think that this deformation can partially be seen as phase transitions, where the different phases (microstructures) encode different actions.

I think there is no fundamental stability or determinism; instead it's this process of evolution that produces effective stability and context.

    /Fredrik
     
  24. Jul 8, 2009 #23

    tom.stoer

Science Advisor

I wrote this in a different context (loops and strings), but I think it may be valid here as well: the major problem in quantizing gravity is that we start with an effective theory (GR) and try to find the underlying microscopic degrees of freedom; currently it seems that they could be strings, loops (or better: spin networks), or something like that.

The question is which guiding principle guarantees that starting with GR as an "effective theory" and quantizing it by rules of thumb (quantizing always means using rules of thumb - choice of coordinates, Hamiltonian or path integral, ...) allows us to identify the microscopic degrees of freedom. Of course we will find some of their properties (e.g. that they "are two-dimensional"), but I am sure that all those approaches are limited, in the sense that deriving microscopic degrees of freedom from macroscopic effective theories simply does not work!

Let's make a comparison with chiral perturbation theory:
- it is SU(2) symmetric (or if you wish SU(N), with N being the number of flavors)
- it is not renormalizable
- it incorporates to some extent principles like soft pions, current algebra, ...
- it has the "correct" low-energy effective action with the well-known pions as fundamental degrees of freedom
But it simply does not allow you to derive QCD, especially not the color degrees of freedom = SU(3) and its Hamiltonian.
Starting with QCD there are arguments for how to get to chiral perturbation theory, heavy baryons etc., but even starting with QCD, integrating out degrees of freedom and deriving the effective theories mathematically has not been achieved so far.

So my claim is that the identification of the fundamental degrees of freedom requires some additional physical insight (in that case the color gauge symmetry) which cannot be derived from an effective theory.

Maybe we are in the same situation with QG: we know the IR regime pretty well, and we have some candidate theories (or only effective theories) in the UV respecting some relevant principles, but we still do not "know" the fundamental degrees of freedom: the analogue, for spin networks (or something else), of deep inelastic scattering is missing!
     
  25. Jul 8, 2009 #24

    Fra


    I think so too.

This is why I personally think it's good to try to think outside the current frameworks, and even to go back and analyse our reasoning and what our scientific method looks like. Because when you phrase questions within a given framework, sometimes some possible answers are excluded.

This is why I have personally put a lot of focus on the scientific inquiry process and measurement processes. The problem of scientific induction and the measurement problem have common traits. We must question also our own questions. A question isn't just as simple as: here is the question, what is the answer? Sometimes it's equally interesting, and a good part of the answer, to ask why we are asking this particular question. What is the origin of questions? And how do our questions influence our behaviour and actions?

The same questions appear in physics, if you wonder: what is matter and space, and why is the action of matter and spacetime this or that? If a material object contains "information" about its own environment, what does the learning process of this matter in its environment look like? And how is that principally different (if at all) from human scientific processes?

    /Fredrik
     
  26. Jul 8, 2009 #25

    apeiron

Gold Member

But why do you presume this? I am expecting the exact opposite, from my background in systems-science approaches. Or even just condensed-matter physics.

In a systems approach, local degrees of freedom would be created by global constraints. The system has downward causality. It exerts a stabilising pressure on all its locations. It suppresses all local interactions that it can observe - that is, it equilibrates micro-differences to create a prevailing macrostate. But then - the important point - anything that the global state cannot suppress is free to occur. Indeed it must occur. The unsuppressed action becomes the local degree of freedom.

    A crude analogy is an engine piston. An explosion of gas sends metal flying. But the constraint exerted by the engine cylinder forces all action to take place in one direction.

When all else has been prevented, that which remains is what occurs. And the more that gets prevented, the more meaningful and definite any remaining action becomes - the more fundamental, in the sense of being a degree of freedom that the system cannot eradicate.

    This is the logic of decoherence. Or sum over histories. By dissipating all the possible paths, all the many superpositions, the universe is then left with exactly the bit it could not average away.

    Mostly the universe is a good dissipator of QM potential. Mostly the universe is cold, flat and empty.
     