
Programmes that calculate dimensionality

  1. Aug 20, 2009 #1
    I'm wondering which theoretical physics programmes actually try to calculate the classical 4D dimensionality of spacetime that we observe. Thanks.
  3. Aug 22, 2009 #2


    Science Advisor

    In some sense String Theory (ST) does: it starts with a general description of a 2-dimensional world sheet (plus fermionic = Grassmann dimensions) and a D-dimensional target space. For the supersymmetric string it can be shown that certain anomalies cancel only in D=10, so superstring theory can be formulated consistently only in ten dimensions.

    In some restricted (!) sense Causal Dynamical Triangulation (CDT) does: it starts with fundamental 4-D building blocks (4-simplices with one "timelike" direction); it is then shown that in the long-distance limit, i.e. on macroscopic scales, the 4-D spacetime is reproduced (asymptotically). On short scales the (spectral) dimension is < 4. However, this is not really a calculation from scratch but "only" a consistency check of the approach.
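    The spectral dimension mentioned here is usually extracted from the return probability of a random walk, P(t) ~ t^(-d_s/2). A minimal sketch of that extraction, on a fixed hypercubic lattice rather than an actual CDT ensemble (so it can only recover the embedding dimension, here d = 2; all parameters are illustrative choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, N, T = 2, 200_000, 100
    # the 2d unit moves on the hypercubic lattice Z^d
    moves = np.concatenate([np.eye(d, dtype=int), -np.eye(d, dtype=int)])
    pos = np.zeros((N, d), dtype=int)
    p_return = {}
    for t in range(1, T + 1):
        pos += moves[rng.integers(0, 2 * d, size=N)]
        if t % 2 == 0:  # returns to the origin are only possible at even times
            p_return[t] = np.mean(np.all(pos == 0, axis=1))

    # P(t) ~ t^(-d_s/2), so d_s = -2 * (slope of log P against log t)
    ts = np.array([t for t, p in p_return.items() if p > 0 and t >= 10])
    ps = np.array([p_return[t] for t in ts])
    slope, _ = np.polyfit(np.log(ts), np.log(ps), 1)
    d_spectral = -2 * slope
    print(f"estimated spectral dimension: {d_spectral:.2f}")  # close to 2
    ```

    In CDT the same walk is run over the ensemble of triangulations, and d_spectral comes out scale-dependent: near 4 at long distances, smaller at short ones.
    
    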
  4. Aug 22, 2009 #3
    Are there efforts that try to use a Feynman-type path integral whose paths wind through different dimensions for the same point, or something like that? For it seems arbitrary to label an event with a 1D coordinate or a 3D or 5D coordinate if all you're trying to do is distinguish one event from another. All that's required is to have a different number or set of numbers for each event. I'm thinking that maybe such a transdimensional path integral would have a classical path of our familiar 4D universe. Does this sound like any study programme out there?
  5. Aug 22, 2009 #4


    Gold Member

    I agree that some kind of path integral approach seems a powerful way of modelling a self-organising dimensionality. But I've not yet seen any programme to explore this idea.

    What you would seem to need is some mechanism by which an infinity of possible dimensions self-cancels down to three symmetric spatial and one asymmetric temporal.

    I found Baez's octonions paper - http://math.ucr.edu/home/baez/octonions/ - a provocative lead. There is a self-organising (SO) story when it comes to the "dimensionality" of division algebras. So Baez might be a place to start if you are seeking a programme.
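    The dimensionality story for division algebras is Hurwitz's theorem: normed division algebras over the reals exist only in dimensions 1, 2, 4 and 8 (the reals, complexes, quaternions and octonions), which is the thread Baez pulls on. A quick numerical check of the defining normed-division property |ab| = |a||b| in the 4-dimensional case (a toy verification, not anything from the paper itself):

    ```python
    import numpy as np

    def quat_mul(a, b):
        # Hamilton product of quaternions written as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    rng = np.random.default_rng(1)
    a, b = rng.normal(size=4), rng.normal(size=4)
    lhs = np.linalg.norm(quat_mul(a, b))
    rhs = np.linalg.norm(a) * np.linalg.norm(b)
    assert np.isclose(lhs, rhs)  # |ab| = |a||b|: the normed-division property
    ```

    The same property holds in dimension 8 (octonions, at the cost of associativity) and, by Hurwitz, in no other dimension.
    
    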

    Another line of thought consistent with this SO dimensionality approach would be network theory. And Wolfram has a good bit in his book on why all interactions can be reduced to three edges (two "inputs" to form a context, one "output" to represent the event).


    There are also the frequent comments that forces would wane too fast in more than 3D (and remain too strong in fewer). While this seems true, it does not by itself give a mechanism for why the number of dimensions would be constrained to be finite. But harness that idea to a dissipative-structure approach, and it may click into place.

    Here, a good thought primer might be the Bénard cell. Global order emerges out of unoriented impulses as a dissipative structure forms.


    I would throw in Penrose's twistors, as that gives a way of representing things from a light-cone coordinate perspective. It gets us away from the square rigidities of Newtonian dimensionality.

  6. Aug 23, 2009 #5


    Science Advisor

    Regarding twistors: originally Penrose's claim was that the use of the fundamental twistor space is restricted to 4D spacetime, which makes 4D quite unique.

    I do not know if this claim is still valid.
  7. Aug 23, 2009 #6



    I think this is a great reflection, and it does illustrate one fundamental issue with the various sum-over-options ideas: how to properly count the possibilities. It's not hard to see that the ergodic assumption hidden here is implicitly specified once you define the microstructure or the event/sample space, which is usually considered to be fixed background information.

    I am not aware of anything that does this the way I think it should be done either, which is why I have started working on such a reconstruction on my own.

    As I see it, the microstructure that defines the partitioning and the counting of possibilities in the action is itself uncertain and evolving. The result is a self-referential, evolving model that should self-organise and thus select a preferred system of microstructures, including a selection of dimensionality. Conceptually, the basis for the selection would be like a compression problem: the physical structure that has the best compression is the fittest.

    I am starting the reconstruction a bit like you suggest - with distinguishable states/options. But the distinguishable states are observer-dependent and more or less make up the identity of the observer, so an evolving state space is equivalent to an evolving observer. The observer's physical structure I envision as a system of related microstructures, where the relations are defined by transformations that are also selected by evolution. This system of related microstructures is constructed in such a way that it encodes an implicit self-action that is maintained until an external disturbance distorts it. I consider this external disturbance to be the selective pressure that forces the action system to maintain the fittest configuration as an act of self-preservation.

    My highly personal idea is that dimensions are grown from a starting point where there is nothing but distinguishable events. Higher dimensions are born out of deviations from stable distributions of the lower dimensions, when the data demands it. The higher dimension implicit in the microstructure would then be preferred, as it has a higher relative entropy.
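    One way to make "dimension from nothing but distinguishable events" concrete: given only a record of events, an intrinsic dimension can be estimated from nearest-neighbour statistics. A sketch using the two-nearest-neighbour maximum-likelihood estimator (the choice of estimator and all parameters are mine, not something proposed in this thread), applied to events that secretly occupy a 2-D sheet inside a 5-D label space:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # 1000 "events" that secretly live on a 2-D sheet inside a 5-D label space
    latent = rng.uniform(size=(1000, 2))
    events = np.zeros((1000, 5))
    events[:, :2] = latent

    # two-nearest-neighbour MLE: with mu_i = r2_i / r1_i (ratio of the two
    # nearest-neighbour distances), d_hat = N / sum_i log(mu_i)
    dists = np.linalg.norm(events[:, None, :] - events[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    sorted_d = np.sort(dists, axis=1)
    r1, r2 = sorted_d[:, 0], sorted_d[:, 1]
    d_hat = len(events) / np.sum(np.log(r2 / r1))
    print(f"estimated intrinsic dimension: {d_hat:.2f}")  # near 2, not 5
    ```

    The point is only that the effective dimensionality is a property recoverable from the event statistics themselves, not something that has to be stamped on the labels.
    
    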

    The idea is that different observers implicitly define a different ergodic hypothesis and a different distinguishable event space, and that the selection of dimension is simply the selection of the population of action/matter/observer systems in the environment.

    My hope is that it is easiest to start at the low-complexity scale, because there the options are few and combinatorial exploration is possible. When the complexity grows, the effective continuum must be emergent.

    I wish there was more developed but I have not found much except fragments.

  8. Aug 23, 2009 #7



    This is one of the motivations behind my objection to STARTING with an a priori continuum - you simply have no clue how to COUNT it, in a programme that is built on physical inference like mine. To do that, you need to postulate various information measures, and for a continuum there seems to be an infinity of possible measures - thus making the counting ambiguous.

    This is my main motivation for wanting to reconstruct the continuum - inherited from Newton's and Leibniz's development of calculus in physics - from a physical standpoint that is closer to the actual evidence/measurement position.

  9. Aug 24, 2009 #8



    FWIW, the way I am currently pondering this is by exploiting distinguishable time histories. I.e. you can partition or recode the internal memory space containing recorded and retained time histories - from a raw time-history data mode into transformed representations, say truncated probability distributions - and here, as time goes by, new dimensions are defined by distinguishable distributions.

    I think the key is the transformations, which are effectively a form of data compression. I.e. how to recode the observer's time history to hold a maximum of _useful_ information given a finite memory capacity?
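    A crude way to make "recoding a time history under finite memory" concrete is to take compressed size as the figure of merit: a history with exploitable structure recodes into far less memory than an unstructured one. A toy comparison, assuming zlib as a stand-in compressor and two made-up event streams:

    ```python
    import random
    import zlib

    random.seed(0)
    n = 4000
    # a "time history" with strong regularities: a repeating 7-periodic pattern
    structured = bytes(i % 7 for i in range(n))
    # an essentially incompressible history: independent uniform random events
    noisy = bytes(random.randrange(256) for _ in range(n))

    c_structured = len(zlib.compress(structured, 9))
    c_noisy = len(zlib.compress(noisy, 9))
    print(c_structured, c_noisy)  # the regular history compresses far better
    ```

    In this picture, the "useful" recodings are exactly the ones that find such regularities, so compressibility acts as the fitness the post describes.
    
    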

    At some point an internal partitioning and structure of the memory will emerge, which I picture will "image" the outside. In particular, I think there is a series of transformations that generates a dimensionality of spacetime from a pure time history of simply distinguishable events. The groupings that appear are IMO a result of selection for highest compression.

    In this scheme, I'm fiddling with expressions for the action of the compression machinery, and this has similarities with sum over paths. Sometimes the non-classical statistics is a result of the fact that there are several (generated) sub-structures, related by means of selected transformations, at play, which introduces new properties of the AND and OR logical operators.

    I think QM would follow if you at some point assume the Fourier transform defining a new event space from a time history, but I'm not happy with that; I'm still looking for the deeper reason why this transformation is special.

    The problem is that this approach is so radical that it requires reconstruction of most basic physical concepts. It's not until the reconstruction is developed that it will be clear if this bears fruit or not.

  10. Aug 24, 2009 #9



    If you try to make sense of, and compare, a classical expression for a partition of conditional probabilities and a QM logic sum with superpositions and the Born rule, then the core difference lies in the definition of the logical operators. The origin of this is what I think can be solved in the above way.

    This was partially touched upon in this thread
    https://www.physicsforums.com/showthread.php?t=198571&page=10 (the last two pages).

    That discussed the interpretation of superposition, not dimensions, but of course I see a connection here.

    In particular, the meaning of X and P when they do not commute. I'm suggesting that non-commuting observables appear spontaneously as a result of data compression. I think the form of superposition can be explained rather than postulated.
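    For reference, the non-commutativity of a position-like observable and its Fourier conjugate is visible already in a finite-dimensional toy model: take X diagonal in the event basis and build a "momentum" by conjugating the frequency matrix with the discrete Fourier transform. This says nothing about where the structure comes from, which is the actual question here; it only exhibits what is to be explained:

    ```python
    import numpy as np

    n = 64
    x = np.arange(n) - n // 2
    X = np.diag(x.astype(float))  # position operator: diagonal in event basis

    # unitary DFT matrix, and momentum P = F^H diag(k) F on the discrete circle
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0)
    P = F.conj().T @ np.diag(k) @ F

    comm = X @ P - P @ X
    print(np.linalg.norm(comm))  # nonzero: X and P do not commute
    ```

    Any transformation with the mixing property of the Fourier transform produces such a pair, which is one way to read the post's suggestion that the recoding itself generates non-commuting observables.
    
    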

    If this can be accomplished, I am very confident that a much deeper understanding of Feynman's path integral will come along, in which the partition structure is itself evolving.

  11. Aug 24, 2009 #10
    Perhaps, as a dimension is varied in the process of integration, the higher the dimensionality of a contribution, the less it contributes to the overall sum. The same function is now described with many dimensions, only one of which is being varied, so the function does not change as much and cancels out against other values of that integration variable. This would tend to favour the lower dimensions, and perhaps lead to the classical observation of 3 dimensions of space.
  12. Aug 24, 2009 #11


    Gold Member

    So a directed action in 3-space would be very obvious and distinctive: in one axis something happened, while in the other two - with equal crisp definiteness - nothing did. But the same action in 390-space, or 4,000,033-space, would be comparatively "lost". The higher the dimensionality, the less anything overall would seem to have changed?

    Is this the logic of your approach?
  13. Aug 24, 2009 #12
    I'm not really sure I've got it figured out yet; I'm just bouncing some ideas around to see if they sound familiar or feasible to anyone. I was thinking that every time an integration is done in the path integral, the one-dimensional contribution of the action would be integrated throughout its entire range, whereas the multidimensional contributions would be integrated through only one of their dimensions, not throughout their entire range. And so, since higher-dimensional contributions would not change much, they would tend to cancel out against other higher-dimensional contributions. The result would favour the lower dimensions, hopefully the 3D space we live in.

    The trick I'm looking for is what function or action is integrated, what the multidimensional versions look like, and how you would ensure that the single-dimensional contribution is always integrated fully even though other integration variables are used in the multidimensional integral.
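    The "doesn't change much, so it cancels" intuition is essentially stationary phase: a sum of phasors exp(iS) shrinks in magnitude as the phase varies more rapidly relative to the integration variable. A one-dimensional toy (the quadratic action and the scale parameter lam are arbitrary illustrative choices, not a claim about the actual path-integral measure):

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 200_001)

    def phasor_mean(lam):
        # |mean of exp(i * lam * x^2)| over [0, 1] approximates |integral of
        # exp(iS) dx|; larger lam means faster phase variation
        return abs(np.mean(np.exp(1j * lam * x**2)))

    mags = [phasor_mean(lam) for lam in (1, 10, 100, 1000)]
    print(mags)  # magnitudes shrink monotonically as the phase varies faster
    ```

    Whether dimensionality itself can play the role of lam, so that high-dimensional contributions self-cancel, is exactly the open question in the post.
    
    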
  14. Aug 25, 2009 #13



    Just an idea:

    How about if there were a constraint on the integration/sum, such as a maximum count - say, the sum cannot sum over more than M "options". Then counting one n-tuple as one distinguishable event, rather than as n separate distinguishable events, is surely more economical, thus making valid groupings preferred from the point of view of computational economy.

    That wouldn't be too far off what I envision.

    Where would that constraint come from, then? IMO it's the complexity (information mass) of the observer.

    In terms of continuum integrals, rather than sums, the limits would ultimately have to do with the relative convergence speeds of the defining limiting sequences. This is why the continuum is a problematic physical starting point.
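    The economy of valid groupings can be made concrete with a sum over options that happens to have product structure: brute force over all 2^n joint configurations versus n grouped two-option sums, i.e. 2^n terms against 2n. The separable toy energy below is my own choice, just to exhibit the counting:

    ```python
    from itertools import product

    import numpy as np

    n = 12
    rng = np.random.default_rng(2)
    h = rng.normal(size=n)  # separable "energy" E(s) = sum_i h_i * s_i, s_i in {0, 1}

    # brute force: sum over all 2^n joint configurations (4096 terms here)
    Z_joint = sum(np.exp(-np.dot(h, s)) for s in product([0, 1], repeat=n))

    # grouped: n independent two-option sums, only 2n terms in total
    Z_factored = np.prod([1.0 + np.exp(-hi) for hi in h])

    assert np.isclose(Z_joint, Z_factored)
    ```

    Under a budget of at most M summed-over options, only the grouped form survives for large n, which is one way to read the claim that computational economy alone can prefer certain groupings.
    
    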
