Programmes that calculate dimensionality

In summary, the conversation touches on the various theoretical physics programmes that attempt to calculate the classical 4D dimensionality of the spacetime we observe. These include String Theory and Causal Dynamical Triangulation, but there are also efforts to use a Feynman-type path integral approach and to explore the idea of a transdimensional path integral. The concept of dimensionality is still not fully understood, and some proposals suggest that it may emerge from a self-organizing process. However, there is currently no established programme that fully explores this idea.
  • #1
friend
I'm wondering which theoretical physics programmes actually try to calculate the classical 4D dimensionality of spacetime that we observe. Thanks.
 
  • #2
In some sense String Theory (ST) does: it starts with a general description of a 2-dimensional world sheet (plus fermionic = Grassmann dimensions) and a D-dimensional target space. For the supersymmetric string it can be shown that certain anomalies cancel only in D=10, so superstring theory can be formulated consistently only in ten dimensions.
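
For reference, a standard way to see where the critical dimension comes from is the world-sheet conformal anomaly count (textbook material, not specific to any particular programme): each world-sheet boson contributes central charge 1, each world-sheet fermion 1/2, the reparametrization ghosts -26 and the superghosts +11, so

$$c_{\rm tot} = D + \frac{D}{2} - 26 + 11 = \frac{3}{2}D - 15 = 0 \;\Rightarrow\; D = 10$$

while the bosonic string, with no fermions and no superghosts, gives D - 26 = 0, i.e. D = 26.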

In some restricted (!) sense Causal Dynamical Triangulation (CDT) does: it starts with fundamental 4-D building blocks (4-simplices with one "timelike" dimension); it is then shown that in a long-distance limit, i.e. on macroscopic scales, 4-D spacetime is reproduced (asymptotically). On short scales the (spectral) dimension is < 4. However, this is not really a calculation from scratch but "only" a consistency check of the approach.
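
For concreteness, the spectral dimension mentioned here is read off from the return probability P(σ) of a diffusion process (a random walk) of duration σ on the triangulated geometry:

$$d_s(\sigma) = -2\,\frac{d\ln P(\sigma)}{d\ln\sigma}$$

In the CDT simulations of Ambjørn, Jurkiewicz and Loll this runs from roughly 4 at large σ down to roughly 2 at short distances (if I recall correctly, their fits were about 4.02 and 1.80 respectively).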
 
  • #3
Are there efforts that try to use a Feynman-type path integral whose paths wind through different dimensions for the same point, or something like that? For it seems arbitrary to label an event with a 1D coordinate or a 3D or 5D coordinate if all you're trying to do is distinguish one event from another. All that's required is to have a different number or set of numbers for each event. I'm thinking that maybe such a transdimensional path integral would have a classical path of our familiar 4D universe. Does this sound like any study programme out there?
 
  • #4
I agree that some kind of path integral approach seems a powerful way of modelling a self-organising dimensionality. But I've not yet seen any programme to explore this idea.

What you would seem to need is some mechanism by which an infinity of possible dimensions self-cancels down to three symmetric spatial dimensions and one asymmetric temporal one.

I found Baez's octonions paper - http://math.ucr.edu/home/baez/octonions/ - a provocative lead. There is a self-organising (SO) story when it comes to the "dimensionality" of the division algebras: the normed division algebras exist only in dimensions 1, 2, 4 and 8. So Baez might be a place to start if you are seeking a programme.

Another line of thought consistent with this SO dimensionality approach would be network theory. And Wolfram has a good bit in his book on why all interactions can be reduced to three edges (two "inputs" to form a context, one "output" to represent the event).

http://www.wolframscience.com/nksonline/page-476?firstview=1

There are also the frequent comments that forces would wane too fast with distance in more than 3 spatial dimensions (and remain too strong in fewer). While this seems true, it does not by itself give a mechanism for why the number of dimensions would be constrained to less than infinity. But harness that idea to a dissipative-structure approach, and it may click into place.
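
For the record, the standard version of that force argument goes back to Ehrenfest: Gauss's law in d spatial dimensions makes a central force fall off as

$$F(r) \propto \frac{1}{r^{\,d-1}}$$

For d ≥ 4 the effective potential of the two-body problem has no stable minimum, so orbits (and atoms) are unstable; for d ≤ 2 the potential grows without bound and nothing escapes. Only d = 3 gives the inverse-square law with stable bound orbits. But again, that is a constraint, not a mechanism.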

Here, a good thought primer might be the Bénard cell. Global order emerges out of unoriented impulses as a dissipative structure forms.

http://en.wikipedia.org/wiki/Bénard_cell

I would throw in Penrose's twistors, as they give a way of representing things from a light-cone coordinate perspective. That gets us away from the square rigidities of Newtonian dimensionality.

http://www.twistordiagrams.org.uk/general/index.html
 
  • #5
Regarding twistors: Penrose's original claim was that the fundamental twistor construction is tied to 4D spacetime, which makes 4D quite unique.

I do not know whether this claim is still valid.
 
  • #6
friend said:
Are there efforts that try to use a Feynman-type path integral whose paths wind through different dimensions for the same point, or something like that? For it seems arbitrary to label an event with a 1D coordinate or a 3D or 5D coordinate if all you're trying to do is distinguish one event from another. All that's required is to have a different number or set of numbers for each event.

I think this is a great reflection, and it does illustrate one fundamental issue with the various sum-over-options ideas: how to properly count the possibilities. It's not hard to see that the ergodic assumption hidden here is implicitly specified once you define the microstructure, or the event/sample space, which is usually treated as fixed background information.

friend said:
Does this sound like any study programme out there?

I am not aware of anything that does this the way I think it should be done either, which is why I have started working on such a reconstruction on my own.

As I see it, the microstructure that defines the partitioning and the counting of possibilities in the action is itself uncertain and evolving. The result is a self-referential evolving model that should self-organise and thus select a preferred system of microstructures, including a selection of dimensionality. Conceptually, the basis for the selection would be like a compression problem: the physical structure that has the best compression is the fittest.

I am starting the reconstruction a bit like you suggest: with distinguishable states/options. But the distinguishable states are observer dependent and more or less make up the identity of the observer, so an evolving state space is equivalent to an evolving observer. The observer's physical structure I envision as a system of related microstructures, the relations being defined by transformations that are also selected by evolution. This system of related microstructures is constructed in such a way that it encodes an implicit self-action that is maintained until an external disturbance distorts it. I consider this external disturbance to be the selective pressure that forces the action system to maintain the fittest configuration as an act of self-preservation.

My highly personal idea is that dimensions are grown from a starting point where there is nothing but distinguishable events. Higher dimensions are born out of deviations from stable distributions in the lower dimensions, when the data demands it. The higher dimension implicit in the microstructure would then be preferred because it has a higher relative entropy.

The idea is that different observers implicitly define different ergodic hypotheses and different distinguishable event spaces, and that the selection of dimension is simply the selection of the population of action/matter/observer systems in the environment.

My hope is that the easiest place to start is the low-complexity scale, because there the options are few and combinatorial exploration is possible. As the complexity grows, the effectively continuous description must be emergent.

I wish there were more developed work out there, but I have not found much except fragments.

/Fredrik
 
  • #7
This is one of the motivations behind my objection to STARTING with an a priori continuum: you simply have no clue how to COUNT it, in a program like mine that builds on physical inference. To do that, you need to postulate various information measures, and for a continuum there seems to be an infinity of possible measures, making the counting ambiguous.

This is my main motivation for wanting to reconstruct the continuum, inherited from Newton's and Leibniz's development of calculus in physics, from a physical standpoint that is closer to the actual evidence/measurement position.

/Fredrik
 
  • #8
FWIW, the way I am currently pondering this is by exploiting distinguishable time histories. I.e. you can partition or recode the internal memory space containing recorded and retained time histories, transforming the raw time-history data into, say, truncated probability distributions; then, as time goes by, new dimensions are defined by distinguishable distributions.

I think the key is the transformations, which are effectively a form of data compression. I.e. how to recode the observer's time history to hold a maximum of _useful_ information given a finite memory capacity?

At some point an internal partitioning and structure of the memory will emerge, which I picture will "image" the outside. In particular, I think there is a series of transformations that generates a dimensionality of spacetime from a pure time history of simply distinguishable events. The groupings that appear are IMO a result of selection for highest compression.
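
To make the recoding idea slightly more tangible, here is a minimal toy sketch (my own illustration, and every choice in it - binary events, fixed windows - is an arbitrary assumption):

Code:
import numpy as np

# Toy only: recode a raw binary time history into per-window frequencies,
# i.e. a truncated empirical distribution, so the retained "events" are
# distributions rather than raw symbols.
rng = np.random.default_rng(0)
history = rng.integers(0, 2, size=10_000)  # raw time history of events

window = 100                               # finite memory: one number per window
freqs = history.reshape(-1, window).mean(axis=1)

# 10,000 symbols are compressed to 100 numbers; windows whose frequency
# deviates from the stable background become the new distinguishable events.
print(freqs.shape, freqs.min(), freqs.max())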

In this scheme, I'm fiddling with expressions for the action of the compression machinery, and this has similarities with sums over paths. Sometimes the non-classical statistics result from the fact that several (generated) sub-structures, related by means of the selected transformations, are at play at once, which introduces new properties of the AND and OR logical operators.

I think QM would follow if you at some point let the Fourier transform define a new event space from a time history, but I'm not happy with that; I'm still looking for the deeper reason why this transformation is special.
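
For reference, the non-speculative part of what I mean is just the textbook pairing: the Fourier transform

$$\tilde\psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int e^{-ipx/\hbar}\,\psi(x)\,dx$$

recodes the position description into a momentum description, and in the position representation this forces p = -iħ d/dx, hence [x, p] = iħ. The momentum "events" are a recoding of the position history, and the non-commutativity falls out of that recoding; the open question for me is why this particular recoding is selected.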

The problem is that this approach is so radical that it requires a reconstruction of most of the basic physical concepts. It's not until that reconstruction is developed that it will be clear whether this bears fruit or not.

/Fredrik
 
  • #9
If you try to make sense of, and compare, a classical expression for a partition of conditional probabilities with a QM sum over superpositions plus the Born rule, then the core difference lies in the definition of the logical operators. The origin of this is what I think can be solved in the above way.
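
To spell the comparison out (this part is standard): classically, a partition over intermediate alternatives i gives

$$P(x) = \sum_i P(x|i)\,P(i)$$

while quantum mechanically, with amplitudes and the Born rule,

$$P(x) = \Big|\sum_i \langle x|i\rangle\langle i|\psi\rangle\Big|^2 = \sum_i |\langle x|i\rangle|^2\,|\langle i|\psi\rangle|^2 + \text{interference cross terms}$$

The cross terms are exactly what the classical OR over a partition lacks; that is the difference in the logical operators I would like to explain rather than postulate.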

This was partially touched upon in this thread (the last two pages):
https://www.physicsforums.com/showthread.php?t=198571&page=10

That thread discussed the interpretation of superposition, not dimensions, but of course I see a connection here.

In particular, the meaning of X and P when they do not commute. I'm suggesting that non-commuting observables appear spontaneously as a result of data compression. I think the form of the superposition can be explained rather than postulated.

If this can be accomplished, I am very confident that a much deeper understanding of Feynman's path integral will come along, one where the partition structure is itself evolving.

/Fredrik
 
  • #10
friend said:
Are there efforts that try to use a Feynman-type path integral whose paths wind through different dimensions for the same point, or something like that? For it seems arbitrary to label an event with a 1D coordinate or a 3D or 5D coordinate if all you're trying to do is distinguish one event from another. All that's required is to have a different number or set of numbers for each event. I'm thinking that maybe such a transdimensional path integral would have a classical path of our familiar 4D universe. Does this sound like any study programme out there?

Perhaps, as a dimension is varied in the process of integration, the higher the dimensionality of a contribution, the less it contributes to the overall sum: the same function is now described with many dimensions, only one of which is being varied, so the function does not change as much and cancels against other values of the dimension as that integration variable is varied. This would tend to favor the lower dimensions, and perhaps lead to the classical observation of 3 dimensions of space.
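
To make that hunch slightly more concrete, here is a toy numerical sketch (purely my own illustration, not any established formalism): average the phase exp(iS), with a toy action S(x) = |x|^2, over the d-dimensional unit cube. The average factorizes into the d-th power of a single 1D average whose magnitude is less than 1, so higher-dimensional contributions are geometrically suppressed - the flavor of cancellation I have in mind.

Code:
import numpy as np

rng = np.random.default_rng(1)

def phase_average(d, n=200_000):
    # Monte Carlo estimate of |< exp(i*S) >| over the unit cube [0,1]^d,
    # with the toy action S(x) = |x|^2 = sum_k x_k^2.
    x = rng.uniform(0.0, 1.0, size=(n, d))
    s = np.sum(x**2, axis=1)
    return abs(np.mean(np.exp(1j * s)))

for d in (1, 2, 3, 5, 10, 20):
    print(d, round(phase_average(d), 4))

# Since < exp(i*sum_k x_k^2) > = < exp(i*x^2) >^d and |< exp(i*x^2) >| is
# about 0.96 here, the magnitude falls off like 0.96^d with the dimension d.

Of course this only shows suppression for one arbitrary toy action; whether anything like it survives in a real transdimensional sum is exactly the open question.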
 
  • #11
So a directed action in 3-space would be very obvious and distinctive. In one axis, something happened, while in the other two - with equal crisp definiteness - nothing did. But the same action in 390-space, or 4,000,033-space would be "lost" comparatively. The higher the dimensionality, the less anything overall would seem to have changed?

Is this the logic of your approach?
 
  • #12
apeiron said:
So a directed action in 3-space would be very obvious and distinctive. In one axis, something happened, while in the other two - with equal crisp definiteness - nothing did. But the same action in 390-space, or 4,000,033-space would be "lost" comparatively. The higher the dimensionality, the less anything overall would seem to have changed?

Is this the logic of your approach?

I'm not really sure I've got it figured out yet; I'm just bouncing some ideas around to see if they sound familiar or feasible to anyone. I was just thinking that every time an integration is done in the path integral, the one-dimensional contribution of the action would be integrated throughout its entire range, whereas the multidimensional contributions would be integrated through only one of their dimensions and not throughout their entire range. And so I was thinking that since the higher-dimensional contributions would not change much, they would tend to cancel against other higher-dimensional contributions. The result would favor the lower dimensions, hopefully the 3D space we live in.

The trick I'm looking for is what function or action is integrated, what the multidimensional versions look like, and how you would ensure that the single-dimensional contribution is always integrated even though other integration variables are used in the multidimensional integral.
 
  • #13
Just an idea:

How about a constraint on the integration/sum, such as a maximum count: say, the sum cannot run over more than M "options". Then counting one n-tuple as one distinguishable event, rather than as n separate distinguishable events, is surely more economical, making valid groupings preferred from the point of view of computational economy.

That wouldn't be too far off what I envision.

Where would that constraint come from, then? IMO it's the complexity (information mass) of the observer.
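
A crude back-of-the-envelope version (my own illustration, not a derivation): if each axis is resolved into N distinguishable values, a d-dimensional event space needs N^d options, so a cap of M options enforces

$$N^d \le M \;\Rightarrow\; d \le \frac{\ln M}{\ln N}$$

i.e. for a fixed resolution N, high dimensionality is the first thing a finite counting capacity squeezes out.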

In terms of continuum integrals, rather than sums, the limits would ultimately have to do with the relative convergence speeds of the defining limiting sequences. This is why the continuum is a problematic physical starting point.

/Fredrik
 

