Why is spacetime four-dimensional?

  • Thread starter: tom.stoer
  • Tags: Spacetime
  • #51
Yes, this is of course exactly the question pressing everyone, but AFAIK no one knows how to translate this (or any other mathematical) property into a physical selection or extremality principle.

There has been circumstantial evidence here and there over the years, see e.g. http://www-spires.dur.ac.uk/cgi-bin/spiface/hep/www?eprint=hep-th/0511140
but nothing really concrete has come out of it.

Many people today tend to believe in some kind of anthropic or evolutionary cosmological principle, but that is of course a matter of heavy dispute.
 
  • #52
Fra said:
I.e. suppose we start out with an abstraction for the representation and decision/action problem of a given observer. Then this observer IS its own measure, and its constraint, of its environment (i.e. all other observers).

To keep relating this conceptually to strings: with some imagination, string theory can be seen as an attempt at exactly this.

I.e. the string action being somehow a fundamental action, from which a lot then more or less follows, together with generic lessons from QFT.

In this sense, it's not a bad attempt at all. This is also, I think, almost the essence of what some string researchers mean by string theory being a theory of theories. That's an impressive ambition, and the logic isn't alien to me.

But my main problem with ST is that string theory is NOT supposed to be an inferential theory in the proper sense (like I try to suggest; it's my wild imagination that sees a remote connection here, and I know well that string theorists do not make this connection). For example, the fundamental string action is pretty much a classical starting point, built purely from the mental picture of a literal excited string. The ACTION of the string has no proper inferential interpretation or meaning.

But a quite similar kind of theorizing, as in string theory, BUT based on a proper inferential starting point, where the fundamental action is a purely probabilistic or information-divergence notion with a representation that fits histories of events, would MAYBE be able to overcome many of the issues that ST has. The landscape problem being one of them.

This is why I've rambled several times that max-ent principles and action principles can be understood as purely inferential. Thus the fundamental action should be understood as purely inferential. No association to "classical strings", or anything else that is just confusing, should be necessary.

Rather, a finite string can maybe even be associated with the [0,1] interval of a probability measure; when this measure can no longer accommodate the environment, conservation laws require that the measure itself maps out more complexions and dimensions. In this way the original string can be understood as living in a higher-dimensional space. An action can then be defined by pure combinatorics.

This would do away with the baggage of ST's starting points, such as a background space where QFT applies, and the background "string action" (which is really just taken over from a classical-mechanics mentality).

If what I suggest is right, maybe one can understand why string research might have stumbled upon some interesting ideas, even though the deepest understanding is still lacking.

/Fredrik
 
  • #53
tom.stoer said:
This could single out dim=4 rather easily, but of course it misses a physical concept, e.g. explicit construction of exp(-S).

It seems we all agree on where the issue lies.

The physical basis of exp(-S) is the essence of seeking the physical basis for inference. This is in itself a deep argument for acknowledging the inferential nature of theory and physical law in any research program.

The selection of the MEASURE and understanding the relativity of measures is at the heart of all this. And these things are also at the CORE of the inferential perspective.

/Fredrik
 
  • #54
tom.stoer said:
One idea was to "count" diffeomorphic structures. This could single out dim=4 rather easily, but of course it misses a physical concept, e.g. explicit construction of exp(-S).

"Count diffeomorphic structures"? Remind me again, what's so special about 4D? And what does this have to do with diffeomorphism invariance? Thanks.
 
  • #55
friend said:
"Count diffeomorphic structures"? Remind me again, what's so special about 4D? And what does this have to do with diffeomorphism invariance? Thanks.
Look at the topological manifold R3. Try to construct a differential structure on top of it. It works, and you get exactly one such structure: nothing else but the standard differential structure we are used to. This applies to many other manifolds as well: one topological manifold, one differential structure. It applies especially to all Rn except n=4.

Now take the famous S7. You get 28 different differential structures, i.e. exotic spheres: differentiable manifolds that are homeomorphic but not diffeomorphic to the standard S7. Again this applies to many other manifolds as well: one topological manifold, N different differential structures (with N>1).

Now look at the topological manifold R4 (and afaik other non-compact 4-manifolds). There is not one differential structure, not N differential structures, but a continuum of differential structures. That means that dim=4 is unique in the following way: only in dim=4 can one have uncountably many manifolds that are all homeomorphic but not diffeomorphic to each other.

My idea is to "count" all differentiable manifolds, or to use something like a set of all differentiable manifolds. By the above reasoning it follows that the manifolds with dim != 4 form a null set within this set of all manifolds.
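For concreteness, the known counts can be tabulated. A small sketch (the values for n >= 5 are the orders of the Kervaire-Milnor groups of homotopy spheres; whether S4 carries exotic smooth structures is still open, so it is omitted):

```python
# Number of distinct smooth structures on the topological n-sphere.
# For n >= 5 these are the orders of the Kervaire-Milnor groups of
# homotopy spheres; n = 4 (the smooth Poincare conjecture) is open.
smooth_structures_on_sphere = {
    1: 1, 2: 1, 3: 1,
    5: 1, 6: 1,
    7: 28,      # Milnor's exotic 7-spheres
    8: 2, 9: 8, 10: 6,
    11: 992,
}

# Every entry here is FINITE. The contrast driving the argument above:
# non-compact R^4 carries uncountably many smooth structures, while
# R^n for n != 4 carries exactly one.
print(smooth_structures_on_sphere[7])  # 28
```

So in any "set of all differentiable manifolds", the dimensions with finitely many structures are plausibly measure zero next to the dim=4 continuum, which is the null-set statement above.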
 
  • #56
tom.stoer said:
but of course it misses a physical concept, e.g. explicit construction of exp(-S).

Now I think I realize you meant something else here.

The way I picture the counting, it does not miss this weight. I rather think that if the counting procedure is taken seriously, these factors will pop out. I think if you look at the physics of counting, and in particular when the counted events come from non-commuting sets, the counting will, in addition to the classical "probability weight", contain transformation factors that correspond to the connection weight, so to speak, between the non-commuting event spaces. This connection weight would measure the information loss during "transport" between event spaces. Just like one needs to parallel-transport vectors in curved space into the same tangent space in order to compare them, the same applies to the evidence or events: a transport is needed before they can be compared, and this will introduce some further factors.

So if we take the counting more seriously than just CLASSICAL counting, giving rise to a classical probability, then a full expectation combining counts from non-commuting evidence will introduce non-classical terms in Z.

I have no doubt about this; what I find unclear are the details, and whether the program will succeed. But I don't see such counting as being "simple" and missing those action terms. It would rather, probably, explain these terms, including quantum logic.

The idea being something like

/Fredrik
 
  • #57
tom.stoer said:
Now look at the topological manifold R4 (and afaik other non-compact 4-manifolds). There is not one differential structure, not N differential structures, but a continuum of differential structures. That means that dim=4 is unique in the following way: only in dim=4 can one have uncountably many manifolds that are all homeomorphic but not diffeomorphic to each other.

tom.stoer said:
One idea was to "count" diffeomorphic structures. This could single out dim=4 rather easily, but of course it misses a physical concept, e.g. explicit construction of exp(-S).

Let's try this: In the Feynman Path Integral, each path is continuous but not necessarily differentiable. In other words, paths can take sharp turns where no tangent exists at the turning point. So one path would not be diffeomorphic to another, but it would be homeomorphic. And you would need an infinite number of these non-diffeomorphic paths to construct the path integral. That only exists in R4.

Or perhaps the whole path integral might be calculated in one diffeomorphic manifold. And since the path integral is valid everywhere, you might need an entirely different manifold, not diffeomorphic to the first, to calculate the path integral somewhere else. Clearly then, you'd need an infinite number of non-diffeomorphic structures to ensure that you could calculate the path integral everywhere, so that the laws of physics would be the same everywhere. How does this sound?
 
  • #58
friend said:
Let's try this: In the Feynman Path Integral, each path is continuous but not necessarily differentiable. In other words, paths can take sharp turns where no tangent exists at the turning point. So one path would not be diffeomorphic to another, but it would be homeomorphic. And you would need an infinite number of these non-diffeomorphic paths to construct the path integral. That only exists in R4.
No; what you are describing is possible in any dimension. But I am not talking about a path in spacetime, but about spacetime itself.

friend said:
Or, perhaps the whole path integral might be calcuated in one diffeomorphic manifold. And since the path integral is valid everywhere, you might need an entirely different manifold not diffeomorphic to the first to calculate the path integral somewhere else. Clearly then, you'd need an infinite number of non-diffeomorphic structures to insure that you could calculate the path integral everywhere so that the laws of physics would be the same everywhere. How does this sound?
I think that's not really what I am talking about.

I'll try to give you a simple example.

In bosonic string theory you try to define something like that:

$$\int dg\,e^{iS}$$

Here g is the Riemannian metric on the two-dimensional worldsheet of the string (forget about the 10-dim. target space; it's not relevant here). Then you recognize that you have different manifolds, in two dimensions simply identified via their genus; so you write the integral as

$$\sum_\text{genus}\int dg\,e^{iS}$$

where now the integral is over all metrics for fixed genus. But of course two different metrics g and g' with the same genus define homeomorphic manifolds and therefore should be identified physically. So formally one writes

$$\sum_\text{genus}\int \frac{dg}{\text{Vol}(\text{Diff})}\,e^{iS}$$

But here something interesting has been hidden: in two dimensions two homeomorphic manifolds are also diffeomorphic, and vice versa. This is no longer the case in higher dimensions. The first examples were the famous exotic 7-spheres. They are all homeomorphic to the standard S7, but there is no smooth map between them; they are pairwise non-diffeomorphic. Of course on each such S7 there are diffeomorphisms, but not between them.

My idea was to make use of this concept and treat non-diffeomorphic manifolds as physically different. So for the 7-spheres I would have to calculate the integral on each S7, and I would have to sum over all 28 7-spheres. In 4-dim. spacetime the same happens: I have numerous different manifolds. Usually we say that one of them is the R4. So when e.g. Hawking tries to write down a path integral over Riemannian metrics, he counts every manifold (Minkowski, de Sitter, ...) exactly once. But what I am saying is that even for the standard R4 (required in the Euclidean version), he has just one R4 topologically, but uncountably many different R4's which are homeomorphic but not diffeomorphic to each other. Therefore there should be a sum (or better: an integral) over all these different R4's.

Now the funny thing is that this is unique to dim=4. There are examples of higher-dimensional spaces which are homeomorphic but not diffeomorphic (the 7-spheres were discovered first), but usually you only get a finite number of non-diffeomorphic manifolds. Only in dim=4 do you get uncountably many.
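Schematically, the proposal amounts to one more label in the sum over sectors. A purely illustrative toy (the per-sector weights below are made up; only the bookkeeping matters): a genus-like expansion, and the same expansion in which each sector carries a finite stand-in for its family of inequivalent smooth structures.

```python
import math

# Toy partition function organized as a sum over topological sectors,
# as in the string genus expansion Z = sum_g g_s^(2g-2) * Z_g, truncated
# at max_genus.  The per-sector "integral" Z_g is replaced by a made-up
# placeholder weight 1/g! -- this is bookkeeping, not physics.
def toy_partition_function(g_s, max_genus=10):
    return sum(g_s**(2*g - 2) / math.factorial(g)
               for g in range(max_genus + 1))

# The proposal discussed above adds an extra index: in dim=4 each sector
# would itself carry a family of inequivalent smooth structures to be
# summed over (finite here; uncountable in the actual dim=4 case).
def toy_partition_with_structures(g_s, n_structures, max_genus=10):
    return sum(g_s**(2*g - 2) / math.factorial(g)
               for g in range(max_genus + 1)
               for _ in range(n_structures))

print(toy_partition_function(0.5))
```

With identical placeholder weights the structure sum just multiplies Z by the number of structures; the interesting physics would be in making the weights structure-dependent.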
 
  • #59
tom.stoer said:
Therefore there should be a sum (or better: an integral) over all different R4's.

We've learned to understand the Feynman path integral as saying that the action needs to account for all distinguishable possibilities (because somehow nature does). So we just count them, like we would count outcomes in probability theory.

But there are two things in this picture which aren't well understood, and which I think need to be understood in order to implement your idea too.

1. The quantum-logic way of counting is different. Why? And how can we understand this?

2. When do we know that all physical possibilities are counted, but not overcounted? We need to understand the counting process within the right context.

The first issue is, I think, related to the decision problem where we have several sets of non-commuting information (that simply can't be added). It could be that BOTH sets contain information or evidence that supports a certain event, and then we need to ADD the "counts" from both sets... somehow; this is where quantum logic (and other generalisations) enters. This would amount to the classical expressions for probabilities from "classical counting", having forms such as (probability of possibility i)

$$P(i) = w(i)\,e^{-S(i)}$$

where S is a kind of information divergence and w is just the factor from statistical uncertainty, going to 1 in the infinite limit,

being replaced by necessity with more complex computations where w and S are generalized (just as the path integral generalizes classical statistics).
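As an aside, the way a weight of exactly this form falls out of pure classical counting can be sketched with a standard large-deviation computation (my illustration, not part of the proposal itself): the exact multinomial probability of seeing empirical frequencies q after N draws from p behaves like w e^{-S} with S = N D_KL(q||p), and the prefactor w becomes negligible per draw as N grows.

```python
import math

def log_multinomial(counts):
    """log of N!/prod(n_i!): a pure count of microstates (orderings)."""
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)

def kl(q, p):
    """Information divergence D_KL(q||p) in nats."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = [0.5, 0.3, 0.2]   # source distribution
q = [0.4, 0.4, 0.2]   # observed empirical frequencies

def per_draw_gap(N):
    """|log P_exact - log P_approx| / N, where P_exact is the exact
    multinomial probability of frequencies q in N draws from p, and
    P_approx = exp(-N * KL(q||p)) is the pure counting estimate; the
    prefactor w absorbs the shrinking difference."""
    counts = [round(qi * N) for qi in q]
    exact = log_multinomial(counts) + sum(
        c * math.log(pi) for c, pi in zip(counts, p))
    return abs(exact / N + kl(q, p))

for N in (50, 500, 5000):
    print(N, per_draw_gap(N))  # gap shrinks roughly like log(N)/N
```

The point of the sketch: the exponential term is forced by counting alone; what generalized (non-commutative) counting would have to modify is w and the divergence S.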

The NEXT problem (2) is that of normalization, and of making sure we count all options but do not overcount. IMHO, the key here is to understand that any counting must be specified with respect to a physical COUNTER and record. This is the counting analogue of the subjective Bayesian view of probability. Call this context observer O.

Thus the expression further changes to

$$P(i|O) = w(i|O)\,e^{-S(i|O)}$$

The complex formalism of QM is still real in the end; I mean, all expectation values are real. The complex math is only in the computation.

Now, if the non-commuting sets are related by a Fourier transform, then obviously these transforms will enter the expressions. Any other relation will likewise be reflected in the computation.

In particular, the context will put a bound on the number of possible distinguishable states, if you accept that the COUNTER and the record can only distinguish, respectively encode, a certain amount of information. This is, I think, the physical argument for why it does not make sense to think we have to sum over all mathematical possibilities.

Past attempts such as Hawking's Euclidean summation really do not even seem to ask this question, i.e. the fact that the context of the counter is important and has physical significance, and that there is a good amount of relativity in the counting.

That two observers disagree on how to count evidence is expected; it's not an inconsistency per se. I think it is the reason for interactions in the first place.

As long as one is clear about what is meant here, one doesn't conclude that two scientists will disagree upon PI calculations; they shouldn't. It's just that if we play with the idea that a quark were to perform the PI calculation, I am pretty sure it would come out differently, and this would explain the behaviour of the quark. The action of the quark reflects its expectations, as defined by "renormalizing" this PI to the quark's level.

So I really think we need to understand the physics of this counting itself.

/Fredrik
 
  • #60
I agree with most of the problems you are describing (and of course Hawking doesn't talk about these mathematical subtleties at all).

Yes, the biggest problem is how to define the counting, including the weights. It is clear that we should count different topologies, but that we mustn't count physically identical entities twice. So the question is: what are physically identical entities? Usually one says that the same manifold equipped with different coordinates must be counted only once (if we were talking about world lines: each world line with its different parameterizations is counted only once). But that means that we need diffeomorphisms between these different coordinates such that we are allowed to identify the two manifolds. As far as I can see, it's exactly this step that fails when introducing homeomorphic but non-diffeomorphic manifolds: the construction of a complete set of diffeomorphisms between the two atlases is no longer possible; therefore we should count them twice.

But there are additional problems: is it reasonable to start with manifolds at all? Wouldn't it be better to start with discrete structures from which manifolds can be recovered in a certain limit? If we try to do that, how can one save my argument? I.e., is there any discrete structure which is agnostic regarding dimension in the very beginning (graphs are, in some sense), but from which manifolds do arise, and which is somehow peaked around dim=4? I don't think that graphs will do the job, as I don't see why dim=4 should be favoured. What about causal sets, for example?
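Causal sets do at least admit a dimension that can be read off by counting alone, with no dynamics: Myrheim-Meyer-type estimates compare the fraction of causally related pairs with its known dimension-dependent value. A minimal sketch in d=2, where that expected fraction is exactly 1/2 (assuming a uniform sprinkling into a causal diamond, written in light-cone coordinates):

```python
import random

def sprinkle_ordering_fraction(n_points, seed=0):
    """Sprinkle points uniformly into a causal diamond of 2d Minkowski
    space using light-cone coordinates (u, v) in the unit square, and
    return the fraction of point pairs that are causally related.
    x causally precedes y iff u_x <= u_y and v_x <= v_y."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_points)]
    related = sum(
        1
        for i in range(n_points)
        for j in range(i + 1, n_points)
        # related iff both light-cone coordinates are ordered the same way
        if (pts[i][0] - pts[j][0]) * (pts[i][1] - pts[j][1]) > 0
    )
    return related / (n_points * (n_points - 1) / 2)

# In d=2 the expected ordering fraction is exactly 1/2; the Myrheim-Meyer
# estimator inverts the analogous d-dependent formula to recover the
# dimension from this single counted number.
f = sprinkle_ordering_fraction(400)
print(f)  # close to 0.5
```

Whether such a purely order-theoretic count can also be "peaked around dim=4" in the sense asked for above is of course exactly the open question.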

The problem is that all approaches I have seen so far seem to select dim=4 based on input plus a dynamical approach (causal sets are constructed from dim=4 space, and they recover dim=4 in some limit defined by dynamics). My approach would be different in the sense that dim=4 is no input: dim=4 is favoured not by dynamics but by counting, without dynamics. So the dynamics (which is still missing) should not be constructed such that dim=4 is selected (this is already done by the counting), but such that this selection is not spoiled (i.e. that dim=4 is not suppressed too much by exp(-S)).

So instead of having a dim-fixed starting point + dim=4-selecting dynamics, it's the other way round: one has a dim-free setup + a non-dynamical selection principle + dim-agnostic dynamics.

The major weakness is that I need manifolds. So any other (discrete) structure that could do the same job would be welcome.
 
  • #61
tom.stoer said:
So instead of having a dim-fixed starting point + dim=4-selecting dynamics, it's the other way round: one has a dim-free setup + a non-dynamical selection principle + dim-agnostic dynamics.

Couldn't 4D be selected because first principles require an infinite number of homeomorphic but non-diffeomorphic structures? So I was looking for where such structures might be used in a physical context, and I thought about how Feynman paths might be homeomorphic but not diffeomorphic to each other, and how you'd need an infinite number of them. Although you'd probably have to do a path integral over 4D spaces (paths) that are homeomorphic but not diffeomorphic to each other. So if one could justify the use of Feynman-type path integrals, then 4D might become logically necessary, right?
 
  • #62
tom.stoer said:
So the question is: what are physically identical entities?
We agree on the question.

This is also a different but deeper perspective on the old question of what the important observables are; I mean, do we quantize observer invariants, or do we form new invariants from quantized variants?

Because "quantization" is not just a mechanical procedure, although one sometimes gets that impression. It is just "taking the inference perspective seriously". The choice reflects how seriously we take the inferential status of physics. The way QFT "implements this" mathematically can IMHO be understood as necessarily a special case.

Namely: who is counting? An inside observer, or an external observer? That's the first question.

I'd suggest that current QFT makes sense in this perspective if the counter is an external observer. And here "external" is relative, not external to the universe of course; just external to the interaction domain, which is the case in particle experiments. The external observer is the lab frame. In this sense current understanding is purely descriptive; it is not really the basis for decision making.

But this is not the general case; therefore the exact mathematical abstraction of QFT breaks down for a "general inside counter". And an inside counter is not merely doing descriptive science: it bets its life on its counting, since the action of this inside observer depends on predicting the unknown environment.

To imagine inside counters also, in a deep sense, touches upon RG, since it is like scaling the counting context: you count either naked events or events from the much more complex screened/antiscreened original system. Again, current RG describes this scaling descriptively, relative to a bigger context. I.e., from assumptions of some naked action and an environment with screening/antiscreening effects this is predictable, and this can be described and tested against experiment. Again, this theory scaling is not a proper inside view in RG.

So the same idealisation exists there. RG and counting are integral parts, and both of these things will need reconstruction in such a counting scheme as you seek (and I see it too, so I think we share the quest here).

So I think it's not possible to resolve this by taking the same PI formalism for granted and ONLY focusing on various spacetime topologies and diffeomorphisms... I agree that needs to be done, but I feel quite confident in my hunch that clarifying this in the sense you suggest is probably possible, but it will require a deepening of many things, including the foundations of QM and RG.

But if we can agree on a common question here, that's still quite nice. If I understand suprised right, he seems to more or less share the same quest, except the question may be formulated differently from within ST?

More later...

/Fredrik
 
  • #63
Since I sometimes speak of evolution, one should maybe clarify the difference from "dynamical" evolution.

tom.stoer said:
So instead of having a dim-fixed starting point + dim=4-selecting dynamics, it's the other way round: one has a dim-free setup + a non-dynamical selection principle + dim-agnostic dynamics.

If I understand you right, by "dynamical selection principle" you mean a deterministic law (although it can of course still be probabilistic, just like QM) that rules the dynamics of the system, and this then selects the 4D structure.

Then I fully agree that such a "dynamical selection" does in fact not explain anything; it's just a recoding of the same problem, where "why 4D" then transforms into "why this particular dynamical law (that 'happens' to select 4D)?"

I do, however, think of a mechanism of evolution that does select 4D. But not one which is ruled by deterministic evolution laws; more a Darwinian evolution.

Of course the details of this must be clarified; I see this as work in progress. But this can explain things like: we do NOT count all "past possibilities" in the action integral, we only count the FUTURE possibilities. Because for a real bounded observer, I think part of the history must necessarily be forgotten.

So evolution of law can still be seen as a random walk, and here the number of possibilities and the favouring of 4D may still have a place, as you suggest. But I see this not as a "dynamical evolution" but rather as a selective and adaptive evolution.

I figure you will think this is just getting foggier and foggier, but I think there are some exploits here that, to my knowledge, have never been explored.

Namely, to reconstruct the counting in depth, to consider "artificial" probable evolutionary steps, and to come up with arguments for why nature looks like it does that are more like rational inferences than logical necessities.

I really do not have much time at all myself, although I try to make progress with the tiny amount of time I have. I do enjoy, and hope to see, some of the promising professionals working in a promising direction make some progress here.

/Fredrik
 
  • #64
tom.stoer said:
So any other (discrete) structure that could do the same job would be welcome.

All I can say at the moment is that I have some fairly specific ideas here, but they are very immature. But I think this way is the right one.

My exploit is to start the reconstruction at the low-complexity end of the RG, and to consider how the evolving interactions develop relations (the seed of spacetime) and how the set of possibilities increases as complexity does. The point is that in the low-complexity limit you can pretty much manually enumerate the possibilities. I think this would correspond to a level beyond the continuum, beyond "strings" or other continuum measures. Something like the causet level... but still, for some reason, causet papers tend to take a different turn than I want to see. The basic abstraction of ordered sets (corresponding to events), and histories or chains of events corresponding to observers, is nevertheless plausible to me.

The continuum structures you think about should emerge in some large-complexity limit, and I am not crazy enough to think that a physical theory needs to model every information bit in the universe... rather, at some point we will connect to ordinary continuum models, but enriched with the strong new guidance we apparently need.

/Fredrik
 
  • #65
suprised said:
That's the hitch. Nothing forbids e.g. d=10, that is, no compactification. Or simple torus compactification with maximal susy to any d up to 9.

In all those sugra compactifications like Freund-Rubin, one always assumes some background, or some class of backgrounds,

Just to be sure: have you read the Freund-Rubin paper, and do you remember that it assumes some background, or are you guessing? My recollection was that it was a dynamical argument, from a Lagrangian and an action.

Also, I remember there were papers such as "10 into 4 doesn't go", showing that the F-R arguments were very particular to 11=7+4.

I think that in these kinds of threads we are dangerously near the mechanisms of consensus science: someone guesses some content, it coincides with another guess, and nobody checks. I can try to xerox some papers for interested people, but if you guys don't have access even to commonplace journals that are available on any university campus, I am not sure it is worthwhile.
 
  • #66
arivero said:
Just to be sure, have you read the paper of F-R and do you remember that it assumes some background, or are you guessing? My recollection was that it was a dynamical argument, from a lagrangian and an action.

The FR paper is available at KEK http://ccdb4fs.kek.jp/cgi-bin/img_index?198010222

There's no dynamical argument at all. The whole point of FR solutions is that they are maximally supersymmetric; however, that means that they are at the same energy as the uncompactified theory. So there is no dynamical argument selecting FR without additional physics that we do not as yet know about.

Also, I remember there was papers such as "10 into 4 doesn't go", showing that the F-R arguments were very particular of 11=7+4.

Again, FR solutions, in their original sense, were maximally supersymmetric solutions. There are many more options available if you only want to preserve one supersymmetry in 4d. That these were not known in 1980 does not mean that we should ignore them.
 
  • #67
fzero said:
The FR paper is available at KEK http://ccdb4fs.kek.jp/cgi-bin/img_index?198010222

There's no dynamical argument at all. The whole point of FR solutions is that they are maximally supersymmetric, however that means that they are at the same energy as the uncompactified theory. So there is no dynamical argument selecting FR without additional physics that we do not as yet know about.

Thanks, my recollection was different! My reading was that maximal supersymmetry limits the choice to the 3-index antisymmetric tensor, and that the Einstein-Hilbert equations then imply that any separation, if it exists, must be 4+7.

EDIT: In fact, my re-reading of the paper doesn't contradict my previous recollection: first they prove that the existence of an s-index antisymmetric tensor implies that compactifications must be of the form (s+1), (D-s-1). They use the Einstein-Hilbert equations, not susy, to prove this argument. Then D=11 sugra in maximal susy has an s=3 tensor, and they get the announced result. But the compactification argument does not use susy at all, it seems to me.
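For reference, the ansatz under discussion can be written out; this is a sketch of the standard Freund-Rubin construction in the D=11, s=3 case, not a substitute for the paper:

```latex
% Freund-Rubin ansatz for D=11 supergravity (s = 3): the 4-form field
% strength is proportional to the volume form of the 4-dimensional factor,
F_{\mu\nu\rho\sigma} = f\,\epsilon_{\mu\nu\rho\sigma}, \qquad
F_{mnpq} = 0 \quad (\text{internal indices}),
% and the Einstein equations then force the product split
M_{11} = M_4 \times M_7,
% with the constant f sourcing opposite-sign curvatures on the factors,
R_{\mu\nu} \propto -f^2\, g_{\mu\nu}, \qquad
R_{mn} \propto +f^2\, g_{mn},
% i.e. AdS_4 times a compact positively curved 7-manifold such as S^7.
```

The generic rank-(s+1) case works the same way, which is where the (s+1), (D-s-1) split quoted above comes from.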
 
  • #68
arivero said:
Thanks, my recollection was different! My reading was that maximal supersymmetry limits the choice to the 3-index antisymmetric tensor, and that the Einstein-Hilbert equations then imply that any separation, if it exists, must be 4+7.

EDIT: In fact, my re-reading of the paper doesn't contradict my previous recollection: first they prove that the existence of an s-index antisymmetric tensor implies that compactifications must be of the form (s+1), (D-s-1).

They make the assumption that the (s+1)-form must be proportional to the volume form of the compact manifold. It is a worthwhile class of solutions to study, but it is far from the only class. In fact, one reason not to restrict to it is that the VEV of the kinetic term for the form becomes the negative cosmological constant of the AdS part of the solution. While there are models like Bousso-Polchinski, where the fluxes partially cancel the naive $10^{120}\,\text{eV}$ scale CC, they are all incredibly fine-tuned. Other examples of moduli stabilization rely on much more modest amounts of flux.

They use the Einstein-Hilbert equations, not susy, to prove this argument. Then D=11 sugra in maximal susy has an s=3 tensor, and they get the announced result. But the compactification argument does not use susy at all, it seems to me.

True, there are various internal manifolds that one can consider. The round spheres are maximally supersymmetric. This, together with hints at gauge groups from deformed spheres was what made these models interesting.

Incidentally, it is important to check the stability of these solutions in the absence of supersymmetry. I don't remember any relevant references, but I think most non-SUSY solutions would be unstable to decay to flat space.
 
  • #69
fzero said:
They make the assumption that the (s+1)-form must be proportional to the volume form of the compact manifold.

Ah, so proportionality of the s-form plus application of the Einstein-Hilbert action implies the (s+1) split, and then susy implies s=3. And it uses an action principle (Einstein-Hilbert).

Of course it is not the right solution. If it were, we should not be here discussing how to find solutions. :cool:

I think the question of stability was studied in the eighties too, for spheres and deformed spheres, with both good and bad results depending on parameters. In any case, as the problem of fermions shows, spheres are not the complete solution either, just interesting models that seem to be close to the real thing. Probably the deformed 7-spheres and the spaces with Standard Model isometries are connected through the fact that CP2 is a branched covering of the 4-sphere, a very singular situation.

The point of 11d SUGRA = 7+4 being near the real thing is that it was a serious justification to study M-theory. In fact it is a better justification than studying it "because it is cool" or "because I am going to get more citations". Blame the split between hep-ph and hep-th.
 
  • #70
jal said:
Fra always says ... "from a given observer's 'inside view'"

Fra, take what you say to the level of the universe: what would a QUARKION say?
:cool:

I await to hear what else you think the QUARKIONS WOULD SAY about their universe.

Jal, you're right that asking what a "quark would see" does fit into my intrinsic inference quest :)

Though it's too early for me to speculate in this. The main reason is that before quarks enter the picture I expect the formation of continuum like structure comes first. Now, even if someone would argue that it's 4D rather than 2D, 2D is neverthelss a countinuum.

So to attach my envisioned construction to the standard big bang timeline, the starting point is somewhere around the Planck epoch. That early stage is where the "discrete picture" applies. By the time we get to quark formation, we first need to understand how the complexions separated out from gravity and how the continuum approximation is formed.

/Fredrik
 
  • #71
jal said:
5. In the beginning, It appeared that our degrees of freedom were limited to 2 and that we were organized so that we could only move from a cubic to a hex. pattern.

Roughly, the simplest way I imagine 2D "spacetime" emerging from evolving discrete complexions is like this.

Consider an observer that has a finite information capacity (memory) and can distinguish only ONE boolean event. Consider a counter that simply encodes/stores the historical counts, indexed by 0 and 1.

At each instant, all there is is a counter state.

In the high-complexity limit, when the counter structure becomes sufficiently complex, the state space of the counter converges to fill [0,1]. So, almost a real number (but the further construction can only be understood if it's acknowledged that the limit is never reached).

The state of this counter is constantly challenged by new events, and when the counter is saturated, a decision problem appears: an existing count needs to be erased from memory in order to make room for fresh data. What is the optimal information update here? I conjecture that data is erased randomly!

(This means the erased data is randomly distributed with respect to the emitter, but not necessarily with respect to the receiver; compare here to black body radiation and the information content of Hawking radiation.)
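Purely as a toy illustration of the saturating counter with random erasure described above (the function name `run_counter` and all parameters are my own invention, not anything from a real theory), a minimal sketch might look like:

```python
import random

def run_counter(events, capacity, rng):
    """Toy model of a saturating boolean counter.

    `memory` stores past boolean events. Once `capacity` is reached,
    a randomly chosen old record is erased to make room for fresh
    data (the "random erasure" conjecture). The counter's state is
    the fraction of 1s: a rational number whose possible values
    densely fill [0, 1] as the capacity grows.
    """
    memory = []
    for e in events:
        if len(memory) >= capacity:
            # saturation: erase a random old record
            memory.pop(rng.randrange(len(memory)))
        memory.append(e)
    return sum(memory) / len(memory)  # state in [0, 1]

rng = random.Random(0)
events = [rng.randint(0, 1) for _ in range(1000)]
state = run_counter(events, capacity=100, rng=rng)
assert 0.0 <= state <= 1.0
```

With capacity 1 the counter can only be at 0 or 1; with capacity n its state is some k/n, which is the sense in which the state space "almost" becomes a real number.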

As the complexity of the observer increases (getting close to the continuum), more possibilities for re-encoding the microstructure appear! For example, one can consider histories of counter states, effectively considering a history of real numbers. This is the first dimension.

This can then be repeated. But clearly the stability of these higher-dimensional records depends on the complexity. At low complexity, the idea is that these are unlikely to appear, for statistical reasons. They are not forbidden at all; they just don't happen, since they are unstable.
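The "repeat the history-taking" step can be pictured with another toy (the name `histories` and the whole construction are just my illustrative guess at what is meant, not a derivation): level 0 is the raw sequence of counter states, and each further "dimension" is the history of the level below.

```python
def histories(states, depth):
    """Toy recursion: level 0 is the raw sequence of counter states;
    each further level records, for every step, the history-so-far of
    the level below it. Each level is one more 'dimension' of record."""
    levels = [list(states)]
    for _ in range(depth):
        prev = levels[-1]
        # entry i at the new level is the history up to step i below
        levels.append([prev[: i + 1] for i in range(len(prev))])
    return levels

levels = histories([0.1, 0.5, 0.9], depth=2)
assert len(levels) == 3
assert levels[1][2] == [0.1, 0.5, 0.9]  # full history of level 0
```

The point the text makes is statistical, not structural: nothing forbids the deeper levels, but keeping them stable costs complexity, so at low complexity they simply fail to persist.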

But in parallel to this simple cobordism-type generation of dimensions, there are OTHER, maybe more interesting, developments, such as more complex recodings... cobordism is extremely SIMPLE. A more complex thing is the formation of non-commutative structures, such as a Fourier-like transform of the first "string" of real numbers. This would encode the state of change, and thus increase the predictivity and stability of the entire measure complex.
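One very loose way to picture the "Fourier-like recoding" (this is only an analogy I am supplying; the text does not specify the transform, so the plain DFT below is an assumption) is that the same history of counter states gets re-encoded so that rates of change, rather than instantaneous values, become the coordinates:

```python
import cmath

def dft(xs):
    """Plain discrete Fourier transform of a real sequence.

    Re-encodes a history of counter states: component k=0 is just the
    total sum, while higher k components encode how the state changes
    along the history.
    """
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(xs))
            for k in range(n)]

history = [0.0, 0.5, 1.0, 0.5]   # a toy history of counter states
spectrum = dft(history)
# same data, different coordinates: no information added or lost
assert abs(spectrum[0] - sum(history)) < 1e-9
```

In this reading, "creating non-commuting structures" and "creating dimensions" are both just recodings of the same underlying record, which is the point the next paragraph makes.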

So dimensional creation and the creation of non-commutative structures are really both just different types of recoding of the data. The selection of WHICH of these recodings are most stable is the challenge.

IF you start from the low-complexity end, one can use combinatorics and look at things.

Also, the cobordism-type development (histories of states by recursion) and the development of parallel non-commutative structures are in equilibrium, since both processes are constrained by the same complexity. Inflating higher dimensions is extremely complexity-demanding, but so is creating parallel non-commuting structures... and at the same time this entire structural complex is constantly challenged by its environment. If you picture an idea where ALL these possibilities are randomly tried, what emerges in evolution is the optimally fit decomposition into external dimensionality and internal non-commuting structures. There is some equilibrium condition we seek here. This is how I see it.

I'm working on this, and all along the guiding principles are: no ad hoc actions; all actions are rational random actions. The point is that what is just entropic dissipation in a simple microstructure will generate highly nontrivial actions when you combine it with higher dimensions (i.e. more than one :) and non-commuting structures.

/Fredrik
 
  • #72
jal said:
1. the universe is confined to 10^-15m

Since I think you expected some informal associations to spawn imagination, it's tempting to also draw the following picture of confinement and the origin of quark mass.

The most obvious reason why you never see something in isolation is that it's just one face of something bigger, right? There is always a flip side, and the two support each other.

Compare with the idea from ST that quarks are associated with the ends of the string. Then combine that with the idea above that the string index is the [0,1]. Confinement then seems related to the fact that it doesn't make sense to consider the upper limit of the state space unless there is a lower limit.

I mean, the only way to separate the limits is to split the index (i.e. SPLIT the STATE SPACE of the counter into TWO), which corresponds to creating a new pair of "ends". This is easier to understand once one sees that the string index is really just an index defined by the states of a counter. If the history of this counter for some reason weakens the support of the index in the middle states, that effectively creates two new ends, and even the slightest fluctuation and random deletion of data (mentioned previously) risks breaking the link. Either way, an isolated upper limit makes no sense without its lower limit.

I think the fact that quarks are never seen in isolation may make understanding their mass values easier. The origin of quark mass might then arise not one quark at a time, but in the bound quark systems. The bound system is created directly as a measure complex, and the quarks are just inseparable logical components of it.

The only way to really split them is by creating more of them.

I hope no one is too offended by this baloney; it is just another "mental image" that may make sense of the "counting picture" this thread is about. After all, it's a subtle thing to ask for the physical basis of counting. All these visions are circling in my head, but there is indeed enormous effort needed to develop this into a full-blown theory. Still, acquiring some intuition and abstraction models is, I think, good; that doesn't mean there is any reason to mix these visions up with the full model. It's perhaps, though, what it would take to UNDERSTAND such a model once it's on the table. At least that's how I see it.

/Fredrik
 
  • #73
“Since I think you expected some informal associations, to spawn imagination”

I’m an amateur compared to you.
:blushing:

“So to attach my envisioned construction to the standard big bang timeline, the starting point is somewhere around the Planck epoch. That early stage is where the "discrete picture" applies. By the time we get to quark formation, we first need to understand how the complexions separated out from gravity and how the continuum approximation is formed.”

... where the "discrete picture applies"

My understanding is that quarks are considered discrete.
If you make the assumption that discreteness originates at the Planck epoch, then you are obliged to consider densest packing (hex. or cubic), with the size of a dimension being reduced. (Not a new concept; string theory uses it.)

CERN is on the verge of giving us some hints on the discreteness of quarks, and maybe on discreteness in the perfect liquid.

Should discreteness be demonstrated in the perfect liquid, then my avatar would be a good visualization, and lattice, LQG, and string calculations should lead to a mathematical description of what could be happening and what could have happened in the beginning.

jal
 
  • #75
jal said:
I’m an amateur compared to you.

I'm definitely not a professional either; if I were, I would have made far more progress since I resumed this. The difference between trying to make progress in small time slots on weekends and nights, and being paid to spend all day on it, is gigantic. (Although of course most professionals don't spend all day on it either, as they often need to do part-time teaching etc.)

To look at the bright side of life, freedom from affiliation is also a strength, as it's easier to be faithful to your original ideas. Time is the only issue.

jal said:
My understanding is that quarks are considered discrete.
If you make the assumption that discreteness originates at the Planck epoch then you are obliged to consider densest packing, (hex. or cubic) with the size of a dimension being reduced, (Not a new concept. String uses that concept).

You seem to always come back to this picture of "perfect symmetry". I think you think in a different way: you seem to see the big bang from an external view, i.e. a perfect symmetry that is subsequently broken? Something like that? That is an external picture.

I argue that an internal observer would not SEE this perfect symmetry. The internal observer is just undecided about almost everything. An internal observer cannot infer a perfect symmetry; only an external observer can. This is the difference, I think, between considering conditions close to the big bang in a laboratory, where we DO have an external observer, and SCALING the theory back to the proto-observers that existed back then.

Of course, both perspectives are valid! I just think the latter perspective gives the simplest view (easiest to understand); this is the exploit I picture.

The quark masses, for example: the external inferences we have today are experimental. But a good "checkpoint" would be to see if relations between the masses (and mass I associate with complexity) can be postdicted. A wrong postdiction would kill the reconstruction.

From the inferential perspective, anything with mass is not elementary. This is why ALL mass needs to be explained. Just explaining 95% of all mass as confined energy still leaves us with 5%.

/Fredrik
 
Last edited:
  • #76
With the assumption of more than 3 spatial dimensions, the definition of a closed system must be expanded to include those other dimensions.
Would this imply redefining the role of the neutrino?
Does it take energy to open up a path to another dimension? Could neutrinos be that energy requirement?
What kinds of energies can come into our 3 space dimensions?
(dark energy? gravity? tachyons? virtual particles or quantum tunneling?)
---
http://en.wikipedia.org/wiki/Neutrino
Neutrino

Wolfgang Pauli theorized that an undetected particle was carrying away the observed difference between the energy, momentum, and angular momentum of the initial and final particles.
---
http://en.wikipedia.org/wiki/Conservation_of_energy
The law of conservation of energy is an empirical law of physics. It states that the total amount of energy in an isolated system remains constant over time (is said to be conserved over time). A consequence of this law is that energy can neither be created nor destroyed: it can only be transformed from one state to another. The only thing that can happen to energy in a closed system is that it can change form: for instance, chemical energy can become kinetic energy.
 
  • #77
Are you referring to the universe as a "closed system"?

FWIW, that's not how I see it. And more importantly, I don't think it's how an inside observer can possibly see it: I do not see how an inside observer can make the inference that the environment in which it lives is closed. What does that even mean? I simply can't imagine the inference. What I can imagine is an expectation or illusion that it's closed. But the stability of such illusions remains undecidable. And if I understand you right, you seek to use this as a hard constraint. That logic is not sound to me.

To think of the universe from the outside as something that is closed, expands, etc. is to me somewhat of a fallacy, due to applying to the universe the science we know applies to subsystems, where there always IS an effective external view. From the inside view, this external view is, as I see it, totally wiped out.

/Fredrik
 
  • #78
Are you referring to the universe as a "closed system"?
No. It is limited by the event horizon.
However, the universe of the proton, which is what is being considered, is closed/confined (10^-15 m).
 
Last edited:
  • #79
Ok, but I'm still not sure what you mean by closed. Even if one cannot isolate quarks without creating other quarks around them, the entire complex (say a proton or neutron) might be scalable. The origin and organization of information in the proton, and how the proton responds to external perturbations, is exactly what I think requires explanation. I cannot imagine using this as a starting point; then one has already missed some interesting steps.

jal said:
With the assumptions of more than 3 spatial dimensions, then the definition of a closed system must be expanded to include those other dimensions.

In the way I mentally picture the discrete complexion picture above, there is no god-given dimensionality at all. Different dimensionalities can exist, without changing the complexity, just by different orderings and groupings of the discreteness.

I do not have a _visual_ picture of this at all; my own picture is just an abstraction in terms of an information-processing/creating/storing observer that does a random walk in a black swamp. The only map he has is in his internal structure, acquired from the past. During equilibrium his internal map will not need revision, and we have a holographic connection. But many systems aren't in equilibrium; it's just a special case.

jal said:
Would this imply the redefining the role of a neutrino?
Does it take energy to open up a path to another dimension? Could neutrinos be that
energy requirement?
What are the kinds of energies would can come into our 3 space dimension?
( dark energy?, gravity?, tachyons?, virtual particles or quantum tunneling?)

These specific questions I can't yet connect to. It's too early for me, but I think at some point there will be a handle on this.

Personally, I picture some sort of unified quantum, from which the quanta of the various interactions branch off as more complex observers emerge (starting from some basic Planck view, or below that, what do I know).

So in this perspective, a proton is indeed already a very complex observer.

A simple observer might then be a single massless bit, or something fuzzy like that. So there would be a hierarchy starting from an almost trivial observer; then, as you let the complexity scale run, stable observer-complexes emerge along the way and serve as more complex building blocks for further, bigger structures. Somewhere in this hierarchy all the elementary particles must come up, or that's the idea.

And WITH THEM, implicit in their relations, also the selection of 4D spacetime.

/Fredrik
 
