What caused the physical laws we have?

  • Thread starter: AGlane
  • Tags: Laws, Physical
Summary
The discussion revolves around the origins and nature of physical laws, questioning whether they have always existed and if they can evolve. It highlights that while various theories exist, none have been conclusively proven, and the laws we observe today likely originated from the Big Bang. The conversation touches on the multi-universe theory, suggesting its proof hinges on the ability to communicate between universes. Additionally, it critiques the traditional approach in physics that assumes fixed laws without exploring their potential evolution or origin. Ultimately, the dialogue emphasizes the need for a deeper philosophical inquiry into the nature of these laws and their implications for our understanding of the universe.
  • #31


On the continuum:

The world, and indeed all systems, is formed by symmetry breaking, and so there is always a dichotomy. Discrete/continuous is one of those dichotomies. But in the systems science approach (which stresses mutuality, synergy, dyadicity) both aspects of a dichotomy must exist. It is not a binary choice; instead, both aspects will be fundamental.

The standard metaphysical position of physicists is monadic: the world must always be fundamentally a this or a that. And yet QM forced the dichotomistic view on scientists (such as Bohr, with his yin-yang complementarity talk). It is not either position or momentum, but both.

So in my view, a successful cosmological model would contain both the discrete and the continuous as fundamental.

In fact, this is exactly what we find. We have a local model in QM that describes the smallest grain of being and a global model of GR that describes the global continuity of spacetime.

There is this project in physics to break up GR and be left with only the little bits and pieces, the discrete atoms, of QM. It is called the search for quantum gravity. But a systems view says this cannot be possible or useful. The "real" theory would find both theories as its twin opposed limits - hierarchically speaking, its local and its global bounds (see Salthe's scalar hierarchy).

Yes, it should be possible to marry QM and GR - via their common dissolution into a vagueness. And this is why I like loop and spin network approaches: they are more a marriage of the two in a vagueness. But the monadic metaphysics demands that the globally continuous be broken into the locally discrete, and that cannot be possible.

To complete the story, the systems science and Peircean approach I am talking about always ends up with a triadic equilibrium structure as its stable outcome.

So where is the triad that emerges out of the dyad? Well, between the hyperbolic spacetime of QM and the hyperspherical spacetime of GR we find the flat spacetime of classical Newtonian physics.

The full triad is QM (as the discrete local small scale model), classical (as the emergent flat middle ground that forms between two extremes, by the mixing of the two extremes), and then GR (the model of the warped, closed, global boundary).

In the systems approach, everything is dynamic rather than static, a process rather than a structure. And so all three levels of the system are in action. The QM smallness is always collapsing (decohering to its smallest achievable scale), the GR largeness is always expanding (collapse becomes a problematic issue, is it actually permitted?), and the middle ground is always equilibrating (becoming as flat as possible).

You can see how this dynamic systems approach, which acknowledges the global scale, can remove some standard problems of cosmology. It explains why universes must expand - that is what systems must do to persist. It explains why middle grounds are flat - that is what the action of equilibration achieves.
 
  • #32


apeiron said:
You say you are seeking some concrete cosmological model. That is the maths you want to see. My interest is in concrete models of systems in general, of which cosmological ones would be an example.
Actually I am also looking for general models - general learning models. It is how I think most of the time. But probably my prime focus is physics.

The connection between learning models and physics is similar to the ideas of Ariel Caticha, Jaynes and others: that the laws of physics are somehow related to the logic of reasoning upon incomplete information. And if you put that into an evolving context, with feedback, that's what learning is. It's a game.

Within physics I am not specifically interested in cosmology alone. My main interest is the unification of forces, with special attention to the QM and GR domains, and to put all this in a general information-theoretic context.

What I mean by the quest for mathematics is that I want quantitative, predictive models, and that normally means mathematics. But I fully agree that the constructing principles can well be described by general reasoning and abstraction. Still, the ambition must be a computable model, in order to establish a utility.

apeiron said:
And it is early days. I take hierarchy theory to be mathematics. It is the geometry of systems, so to speak. But currently more at the descriptive than the calculating stage of development.

Yes, fair enough. The evolving approach in physics seems very young. So it's not surprising that much is open.

As I see it, there has been a trend to geometrize physics, but I instead wish to abstract it in a background-independent, information-theoretic framework. In that view, the physical interactions would be strongly related to rules of inference and evolutionary learning. The self-organisation is simply the constructive and self-preserving trait of learning and adapting.

Like you mentioned, the opposite would mean that observers failing to implement that would self-destruct and simply not populate our world except in a transient manner.

My first inspiration was various MaxEnt methods, along the lines of Caticha's. But I realized something is wrong with that. The founding key is how to "count evidence" and measure information, and here the continuum is confusing. Most MaxEnt methods fail in the sense that their choice of entropy measure is ambiguous: the choice of measure itself constitutes a background. Instead, my idea is that the definition and measure of information itself must evolve. Therefore, there is no universal measure of information. But this is exactly what a background-independent information theory means to me.
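To make the measure ambiguity concrete (a standard observation, added here only as an illustration): the differential entropy of a density changes under a change of variables, while the relative entropy does not, so any choice of coordinates or reference measure already acts as a kind of background,

$$
h[p] = -\int p(x)\ln p(x)\,dx \;\longrightarrow\; h[p] + \big\langle \ln|f'(x)| \big\rangle \quad\text{under } y=f(x),
\qquad
D_{\mathrm{KL}}(p\,\|\,q) = \int p(x)\ln\frac{p(x)}{q(x)}\,dx \quad\text{is invariant.}
$$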

/Fredrik
 
  • #33


Had a swift zip through Ariel Caticha's approach. It seems in the same ballpark as Peirce, Salthe, Grossberg and Rosen (a logician, hierarchy theorist, neural networker and theoretical biologist!).

For example, Peirce's cosmology was based on the idea of learning. A world develops by a kind of self-discovery so to speak.

Grossberg's is a learning-net approach. His ART systems predict the world they expect to see, then react to and learn from whatever happened that was not predicted. This seems exactly Caticha's approach of global understandings only changing in the light of specific constraints.

Rosen's modelling relations theory is likewise all about models that anticipate and so minimise their need to change.

Applying all this to cosmology and entropy (using what some would call a pansemiotic approach), we could say there is a general drive for the universe to dissipate all gradients, all unevenness in terms of energy differences. So an unevenness would be one of the specific constraints that the general model must "learn" from, in Caticha's view.

But the universe, as a context, is not just a passive learner but an active flattener of unevenness, a dissipator of gradients. So really, I would want to say it is the general that constrains the specific.

Although this can also be phrased as saying the universe, as an example of a learning system, is seeking to become unconstrained by its local specifics. It wants to smooth out things to the point they are no longer locally troubling.

In mind science, this would be called the flow experience. Where everything is so perfectly anticipated, the global state of prediction needs no updating, no corrective learning.

This is getting us a long way from conventional notions of entropy as "disorder". But then entropy modelling is in want of a broader path.

Max ent for a system would be about reaching equilibrium - a state where there is nothing further to learn, nothing specific that could alter the global balance.

And this would be the heat death universe, an infinite cold void with all energy gradients flattened asymptotically to the limit.
 
  • #34


apeiron said:
Had a swift zip through Ariel Caticha's approach. It seems in the same ballpark as Peirce, Salthe, Grossberg and Rosen (a logician, hierarchy theorist, neural networker and theoretical biologist!).

I'm glad you see the general connection!

apeiron said:
Max ent for a system would be about reaching equilibrium - a state where there is nothing further to learn, nothing specific that could alter the global balance.

And this would be the heat death universe, an infinite cold void with all energy gradients flattened asymptotically to the limit.

In my thinking, I bypass the general choice of entropy and just use combinatorics to define a discrete conditional probability, which is effectively a transition probability: the probability for a given change, given the present - a differential form of transition probability. And here there is a MaxEnt kind of principle, which is closely related to a more information-theoretic form of the least action principle.

In this, the KL divergence (relative entropy) appears naturally.
See http://en.wikipedia.org/wiki/Kullback-Leibler_divergence
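For reference, in its standard discrete form the relative entropy of P with respect to Q is

$$ D_{\mathrm{KL}}(P\,\|\,Q) = \sum_i p_i \ln\frac{p_i}{q_i}. $$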

One can say that the MaxEnt principle defines a differential structure that is valid until more information is received. This means that the entropy measure of the MaxEnt is deforming during the process, which means that there is no global heat death.
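A minimal toy sketch of what I mean by the measure deforming (just a standard minimum-relative-entropy update under a mean-value constraint, not the full combinatorial scheme I have in mind): each posterior becomes the reference measure for the next update, so the "entropy measure" itself keeps changing as information arrives.

```python
import math

def min_rel_entropy_update(q, f, target):
    """Return the distribution p that minimises D_KL(p || q) subject to
    sum_i p_i * f_i == target.  The solution has the exponential-tilt form
    p_i proportional to q_i * exp(-lam * f_i); lam is found by bisection."""
    def tilted_mean(lam):
        w = [qi * math.exp(-lam * fi) for qi, fi in zip(q, f)]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z

    lo, hi = -50.0, 50.0          # assume the target is attainable in this range
    for _ in range(100):          # tilted_mean(lam) is decreasing in lam
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) > target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [qi * math.exp(-lam * fi) for qi, fi in zip(q, f)]
    z = sum(w)
    return [wi / z for wi in w]

# Start from a uniform reference measure over four coarse "transition" outcomes.
q = [0.25, 0.25, 0.25, 0.25]
f = [0.0, 1.0, 2.0, 3.0]

# Each new piece of information (a constrained expectation) deforms the measure,
# and the deformed measure is the reference for the next update.
for target in (1.2, 1.8, 0.9):
    q = min_rel_entropy_update(q, f, target)
    print([round(p, 3) for p in q])
```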

But apart from this, as I see it the abstraction also contains a kind of data compression, which I picture as several related memory structures; in the relations between these structures, non-commutativity will naturally emerge.

Another constraining principle is complexity, which means that a system of memory structures is still constrained by a total complexity, which implies that the probability for excitation of some structures depends on the complexity scale. Here a hierarchy is also present.

These communicating non-commutative structures further complicate the dynamics, taking it away from the simple dissipative style. In my thinking, the dissipative mechanisms are defined only in a differential-change sense.

So I think the MaxEnt reasoning applies to the general case in the way flat spacetime relates to a curved global spacetime.

/Fredrik
 
  • #35


Yet another complication is that, since I do not picture infinite sequences of repeated experiments (this makes no sense, since the measures are only defined in a differential sense; it would mean you need infinite data before updating your opinion, which won't happen), different observers holding different information will in general disagree about their transition probabilities, because as I see it the utility of transition probabilities is that of odds, as a basis for your actions.

So the "inconsistent" or disagreeing odds of the same event, suggests that they all act according to their beliefs. This is very strange if you have the frequentists interpretation of probability in the sense that it is verified by an actual infinite repeat of the same situation. I say that there is no way you CAN repeat the same situation without deforming you history and thus updating your information.

The frequentist interpretation is only an abstraction; the more sensible interpretation is that a probability is a constraint on your actions. The scientific approval of which probabilities are "right" is simply the surviving strategy of action!
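A toy illustration of such disagreeing odds (just a standard Beta-Bernoulli sketch, not the combinatorial construction I have in mind): two observers of the same coin, each with a different recorded history, assign different odds to the very same next event, and each is rational relative to its own information.

```python
def predictive_prob_heads(heads, tails, a=1.0, b=1.0):
    """Posterior predictive P(next toss = heads) under a Beta(a, b) prior,
    after observing `heads` heads and `tails` tails (Laplace's rule when a=b=1)."""
    return (heads + a) / (heads + tails + a + b)

# Two observers of the same coin, with different recorded histories.
odds_A = predictive_prob_heads(heads=7, tails=3)   # 8/12 ~ 0.667
odds_B = predictive_prob_heads(heads=2, tails=8)   # 3/12 = 0.250

print(f"Observer A's odds for heads: {odds_A:.3f}")
print(f"Observer B's odds for heads: {odds_B:.3f}")
# Both act rationally on their own information, yet they disagree;
# neither has (nor ever can have) the infinite record a frequentist would demand.
```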

I think this is closely related to your thinking as well, and I think some form of probabilistic formalism will come in here, except I believe in a combinatorial one from the start, where there is a system of interconnected "state spaces" rather than one. The difference is that these represent data compression of the actual history, with the compression algorithms chosen not to accurately mimic the actual time history, but to be of maximum utility for the future of the host.

Pretty much like some researchers suspect the human brain works.

/Fredrik
 
  • #36


Fra said:
One can say that the MaxEnt principle defines a differential structure that is valid until more information is received. This means that the entropy measure of the MaxEnt is deforming during the process, which means that there is no global heat death.

Another way to see this is that I am suggesting a unification of the principle of maximum entropy and the principle of least action. It is, in fact, one and the same principle. The unification lies in information about states vs information about expected change. And if you insist on a transparent information picture, the state space is replaced by a space of differential changes, so the MaxEnt principle implies a kind of principle of least action.
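One way to make the formal parallel suggestive (only an analogy, not a derivation): both principles single out an exponential weight, with the constraint function in the MaxEnt case playing the role that the (Euclidean) action plays in the sum over histories,

$$
p(x) \propto q(x)\,e^{-\lambda F(x)} \quad \text{(minimum relative entropy with constraint } \langle F\rangle \text{)},
\qquad
P[x(t)] \propto e^{-S_E[x(t)]/\hbar} \quad \text{(Euclidean path weight)}.
$$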

/Fredrik
 
  • #37


I've also argued that least action and MEP are the same - one is about the dissipation of histories (as in Feynman sum over histories) and the other about the dissipation of energy gradients.
 
  • #38


To wrap up - if laws are viewed as emergent regularities of self-organising systems, then we need three parts to the story.

We need the initial conditions, the unformed potential from which the system arose.

We need the selection mechanism or process which acts on this potential to develop it.

Then we need the statistics of the outcome - the regularities that persist because a system has gone to equilibrium.

This is a triadic story - which Peirce as a logician was trying to articulate.

And which part needs to be modeled as the mathematics? It could be just the end state.

You could perhaps throw away the mechanism and the initial conditions as excess metaphysical baggage, and just use the emergent laws, or statistical regularities.
 
  • #39


apeiron said:
We need the initial conditions, the unformed potential from which the system arose.

We need the selection mechanism or process which acts on this potential to develop it.

Then we need the statistics of the outcome - the regularities that persist because a system has gone to equilibrium.

This is a triadic story - which Peirce as a logician was trying to articulate.

Philosophically we are close enough here.

I have structured the problem into three parts as well:

1) What is the logic of guessing?
This is the logic of choosing an action based upon the current information. Essentially it is a form of statistical inference.

2) What is the logic of correction?
This is the logic of updating your information in the light of new information: essentially Bayesian, but not quite. It is a sort of statistical inference of change, based on the differential structure of the former AND the fact that the state space in general can change, expand or shrink. The logic of correction is what restores information conservation.

3) The synthesis of the two: the logic of corrective guessing, which results in evolution (see the toy sketch below).
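As a toy sketch of this cycle (plain Bayesian updating on a fixed binary state space; it deliberately leaves out the hard part, the expanding and shrinking state space):

```python
import random

random.seed(0)
true_bias = 0.7                           # hidden regularity generating the data
belief = {"heads": 1.0, "tails": 1.0}     # pseudo-counts: a uniform prior
correct = 0

for step in range(50):
    # 1) Logic of guessing: act on the current odds.
    p_heads = belief["heads"] / (belief["heads"] + belief["tails"])
    guess = "heads" if p_heads >= 0.5 else "tails"

    # The environment answers.
    outcome = "heads" if random.random() < true_bias else "tails"
    if guess == outcome:
        correct += 1

    # 2) Logic of correction: update the belief in the light of the outcome.
    belief[outcome] += 1.0

# 3) The synthesis is the repeated guess-correct cycle: an evolved expectation.
p_heads = belief["heads"] / (belief["heads"] + belief["tails"])
print(f"correct guesses: {correct}/50, learned odds for heads: {p_heads:.2f}")
```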

I think I described parts of it very briefly in https://www.physicsforums.com/showthread.php?t=239414&page=3 some time ago.

apeiron said:
And which part needs to be modeled as the mathematics? It could be just the end state.

IMO, all steps will be modelled quantitatively. Also, all three steps intermingle; it's a cycle. But it will be a mix of conventional analytic expressions and algorithm-type models. As long as it's computable, it's quantitative. I have low hopes of finding neat analytical solutions to this; it's far too complicated. Computer simulation is more likely, where near certain solutions effective analytical models can be found as approximations.

About the initial conditions, as you say, I have come to the standpoint that in principle the model must work with ANY initial conditions. But that's an ambiguity that isn't possible to handle, so I have chosen to focus on the initial conditions in the low-complexity limit, simply because then the initial value problem becomes trivial, or vague as you would probably call it.

Essentially the state space expands, and thus, as for the initial conditions, when the state space (abstractly speaking) is small there simply is no landscape of initial conditions.

But I sure don't have any answers yet either, though I expect to find them.

But we seem to share a similar view on the nature of law in principle, and agree that it's a subtle concept.

/Fredrik
 
  • #40


I think the deep mathematical foundation here is to be found in "the geometry of symmetry breaking".

This is what underlies the modelling in all the different fields I've cited, from particle physics and cosmology, to hierarchy theory, Peircean semiotics, dissipative structure theory and neural networks.

So the target for mathematical modelling is the trajectory from max symmetry => max asymmetry.

Seeing as you want an information-based view of this, think of the trajectory from noise to signal.

Noise is the max symmetry initial conditions - the vagueness or pleni-potential. Then some operation (a symmetry breaking) isolates a signal. So we have the max asymmetry of an event standing proud of its (discarded) context. A 1 surrounded by a sea of 0s in Shannon's accounting system.
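A toy calculation of those two extremes in Shannon's terms (just an illustration): a symbol stream where 0 and 1 are equally likely carries maximal entropy per symbol, while a single 1 expected in a sea of 0s carries almost none.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

n = 1024
noise = [0.5, 0.5]                   # max symmetry: 0 and 1 equally likely
signal = [1.0 / n, 1.0 - 1.0 / n]    # max asymmetry: one 1 expected among n symbols

print(f"noise:  {shannon_entropy(noise):.3f} bits/symbol")   # 1.000
print(f"signal: {shannon_entropy(signal):.4f} bits/symbol")  # ~0.0112
```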

The max symmetry => max asymmetry trajectory would be a reformulation of the second law. It would be a higher level generalisation of the entropy of order => disorder story.

How does this square with cosmology?

In the beginning there was max symmetry, in that the universe was Planck scale. All distances and energies were vanilla - the same and undifferentiated. Then the symmetry broke. At heat death, it will be max broken.

Again the outcome seems vanilla as everywhere (every location) is max cold. All gradients dissipated. But actually the situation is max dichotomised in that the universe is also max large and flat. So the universe is max divided into largeness and smallness with as little as possible existing in-between. A crisp outcome. A phase transition completed. A max ent in terms of a division between macro and micro states.

Dark energy and other issues would have to be reconciled to this picture. But the point is that second law thinking would seem the target of a fruitful generalisation towards concrete maths models. Intermediate stepping stone ideas like disorder, entropy, information, would be generalised to the fully geometric idea of symmetry (and symmetry breaking, and thirdly, asymmetry).

I have already been locked once in these forums for daring to suggest there is indeed a geometry of asymmetry, so I will leave it there.
 
  • #41


apeiron said:
I think the deep mathematical foundation here is to be found in "the geometry of symmetry breaking".

...

Dark energy and other issues would have to be reconciled to this picture. But the point is that second law thinking would seem the target of a fruitful generalisation towards concrete maths models. Intermediate stepping stone ideas like disorder, entropy, information, would be generalised to the fully geometric idea of symmetry (and symmetry breaking, and thirdly, asymmetry).

Yes, we can finish the discussion here. Again, your general reasoning above makes sense to me, so I have no obvious objections, except of course that we both admit the exact mathematical model is not yet on the table. So we'll have to keep working on that.

So back to the questions of post 1.

- What caused the physical laws we have?
- Have these laws always been in existence?
- Is it true we may never know what the universe was like before the BB?
- Can a theory such as the multi-universe theory be proven?

I think we've now made our contribution to elaborating this, which is also IMO partly well in line with Smolin's reasoning that Marcus highlighted in post #4 when quoting his talk on the reality of time and the evolution of laws.

/Fredrik
 
