Evolving law, Smolin and others

  • Thread starter: Fra
  • Tags: Law
Fra
Smolin's Cosmological Natural Selection - the idea that the laws, or at least some parameters of physical law, evolve through reproducing universes, with some variation introduced at each generation of a new black hole/new universe - contains a basic evolutionary idea: after some evolution, a randomly picked universe is likely to be somewhat selected for its reproductive fitness.

This is closely related to some reasoning of my own, but I don't see why it is necessary to constrain the concept to black holes. In a certain sense, a black hole can be seen as an observer, who continually learns and consumes information. But what about all other observers? I had come to the conclusion that the same idea Smolin argues for should be even more natural when applied to a general observer, not only black holes.

This way, the logic of the normal flow of time is the same as the logic of the "flow of time" in the universe population. Both originate in the same principle. If we can extend the logic here to normal observers, then perhaps it's easier to try to fill in the major missing points: to understand exactly what happens during the bounce, and exactly how the variation of laws is described.

The idea here is that the "DNA of the laws of physics" should be encoded in the microstructure of its population, and thus variation among the population should in principle be thought of as variation of physical law. Then, instead of arguing as Smolin does, the uniformity of physical law as we see it could be explained by the fact that equilibration has been taking place for so long. The oddball particles and systems have long since dissolved.

I honestly don't see why insisting on the bounce as the only way to introduce variation is necessary. From my point of view, a black hole is just a very special (extreme) observer, but as I see it the logic must be present at all levels. This also has the advantage that you need not worry about hypothetical collections of "other universes", because the logic outlined may be played out in front of our eyes in this same universe.

Anyway, as Smolin notes, I think there could be plenty of ways to falsify this once it's more developed.

Has anyone, Smolin or anyone else, taken the idea in that direction? I.e. extended the reasoning beyond the black hole bounce, and brought the principle into unification with the ordinary flow of time within a single universe?

/Fredrik
 
Fra said:
Smolin's Cosmological Natural Selection - the idea that the laws, or at least some parameters of physical law, evolve through reproducing universes, with some variation introduced at each generation of a new black hole/new universe - contains a basic evolutionary idea: after some evolution, a randomly picked universe is likely to be somewhat selected for its reproductive fitness...


I honestly don't see why insisting on the bounce as the only way to introduce variation is necessary...

Your paraphrase omits an important point about the black hole bounce reproduction mechanism---its essential role in making the conjecture scientific.

For the conjecture to be science, it must be empirically falsifiable. The conjecture that universe regions reproduce via black holes is clearly testable, and is being tested as we speak.

To discredit the hypothesis, all one would need to do is figure out some small mutation of the standard parameters of physics and cosmology which would have made the universe more reproductively efficient---that is which would have caused more stars to form and collapse into black holes.

Suppose one could imagine, for example, changing particle parameters slightly in a way that makes neutron stars less stable and more subject to collapse.
And suppose one could do that without some undesirable side effect (like eliminating an element from the periodic table that is essential for efficient star formation) then one would have shown that our region is sub-optimal. (Not at a fixed point of the evolutionary flow.)

Smolin didn't propose the bounce-evolution hypothesis because he liked black holes, or because he liked the bounce idea---those are secondary. The point is we can observe neutron stars and black holes and estimate masses and abundances. And we understand their history and formation well enough to get a handle on what change in fundamental constants might make them more abundant.

If you want to think up an alternate reproduction mechanism, and propose an alternative hypothesis, that's excellent as long as you can meet a basic requirement:
You have to be able to analyze how the parameters of the standard models of physics and cosmology affect the abundance of that alternative mechanism.

Because that's how you would test the hypothesis. By seeing whether the standard parameters are optimal for a prolific region---and thus at a fixed point of the evolution flow---or whether they are suboptimal.
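For illustration only, that optimality test can be sketched in a few lines of Python. The fitness function here is invented (it has nothing to do with the real astrophysical black-hole yield); the point is just the logic: if any small mutation of the parameters improves fitness, the parameters are not at a fixed point of the evolutionary flow.

```python
import random

def fitness(p):
    """Toy stand-in for the black-hole yield F(p). The real F would come
    from stellar astrophysics; this invented bump peaks at p = (1.0, 2.0)."""
    return -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)

def is_local_optimum(p, eps=1e-3, trials=200):
    """CNS-style falsification test: if any small random mutation of the
    parameters improves fitness, p is NOT at a fixed point."""
    f0 = fitness(p)
    for _ in range(trials):
        q = [x + random.uniform(-eps, eps) for x in p]
        if fitness(q) > f0:
            return False
    return True

print(is_local_optimum([1.0, 2.0]))  # at the invented peak: no mutation helps -> True
print(is_local_optimum([1.5, 2.0]))  # off-peak: small mutations improve fitness -> False
```

Finding even one improving mutation of the real standard-model parameters would play the role of the `False` case above: it would show our region is sub-optimal.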
 
grosquet said:
Fredrik:
It would be interesting to read your comments on the Scientific American article, "Was Einstein Wrong?: A Quantum Threat to Special Relativity" at http://www.sciam.com/article.cfm?id=was-einstein-wrong-about-relativity

The title "Was Einstein wrong" sounds like something you've heard before ;) but I'll skim this and see what it is, got a tub-slot coming up!

/Fredrik
 
marcus said:
Your paraphrase omits an important point about the black hole bounce reproduction mechanism---its essential role in making the conjecture scientific.

For the conjecture to be science, it must be empirically falsifiable.
...

Suppose one could imagine, for example, changing particle parameters slightly in a way that makes neutron stars less stable and more subject to collapse.
And suppose one could do that without some undesirable side effect (like eliminating an element from the periodic table that is essential for efficient star formation) then one would have shown that our region is sub-optimal. (Not at a fixed point of the evolutionary flow.)
...

Because that's how you would test the hypothesis. By seeing whether the standard parameters are optimal for a prolific region---and thus at a fixed point of the evolution flow---or whether they are suboptimal.

Yes, you're absolutely right of course. I didn't mean to make a proper summary, because I didn't mean to attack _that logic_, rather his application of it - that he didn't take it far enough :)

Anyway, I thought I captured the falsification point with

"after some evolution, a randomly picked universe is likely to be somewhat selected for its reproductive fitness"

"Reproductive fitness" is in Smolin's idea the ability to produce a lot of black holes. So, given the conventional understanding of how black holes are formed, and how that process depends on the parameters, one can estimate whether a significant improvement in black hole formation would be possible with a slight change of parameters - which would itself make CNS more unlikely.

I don't have much to say about that. My idea does not compete with this; it is not either/or. I just see it as taking the idea one step further, because I think the basic logic is good. I think selection and evolution can take place even in between the bounces, by similar reasoning.

As I see it, in the analogy Smolin presents, one question is exactly how and where the actual "DNA" of physical law is stored, and how it "copies".

/Fredrik
 
I think the motivation for Smolin's natural selection cosmology is best understood when you contrast it with other multiverse scenarios, such as eternal inflation. Most multiverse scenarios conclude that universes are being created all the time with random properties. If this is true, then we would expect our universe to have random properties. This is not really a prediction. What Smolin was trying to do was modify the multiverse scenario such that some kind of universe would be preferred over some kind of other universe in the distribution of produced universes, so that our expectations if we assume a multiverse will be in some way different than our expectations if we do not.

So given this goal, the function of the black hole "bounces" is to provide a mechanism for certain kinds of universes outnumbering the other universes. If new universes come from black hole bounces, then the new universes will have some predictable set of values (i.e. values close to those of the parent universe) and not just random values.

In other words there is no reason to expect the new universe creation to be occurring through black holes, it is just that we hope it is because if that is true then that would be convenient for us (because it would make our scientific theories possible to test)...

Fra said:
As I see it, in the analogy Smolin presents, one question is exactly how and where the actual "DNA" of physical law is stored, and how it "copies".
So if you look on the arXiv, this is the most recent thing Smolin's published about the natural selection idea. If you look at the overview he offers, he explains that the "DNA", if you want to call it that, of the universe would simply be its "dimensionless parameters", that is, things like the 20 parameters of the standard model.

The dimensionless parameters p_new of each new universe differ, on average, by a
small random change from those of its immediate ancestor. Small here means small
with respect to the change that would be required to significantly change F(p).
[F(p) is the universal 'fitness function']

He does not explain what mechanism causes the parameters on the other side of a black hole bounce to vary, or what constrains that variance to be "small". The above paragraph is offered as one of three "hypotheses" that the cosmological NS theory depends on; in other words, he does not include an explanation for how the "copying with errors" of the dimensionless constants occurs, he simply asserts it as an assumption of the theory which some other, external theory would have to provide an explanation for. He later discusses the question of finding that explanation:

The hypothesis that the parameters p change, on average by small random amounts,
should be ultimately grounded in fundamental physics. We note that this is compatible
with string theory, in the sense that there are a great many string vacua, which likely
populate the space of low energy parameters well. It is plausible that when a region of
the universe is squeezed to Planck densities and heated to Planck temperatures, phase
transitions may occur leading to a transition from one string vacua to another. But there
have so far been no detailed studies of these processes which would check the hypothesis
that the change in each generation is small.

One study of a bouncing cosmology, in quantum gravity, also lends support to the
hypothesis that the parameters change in each bounce[48].
[48] is this paper.
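Purely as an illustration of how those hypotheses fit together, here is a toy simulation (my own sketch, not anything from Smolin's paper, and with an invented one-parameter fitness function F(p)): universes reproduce in proportion to F(p), and each offspring inherits its ancestor's parameter plus a small random change.

```python
import random

def F(p):
    """Invented toy fitness: 'black-hole yield', peaked at p = 0.
    A real F(p) would have to come from stellar astrophysics."""
    return max(0.0, 1.0 - p * p)

def evolve(pop, generations=300, sigma=0.02):
    """Each generation, universes reproduce in proportion to F(p); each
    offspring's parameter differs from its ancestor's by a small random
    amount (the 'small mutation' hypothesis)."""
    for _ in range(generations):
        weights = [F(p) for p in pop]
        pop = [random.choices(pop, weights=weights)[0] + random.gauss(0, sigma)
               for _ in pop]
    return pop

random.seed(0)
pop = [random.uniform(0.6, 0.9) for _ in range(300)]  # start far from the peak
f_before = sum(F(p) for p in pop) / len(pop)
pop = evolve(pop)
f_after = sum(F(p) for p in pop) / len(pop)
print(f_after > f_before)  # the population drifts toward higher fitness
```

The prediction that makes this testable is visible even in the toy version: after many generations the surviving population sits near a fitness peak, so finding that our own parameters are demonstrably off-peak would count against the hypothesis.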
 
grosquet said:
Fredrik:
It would be interesting to read your comments on the Scientific American article, "Was Einstein Wrong?: A Quantum Threat to Special Relativity" at http://www.sciam.com/article.cfm?id=was-einstein-wrong-about-relativity

That seems only very remotely connected to the thread - it discusses SR and QM, and doesn't relate to the evolution of law?

I can see some remote links to the physical basis of the wavefunction and evolution, but that's not discussed in that paper. I expect SR and GR to be emergent from a not yet known deeper principle. Those ideas are "radical" enough that a parallel discussion of the problems within the original EPR context is moderately interesting IMO.

Did you see any connections between that paper and evolution?

/Fredrik
 
Fra said:
Anyway, I thought I captured the falsification point with
"after some evolution, a randomly picked universe is likely to be somewhat selected for its reproductive fitness"

You are right. Your paraphrase was clear on that point. I didn't read carefully enough.
I always want to emphasize that point, because empirical testing is so essential - as the contrast that Coin draws illustrates.
 
Thanks for your comments Coin.

Coin said:
I think the motivation for Smolin's natural selection cosmology is best understood when you contrast it with other multiverse scenarios, such as eternal inflation. Most multiverse scenarios conclude that universes are being created all the time with random properties. If this is true, then we would expect our universe to have random properties. This is not really a prediction. What Smolin was trying to do was modify the multiverse scenario such that some kind of universe would be preferred over some kind of other universe in the distribution of produced universes, so that our expectations if we assume a multiverse will be in some way different than our expectations if we do not.

Perhaps I'm just coming from a different line of reasoning. I have never considered choosing between different multiverse theories. I was attracted to the evolution idea, not the multiverse part (that's the part I don't like). The notion of a multiverse often implies some kind of strange external view. I'm trying to find an intrinsic view.

(My point/hope here was that I saw a way to implement an evolutionary idea but toss the multiverse talk, and combine it with a more explicit idea of how the DNA itself evolved.)

To me the evolution concept has a specific purpose - it's the solution to the problem of infinite regress appearing in the problem of induction. Or rather, the infinite regress can be thought of as an ongoing computation, and rather than seeing this as an obstacle to analysis, I give it a physical interpretation and conjecture that it's simply the evolution we also call time evolution in the short perspective.

The rough idea I was after is that observers' "opinions" are manifestations of physical law, and that their evolution - selection for the microstructure of observers in the population - is a sort of evolving DNA of physical law that goes on all the time. The selection mechanism is provided by the local environment: a system with a totally twisted "opinion" of physical law would not survive; it would have a very short lifetime.

So there would be a collective self-stabilisation, and once this is worked out, falsification should be possible in that this logic alone should have some preferred first emergent structures. Ideally these structures and their interactions would correspond to the standard model we know. If a different structure were predicted, the theory would be unlikely to be true.

Coin said:
So given this goal, the function of the black hole "bounces" is to provide a mechanism for certain kinds of universes outnumbering the other universes. If new universes come from black hole bounces, then the new universes will have some predictable set of values (i.e. values close to those of the parent universe) and not just random values.

As I picture the alternative, the source of possible variation is simply the uncertainty. Variation occurs around the expectation (the "parent"), so that very large mutations are unlikely. So it would not be random. Any structure that isn't fit to maintain stability in its interaction with the environment will eventually be destroyed.

Instead of spawning baby universes, I picture populating the universe with observers that share one's opinion of physical law as the way to "propagate the DNA". In a way, one system induces its DNA of physical law into its own environment simply by acting as per a particular logic (coded by the DNA), and this puts a selective pressure on the environment to negotiate. So the selection is a mutual pressure between two interacting systems.

Another analogy: simply by interacting and talking to you right now, we are exerting a selective pressure on each other. As I see it, each communication is a kind of subtle negotiation, whether we see it or not.

This could be combined with ideas of the origin of inertia, since as I like to see it, the information capacity of the observers is closely related to inertia. If a "gene", if we put it like that, has a certain "count" in its favour, that count is the inertia of the gene; but since there is limited capacity, it competes with other genes, and a gene that isn't reinforced will eventually be lost. New genes appear by unpredictable mutations, and once a fortunate mutation appears it's likely to preserve itself.
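The gene-count picture above resembles a fixed-capacity urn process (a Moran-type model with innovation). The following sketch is purely my own illustration of that analogy, with all details invented: genes are copied in proportion to their count, a rare mutation introduces a brand-new gene, and limited capacity means every new copy evicts something else.

```python
import random
from collections import Counter

def step(urn, mutation_rate=0.01):
    """One update of a fixed-capacity 'gene' population: copy a gene
    chosen uniformly from the urn (so frequent genes - those with more
    'inertia' - are copied more often), occasionally mutate to a
    never-seen gene, and overwrite a random slot so capacity is constant."""
    if random.random() < mutation_rate:
        new = max(urn) + 1        # a brand-new gene variant
    else:
        new = random.choice(urn)  # reinforcement: high-count genes win
    urn[random.randrange(len(urn))] = new
    return urn

random.seed(1)
urn = list(range(20))             # 20 distinct genes, one copy each
for _ in range(5000):
    step(urn)
counts = Counter(urn)
# after many steps, a few reinforced genes dominate; most originals are lost
print(len(counts) < 20)
```

Unreinforced genes drift out exactly as described above, while a lucky early duplication tends to entrench itself.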

That might not be readable, but I was hoping that Smolin himself, or someone else inspired by the evolutionary idea (not by the multiverse thing), has developed this. This is pretty much what I am slowly trying to work on myself, and the general reasoning with evolution - variation and selection - is so close to what I think is the right way, for other reasons, that I couldn't help hoping for more.

I think Smolin's reasoning is interesting. But in order to take it further, I think the origin of what I'd like to call the microstructure of the DNA (the parameter space itself of the standard model) should also be described by the same logic. Otherwise the variation and selection only take place on a fixed DNA placeholder structure/parameter space, so to speak, which shouldn't be necessary if the essence of the evolutionary idea is taken all the way.

Perhaps I'll have to keep an eye out for this in upcoming papers.

/Fredrik
 
  • #10
Fra said:
selection for the microstructure of observers in the population is a sort of evolving DNA of physical law that goes on all the time, and the selection mechanism is provided by the local environment: a system with a totally twisted "opinion" of physical law would not survive; it would have a very short lifetime.

In this view, the problem is then, of course, how to make a prediction. What prevents any environment from simply preserving itself and thus being equally likely, as per some anthropic reasoning?

Here I pictured the trick being to consider how our current state of the universe evolved from a state with no observers (we might as well call it the big bang, though whether that state was unique I am not sure - and I am not even sure it matters; different initial conditions might possibly evolve into the same equilibrium state). Then all the intrinsic pictures are very simple, simply because every intrinsic picture is constrained by the complexity of the observers, and with no observers yet, the complexity of observed physical law is strongly constrained (this is why I talked about constructing intrinsic measures by scaling the complexity of the view in some past threads).

I have not worked this out yet, because many problems are related, but the VISION is that this trick of the SIMPLE, strongly intrinsic view, evolving into more complex views, will come with a probability measure over the possible emergent structures. The whole structure formation would be guided by this relational selective pressure driven by self-preservation. Maybe one can also call these resonance structures, as they would correspond to a structure that, when duplicating itself into the environment, produces a stable/consistent state that will not collapse. Then there would probably be a hierarchy of such structures, ordered by increasing complexity (probably corresponding to system inertia).

This should also have built in an idea of the origin of inertia, as a self-organising, self-preserving measure of its own environment.

If this works, it might very well be that Smolin's black hole picture is still right - this doesn't per se contradict it - but hopefully it would be a more constructive angle. I think there must be a deeper place to attach the evolutionary reasoning, along with the bounce idea.

/Fredrik
 
  • #11
Coin, the writings you dug up answer the questions about Smolin's reasoning. It seems he is himself looking for that deeper fundamental physics. Then it makes sense. Now we just need to find it.

Coin said:
the "DNA", if you want to call it that, of the universe would simply be its "dimensionless parameters", that is things like the 20 parameters of the standard model.
...

But then he basically has a "microstructure of the DNA" and ponders evolution of its microstate (parameter settings); but if you really take the evolutionary idea seriously, one would expect the microstructure itself to also be explained.

I guess a very good motivation for stopping there is still that going further seems very difficult, and meanwhile you at least get started. I think it's just that when you get a part, you want it all :)


Coin said:
He does not explain what mechanism causes the parameters on the other side of a black hole bounce to vary, or what constrains that variance to be "small".

My expectation, and personal hypothesis that I think is in line with the logic, is that the mechanism that keeps the variance small is related to inertia. Each parameter has an inertia. But for that to even make sense, a model is needed for the physical basis of this. Variation is unavoidable, due to uncertainty, but at the same time a certain stability should be guaranteed by the inertia of the entire construct.

To understand this, I expect no lesser prerequisite than understanding the origin of inertia.

Coin said:
The hypothesis that the parameters p change, on average by small random amounts,
should be ultimately grounded in fundamental physics. We note that this is compatible
with string theory, in the sense that there are a great many string vacua, which likely
populate the space of low energy parameters well.

Even though I see this, the prime problem of string theory, from the point of view of evolutionary reasoning, is that the strings themselves are a fixed microstructure that is put in as a conjecture. I think this leap in reasoning is possibly why you end up with a landscape. There is no way I can see a string qualifying as a fundamental starting point. Strings may still play a role, as I see it; there is a decent chance that string-like structures will emerge as primordial self-preserving structures way down the complexity scale. The reason I fail to consider strings fundamental is that they are pictured as continuum structures, relating to an external spacetime (part of the microstructure).

If instead strings could be understood to emerge from a deeper theory, I think the landscape would also narrow.

/Fredrik
 
  • #12
grosquet said:
Fredrik:
It would be interesting to read your comments on the Scientific American article, "Was Einstein Wrong?: A Quantum Threat to Special Relativity" at http://www.sciam.com/article.cfm?id=was-einstein-wrong-about-relativity

Very good article.

In my humble opinion, one can forget about the last two proposals (Tumulka and Albert). But this is, of course, not a criticism of the article, but a personal preference. And my preference is a quite simple one: I prefer the conceptually simplest theory, which is clearly the de Broglie-Bohm pilot wave theory. That's not because I'm very much prejudiced in favour of classical determinism - if there would be good evidence, I would throw it away without bothering much. But there simply is no good evidence.

Thank you for the link.

Ilja
 
  • #13
Ilja said:
And my preference is a quite simple one: I prefer the conceptually simplest theory

I have a similar preference. But it seems to me that simplicity is relative. I have never yet seen a universal measure of simplicity :)

To me simplicity means low degree of speculation. Simplest possible theory to me is the one which adds a minimum of assumptions and speculations. This relates simplicity to measures of certainty of information.

In that setting, classical physics is fairly speculative. The whole assumption of the existence of absolute references is, to me, quite speculative. When you remove the speculations, you also lose definiteness, but it gets simpler in the sense of being less speculative. This is how probabilistic models can be less definite, but also "simpler" in my mind.

Edit: this is the sense in which I personally think the various quantum weirdness, and actions based on expectations and information at hand rather than some realism, are simpler. It is somehow more natural. A system's "simplest or smallest action" seems to be the one that adds a minimum of speculation, and is thus by construction the most probable one.

/Fredrik
 
  • #14
Fredrik –

I think what you’re suggesting makes sense, or at least it sounds a bit like a train of thought I’ve been pursuing, in connection with Carlo Rovelli’s Relational Quantum Mechanics. (I think Lee Smolin might also have been involved in developing that approach to QM, but he doesn’t seem to have made a connection between that and his idea of an evolutionary explanation for fundamental physics.)

I think the key issue is whether we can understand the basic functionality of “observing” or “measuring” as the kind of thing that can evolve. In biology, the basic functionality is that of organisms making multiple copies of themselves. Since we know evolution is possible on that basis, Smolin supposes that universes might also evolve by making copies of themselves. But I agree with you that we can imagine an evolutionary process that’s much closer to home, connected with “the ordinary flow of time” in this universe.

In Relational QM, as you know, any local physical system counts as an “observer”. The fundamental physical process is the measurement of one system S by another system O, or equivalently, the communication of information between the two systems. In any case, the result is an agreement between S and O on certain information (“the state of S”) that both S and O can then potentially communicate on to other systems.

The thing is, for something like this to happen, it’s not enough that S and O just interact. Most interaction between things remains merely “virtual”, and communicates no definite information. For O to determine something about S, O has to be “set up” in a certain way – so its interaction with S can have a definite outcome, i.e. make some specific difference to O’s subsequent interactions with the world. Likewise S has to be ”prepared” in such a way that an interaction with O can leave it in a specific state, that makes a difference to S’s subsequent interactions.

So we can envision the world as something like a web of interactions that communicate information between “measurement situations”. In order for a system to be “set up” to measure something, or “prepared” to be measured, there has to be other background information in its environment, communicated to it from previous measurement events.

The information communicated in each interaction would be merely random. But as you suggest, only information that turns out to be consistent with other information could help make a coherent background-context for future measurements. What we call “the flow of time” would have to do with the way information determined in one situation gets passed on as background information to “set up” other measurement situations.

Is that the sort of thing you have in mind? If we’re thinking about a web of interactions that define information in the context of other information, it seems reasonable that certain basic structures might evolve as a kind of “DNA” – i.e. base-level information that has to be agreed on in every interaction, in order for it to constitute a measurement that contributes to the coherent body of “the real world.”

Conrad
 
  • #15
Hello Conrad, and welcome to the forum! It looks like you connect unexpectedly well, given that this seems to be your first post :)

ConradDJ said:
In any case, the result is an agreement between S and O on certain information (“the state of S”) that both S and O can then potentially communicate on to other systems.

Emergent agreements is exactly how I think of it too, and the very process of negotiation is the physical interaction itself. So by trying to understand, in the general case, the logic of negotiation and feedback due to mutual selection pressure arising from disagreement, we can hopefully also understand the deep information basis of physical interactions.

The early parts of the reasoning in Rovelli's RQM are brilliant IMHO. However, I do not like how he develops it, in particular how he explicitly avoids the physical basis of probability; nevertheless his initial reasoning makes it one of the papers most worth reading. A brilliant philosophical beginning that IMO still awaits an equally brilliant completion.

ConradDJ said:
The thing is, for something like this to happen, it’s not enough that S and O just interact. Most interaction between things remains merely “virtual”, and communicates no definite information. For O to determine something about S, O has to be “set up” in a certain way – so its interaction with S can have a definite outcome, i.e. make some specific difference to O’s subsequent interactions with the world. Likewise S has to be ”prepared” in such a way that an interaction with O can leave it in a specific state, that makes a difference to S’s subsequent interactions.

So we can envision the world as something like a web of interactions that communicate information between “measurement situations”. In order for a system to be “set up” to measure something, or “prepared” to be measured, there has to be other background information in its environment, communicated to it from previous measurement events.

The information communicated in each interaction would be merely random. But as you suggest, only information that turns out to be consistent with other information could help make a coherent background-context for future measurements. What we call “the flow of time” would have to do with the way information determined in one situation gets passed on as background information to “set up” other measurement situations.

I think your reasoning is remarkably well tuned to mine, given that this is your first post here. What you write goes well in line with my reasoning.

Smolin's black hole stuff seems, in the light of the reasoning you seem to share, to be a special case at best. I think time evolution itself must be seen in the same evolutionary sense. I don't doubt for a second that it's a viable way forward; it's so plausible, and presents the most consistent way of reasoning I'm aware of. But it still contains many complex problems.

Are you aware of anyone who has published anything along this line of reasoning? I am somewhat puzzled why more progress hasn't been made along these lines (or why I haven't found it). Perhaps not enough people have paid attention to this in the past.

ConradDJ said:
Is that the sort of thing you have in mind? If we’re thinking about a web of interactions that define information in the context of other information, it seems reasonable that certain basic structures might evolve as a kind of “DNA” – i.e. base-level information that has to be agreed on in every interaction, in order for it to constitute a measurement that contributes to the coherent body of “the real world.”

Yes, something like that is definitely what I mean! Contextual information without universal references. The only common references existing are emergent from mutual interactions. It may seem mad, but as you seem to agree, there is a good chance that even such a very background-independent logic may produce locally definite patterns of self-consistent structures.

/Fredrik
 
  • #16
ConradDJ said:
The information communicated in each interaction would be merely random. But as you suggest, only information that turns out to be consistent with other information could help make a coherent background-context for future measurements. What we call “the flow of time” would have to do with the way information determined in one situation gets passed on as background information to “set up” other measurement situations.

Is that the sort of thing you have in mind? If we’re thinking about a web of interactions that define information in the context of other information, it seems reasonable that certain basic structures might evolve as a kind of “DNA” – i.e. base-level information that has to be agreed on in every interaction, in order for it to constitute a measurement that contributes to the coherent body of “the real world.”

The problem used to be that in this fully contextual and relative thinking, you have nothing to start with. So where do you even start?

I've come to the conclusion that in principle any starting point would be valid, and to apply the reasoning forward from there. But that gives me an infinite number of starting points, and it also probably makes the discovery process unnecessarily complicated. So I've decided to start at the zero end of the complexity scale. I.e. how does a very light observer interact? The exploit is that a simple observer cannot even encode arbitrarily complex interactions. So I start like that. I use a qualifier of complexity: the number of distinguishable states. There are two such state spaces, the internal space and the communication channel state. If I can find a logic to that: how does that entire logic scale as the observer acquires higher mass (complexity), and what does the mass acquisition process look like?

So I have at least 3 problems.

1. To understand the basic logic in the simplest cases
2. How does this "logic" scale with mass; certainly new complex interactions and structures will emerge
3. How is mass formed (or as I think of it, how is confidence or inertia formed; how can a light observer grow massive)?

I have some hints of ideas on all of these, but the problem is that they are related, so I am trying to make parallel progress. The exact scaling of the logic is intimately related to the origin (growth) of mass. In fact I think the two points are almost one and the same.

About time: I picture time as a random walk, guided by a constantly evolving internal map in the observer. Mass acquisition corresponds to a more "massive" and thus more confident map. A light map has poor resolution and is more frequently wrong (less reliable).

So far my starting points have been combinatorial, and the selective pressure and evolution are about understanding how such constructed structures are either supported or destroyed in interactions; structures that encode the wrong logic or picture will not be stable in the environment. That's the logic of the selection. So I think of the DNA as the "logic" that constructs the microstructure and its evolution.
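For whatever it's worth, here is a deliberately crude toy sketch (entirely my own illustration, not drawn from any published model) of the random-walk-guided-by-a-map picture above. The "map" is just two counters over a binary environment; the more counts the map has accumulated (its "mass"), the more slowly its relative proportions, and hence its guesses, change:

```python
import random

random.seed(0)

def tick(map_counts, world_bias):
    """One tick of the walk: the observer guesses the next step from its
    internal map, the environment moves, and the map gets feedback.  The
    heavier (more counted) the map, the less a single tick changes it."""
    # greedy guess from the remembered counts (the observer's 'map')
    guess = +1 if map_counts[+1] >= map_counts[-1] else -1
    # the environment's actual step, drawn with a fixed hidden bias
    actual = +1 if random.random() < world_bias else -1
    map_counts[actual] += 1          # feedback updates the map
    return guess == actual

# a maximally 'light' observer: two distinguishable states, a flat map
map_counts = {+1: 1, -1: 1}
hits = sum(tick(map_counts, 0.8) for _ in range(2000))
print(round(hits / 2000, 2))  # hit rate approaches the environment's bias
```

A light map (few counts) is frequently wrong; as counts accumulate the map becomes "massive" and its guesses stabilize, which is roughly the sense in which I mean confidence acts like inertia.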

/Fredrik
 
Last edited:
  • #17
Fra said:
I have a similar preference. But it seems to me that simplicity is relative. I have never yet seen a universal measure of simplicity :)

To me simplicity means a low degree of speculation. The simplest possible theory, to me, is the one which adds a minimum of assumptions and speculations. This relates simplicity to measures of certainty of information.

I would clearly disagree. Certainty of a theory has nothing to do with simplicity. These are simply very different things.
 
  • #18
My guess:
LQG is wrong, because spin-networks are a correct idea, but the 'quantisation' into the Planck scale is wrong and superficial.
Superficial because one unexplained phenomenon is explained in terms of another unexplained phenomenon.
Relativity doesn't work with 'nodes' of spacetime. This is due to certain angles that shift timelike and spacelike behaviour into each other. That doesn't work with nodes, because an angle in spacetime is a velocity with respect to an observer. Since we can't have 'steps in velocity', we can't have nodes of spacetime.
And: the observed sky is crystal-clear for billions of light years; any discrete structure of spacetime would blur the images of distant stars a bit, which isn't observed.
 
  • #19
thomheg said:
My guess:
LQG is wrong, because spin-networks are a correct idea, but the 'quantisation' into the Planck scale is wrong and superficial.
Superficial because one unexplained phenomenon is explained in terms of another unexplained phenomenon.
Relativity doesn't work with 'nodes' of spacetime. This is due to certain angles that shift timelike and spacelike behaviour into each other. That doesn't work with nodes, because an

I started to read up on Rovelli's LQG thinking, and even though I am very impressed by his initial analysis and reflections, when he later moves on to formulating the new ideas, I don't think they follow the same reasoning I smelled in some of his more reflective papers.

In my mind, the relativity transformations should be emergent along with spacetime as systems interact and evolve. The most serious objection is that while he has interesting interpretations in the RQM paper, in the end it effectively ends up still being the same quantum formalism. I personally felt that his view should imply and suggest a modification of the formalism itself.

These are some very basic reasons why I think LQG, as I understand it, isn't ambitious enough. I sense a higher ambition in the early parts of Rovelli's reasoning; that's what was surprising to me.

At first I thought that his spin networks were best interpreted as the same "microstructures" I refer to and am pondering, but I soon realized that while that might be a possibility, it's not what Rovelli is doing, thus I had no help from his formalism. I had to go back to where, in my opinion, it went wrong.

His reasoning that the only way to compare observations is by observer-observer communication is great, and that such interactions boil down to nothing but normal physical interactions is great, but then he concludes that "these interactions must be treated quantum mechanically" and then he throws in the same old QM. That's the step that's a gigantic leap to me. My conclusion from that point would instead be that the quantum mechanical formalism we know is also just emergent. And the emergence process is an evolutionary one.

Somehow that's where my reasoning starts off; I was expecting a different turn to the good reasoning he started.

/Fredrik
 
  • #20
Fra said:
At first I thought that his spin networks were best interpreted as the same "microstructures" I refer to and am pondering, but I soon realized that while that might be a possibility, it's not what Rovelli is doing, thus I had no help from his formalism. I had to go back to where, in my opinion, it went wrong.

His reasoning that the only way to compare observations is by observer-observer communication is great, and that such interactions boil down to nothing but normal physical interactions is great, but then he concludes that "these interactions must be treated quantum mechanically" and then he throws in the same old QM.

/Fredrik
In my own model I use something that you could call a spin-network. But it's built over a smooth spacetime with a very simple model. Since I think this is possible, I would think LQG is wrong. QM is based on linear algebra, observations and real space. Now think about a kind of space with geometric relations of a multiplicative kind, based on imaginary numbers. That's called geometric algebra and is related to quaternions.
What we call matter are timelike stable structures within that. A field is the connection over a timelike hypersheet. That looks static, because that sheet is defined this way.
An electron in this model is a full turn. Since it is anti-symmetric it requires two rounds for a return. Now left and right are different, too, and a structure emerges with two types of electrons.
This structure could be shifted, which makes electrons radiate, since they expose the aspect of rotation to a distant observer.
 
  • #21
thomheg said:
In my own model I use something that you could call a spin-network. But it's built over a smooth spacetime with a very simple model. Since I think this is possible, I would think LQG is wrong. QM is based on linear algebra, observations and real space. Now think about a kind of space with geometric relations of a multiplicative kind, based on imaginary numbers. That's called geometric algebra and is related to quaternions.
What we call matter are timelike stable structures within that. A field is the connection over a timelike hypersheet. That looks static, because that sheet is defined this way.
An electron in this model is a full turn. Since it is anti-symmetric it requires two rounds for a return. Now left and right are different, too, and a structure emerges with two types of electrons.
This structure could be shifted, which makes electrons radiate, since they expose the aspect of rotation to a distant observer.

I don't know anything about your model, but I wonder if your model is expected to predict, for example, particle masses from your choice of first principles?

I personally consider mass generation to be one major challenge: to understand how and why there is a preferred spectrum of frequently observed small structures (particles), and why they have their specific masses.

/Fredrik
 
  • #22
Fra said:
I don't know anything about your model, but I wonder if your model is expected to predict, for example, particle masses from your choice of first principles?
Look at this https://www.physicsforums.com/showthread.php?t=293626.
What do we call 'mass'? It is related to the tendency of matter to keep a certain direction of motion (inertia), and to gravity. Matter is what is more or less stable. Now think about a gyroscope. It keeps its orientation. Think about a rotation in a spacetime with three imaginary axes and time as a scalar. Then any object has this property of rotation that it tries to keep. A velocity is an angle in this model. Any such object tries to keep its velocity with respect to a distant observer.
A mass is related to the 'speed' of this rotation. The more massive, the more it stays with the observer and the less it wants to leave its orientation. Such a structure is a very narrow light-cone, which I call 'the mass term'. That is related to a very flat 'radiation term'.
For a massless particle, spacelike and timelike intervals are equal, and that generates a light-cone for empty space. Since it is light that defines our usual Euclidean space, that space is empty, too.
 
Last edited by a moderator:
  • #23
Fra said:
I started to read up on Rovelli's LQG thinking, and even though I am very impressed by his initial analysis and reflections, when he later moves on to formulating the new ideas, I don't think they follow the same reasoning I smelled in some of his more reflective papers.

...The most serious objection is that while he has interesting interpretations in the RQM paper, in the end it effectively ends up still being the same quantum formalism. I personally felt that his view should imply and suggest a modification of the formalism itself.

Fredrik -- Many thanks for the warm welcome!

I also have the sense that there are deep implications to Relational QM that Rovelli hasn’t tried to work out. But I like it that he doesn’t want to modify or extend the quantum formalism. I agree with him that the difficulty is to see what QM is saying about the world, because we’re still coming from a point of view where it makes sense to talk about the “state” or properties of a system without thinking about the context of physical interaction in which those things make a difference and can be observed.

For me, the limitation of Rovelli’s RQM is that he doesn’t try to analyze what’s involved in “physical systems giving descriptions of other physical systems.” He says, “Information is exchanged via physical interactions” – clearly true – but then he goes on, “The actual process through which information is collected and perhaps stored is not of particular interest here, but can be physically described in any specific instance.” So he stays at the level of abstraction of the QM formalism, and doesn’t dig down to the specifics of what it takes to make this measurement / communication business work.

The thing is, asking how measurement works in physics is like asking how reproduction works in biology. It happens in a lot of different ways, and none of them are simple. And even though we know that self–replicating systems must have evolved out of some very primitive original form, envisioning what that may have looked like is really hard. But, we don’t have to have a good picture of how life began in order to understand how it evolved. So maybe the situation could be similar in physics.

Obviously Darwin wasn’t the first to realize that biological organisms reproduce themselves. But he was the first to realize what that simple fact means. If something can make copies of itself, and the copies can make copies, you can fill up a planet with millions of species of extremely complex life-forms, just by accident. Even the first self-replicating systems of molecules must have been fairly complicated. But by your definition of simplicity as “low degree of speculation”, Darwinian evolution is amazingly simple.

In physics, we all know that things are observable and measurable – i.e. that the physical world communicates information about itself. The problem is, this is so obvious we take it for granted, and hardly think about what it means. Has anyone ever tried to invent a model where all the parameters are "measured" by other parameters of the model?

Anyhow, though I’m fascinated by models of the “basic logic” of physics, my understanding of QM and Relativity is pretty rudimentary... so I’m not in a position to grasp the pros and cons of quantum gravity theory. I’m just trying to understand what kind of structure could provide “measurement situations” that actually define and determine all its own structural elements. It seems as though that might be what the QM formalism is telling us – i.e. that there are no determinate elements of physical reality that are not actually being measured and communicated through the web of interaction-events. It seems as though a number of different basic structures might be needed, that provide measurement-contexts for each other.

Conrad
 
  • #24
ConradDJ said:
But I like it that he doesn’t want to modify or extend the quantum formalism.

Here we disagree on one point :) My personal opinion and view is that the extension of his reasoning (or rather my reasoning, which I assume is almost the same as his up to the point where I lose him) demands a modification of the QM formalism.

I think he takes GR too seriously. This requires him to compromise with the logic he previously held high.

I rather see the GR structure and actions as emergent, not fundamental. And what I find so paradoxical is that I think the extension of his own reasoning (with some twists) can produce it.

As I see it, in general, the various symmetries that define the observer-observer transformations we know from various theories, for example GR, are a result of evolving observers' negotiations. If we can understand the evolutionary mechanism which selects these symmetries, then I think we are close to home. And hopefully not only GR, but also the symmetries of the standard model should be explained fully analogously to that. No need to appeal to the "mathematical beauty" of certain symmetry groups; I think the groups are there for very special reasons, not by coincidence. It's this logic I'm after. I think we see hints of it.

ConradDJ said:
He says, “Information is exchanged via physical interactions” – clearly true – but then he goes on, “The actual process through which information is collected and perhaps stored is not of particular interest here

Your points of objection are very similar to mine. If you haven't studied physics, I'm curious what your source of intuition is. Biology? I have to say though that I do not know if he, with that statement, refers to the somewhat limited scope of the RQM paper – in which case he does have a point. But clearly in the general full QG case, the actual process whereby information is collected and stored is of key interest IMHO.

/Fredrik
 
  • #25
Fra said:
No need to appeal to the "mathematical beauty" of certain symmetry groups; I think the groups are there for very special reasons, not by coincidence. It's this logic I'm after. I think we see hints of it.

If you haven't studied physics, I'm curious what your source of intuition is. Biology? I have to say though that I do not know if he, with that statement, refers to the somewhat limited scope of the RQM paper – in which case he does have a point. But clearly in the general full QG case, the actual process whereby information is collected and stored is of key interest IMHO.

/Fredrik

My background is in philosophy, but I’ve been thinking about foundational issues in physics for longer than I care to admit. And I agree with you about the scope of the RQM paper. Rovelli was trying to stay as close as possible to the established framework, to get his main point across, and I think he did a fine job. But it seems that to go further and see where this orientation leads, we need to think about what’s involved in the functionality of physical measurement / communication.

Your post points to one reason why this hasn’t been much explored – it’s been taken for granted for centuries that physics has a mathematical basis, and that if we can even ask the question why that structure is whatever it is, the answer can only be mathematical. Back in 1933 Einstein said “Our experience hitherto justifies us in the belief that nature is the realization of the simplest conceivable mathematical ideas.” There’s an amazing irony in that, given how vastly complicated base-level physics has since become.

The alternative is to understand the universe as some sort of functional system – it is the way it is, because that’s what works. Clearly this makes sense to you, but of course it’s very different from the traditional orientation of physics. That’s what made Smolin’s book on the Life of the Cosmos so remarkable, that he was actually taking this thought seriously.

Since I can’t contribute at the level of technical discussions, I’ve tried to focus on the question about what kinds of functionality can conceivably evolve. Smolin still thinks in terms of self-replication. What got me excited about Rovelli’s paper – besides his basic premise about giving up on “the state of a system” as real in and of itself – was that in RQM, the measurement of S by O and the communication of the result from O to O’ are treated as the same process. To determine something (“collapse of the wave function”) is to communicate it, and vice-versa.

Just copying information almost doesn’t happen in the physical world – though we humans have figured out lots of ways to make it happen. But physical things don’t make copies of themselves, in general, which is why the origin of life is so hard to envision – and why Smolin’s idea about black holes seems so speculative. But this thing of measuring / communicating information occurs constantly, in every type of physical interaction.

We used to think of measurement as just copying data about something into a standard format, as when we measure the length of a stick with a ruler. And we still tend to think of communication as a matter of copying data from one person’s brain to another. But I think QM shows us that something much more complex and interesting is going on... because there’s always the question of how the information gets defined, how it gets to mean something specific, on both sides of the communication. Just to know whether or not the information got across, always takes further communication.

Anyway, I agree with you that the mathematical structures of QM and Relativity aren’t just “given” but reflect whatever’s going on at a deeper level.

Conrad
 
  • #26
Fredrik --

In answer to your original question... I came across this paper by Zurek on Quantum Darwinism -- relates to his work on decoherence. Maybe this overlaps with your ideas? He seems to be thinking about evolution through replicating information, rather than (as I was trying to suggest above) an evolution of communication between "measurement situations." Maybe it comes to the same thing after all.

http://arxiv.org/PS_cache/quant-ph/pdf/0308/0308163v1.pdf

He has more recent papers but this one seems to be most to the point. He says:

"This view of the emergence of the classical can be regarded as (a Darwinian) natural selection of the preferred states. Thus, (evolutionary) fitness of the state is defined both by its ability to survive intact in spite of the immersion in the environment (i.e., environment-induced superselection is still important) but also by its propensity to create offspring – copies of the information describing the state of the system in that environment."

Conrad
 
  • #27
ConradDJ said:
Fredrik --

In answer to your original question... I came across this paper by Zurek on Quantum Darwinism -- relates to his work on decoherence. Maybe this overlaps with your ideas? He seems to be thinking about evolution through replicating information, rather than (as I was trying to suggest above) an evolution of communication between "measurement situations." Maybe it comes to the same thing after all.

http://arxiv.org/PS_cache/quant-ph/pdf/0308/0308163v1.pdf

He has more recent papers but this one seems to be most to the point. He says:

"This view of the emergence of the classical can be regarded as (a Darwinian) natural selection of the preferred states. Thus, (evolutionary) fitness of the state is defined both by its ability to survive intact in spite of the immersion in the environment (i.e., environment-induced superselection is still important) but also by its propensity to create offspring – copies of the information describing the state of the system in that environment."

Conrad


It would seem to me that information about a physical system is secondary to the foundational physics. A distribution which contains the information can only be foundational if it is introduced in only one form for fundamental reasons. What is the most fundamental distribution and why would it be introduced?
 
  • #28
ConradDJ said:
Fredrik --

In answer to your original question... I came across this paper by Zurek on Quantum Darwinism -- relates to his work on decoherence. Maybe this overlaps with your ideas? He seems to be thinking about evolution through replicating information, rather than (as I was trying to suggest above) an evolution of communication between "measurement situations." Maybe it comes to the same thing after all.

http://arxiv.org/PS_cache/quant-ph/pdf/0308/0308163v1.pdf

He has more recent papers but this one seems to be most to the point. He says:

"This view of the emergence of the classical can be regarded as (a Darwinian) natural selection of the preferred states. Thus, (evolutionary) fitness of the state is defined both by its ability to survive intact in spite of the immersion in the environment (i.e., environment-induced superselection is still important) but also by its propensity to create offspring – copies of the information describing the state of the system in that environment."

Conrad

I've read some of Zurek's papers, and yes, there are some aspects of his reasoning that I share. One of my favourite Zurek quotes, and one of my overall favourites of all time, is

From his paper
"Decoherence, einselection, and the quantum origins of the classical"
"...What the observer knows is inseparable from what the observer is: The physical state of his memory implies his information about the Universe..."
-- http://arxiv.org/abs/quant-ph/0105127

The way I choose to interpret that, it is a very deep statement about the connection between the nature of information (whose exact notion is generally somewhat of an enigma) and the physical nature of reality. A kind of relation between ontology and epistemology.

However, in Zurek's analysis, which is in the context of decoherence, he still has an apparently different view of information than I do. Although there is no doubt that the environment exerts a selection on any system, the difference lies in the frog vs. bird view.

I am all in on this idea of environmental selection, but there are different ways to picture it. From the inside view, the environment, as in the remainder of the universe, is simply the unknown; thus it's not a structured, definable "thing", and it cannot be used to explain the emergence of stable subsystems. I think it's probably part of the explanation, but the biggest keys are still missing as I see it.

But I am not up to date on what his very latest papers treat.

/Fredrik
 
  • #29
friend said:
It would seem to me that information about a physical system is secondary to the foundational physics.

I personally disagree with that. I think they are strongly connected. But it's true that a satisfactory understanding of that connection is still missing. Still, I see enough bits and pieces to have confidence in the road forward in that direction.

friend said:
A distribution which contains the information can only be foundational if it is introduced in only one form for fundamental reasons. What is the most fundamental distribution and why would it be introduced?

One of the most fundamental starting points I have adopted is the concept of an observer-dependent notion of distinguishability. This I consider to be the basis of boolean states and the basis for counting.

Then, if we can find a mechanism for emergent memory (which I think is possible), we can construct integers (countable state spaces), and in the large-N case we have effective continuum models. To me the generation of the memory structure is key to understanding the generation of mass.

This can be implemented as a kind of combinatorial basis for information, rather than continuum probability.
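As a loose illustration of the counting picture (my own toy, and nothing more): an observer whose memory is N ticks of a boolean register can only represent relative frequencies of the form k/N. That is a countable state space whose resolution 1/N only approaches a continuum of probabilities as N grows:

```python
from fractions import Fraction

def representable(N):
    """All relative frequencies an observer with an N-tick boolean
    memory can distinguish: the countable set {0/N, 1/N, ..., N/N}."""
    return [Fraction(k, N) for k in range(N + 1)]

for N in (2, 8, 1024):
    vals = representable(N)
    # resolution 1/N: the spacing between adjacent representable values
    print(N, len(vals), float(vals[1] - vals[0]))
```

In this crude sense, "continuum probability" would be the effective large-N limit of a combinatorial memory, not something fundamental.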

As I see it, ANY structure can be used to encode information. But the structure itself is the manifestation of the "prior" to which the information relates. So we really have information about information.

This circular reasoning is why I think the evolutionary selection idea is the best. Any alternative implies relating information to a microstructure, and that is as arbitrary as finding a unique prior probability distribution in Bayesian reasoning. To me, the observer is the physical manifestation of the prior, and it is evolving, and is subject to selection.

So there is no fundamental information, just evolving information about information, and I think the world we see is, from the point of view of evolved observers, selected for its stability.

To me the deepest level of physical law would be to try to understand as large a part as possible of this logic of evolving information, manifested as interacting matter systems. To understand what matter is, and how it's built, would be the same as to understand what information is.

The usual definitions of information, such as Shannon entropy, are clearly not even in the ballpark of being sufficient here. They are very simple-minded as I see it. A true evolving measure of information cannot make use of a fixed background microstructure, because the microstructure is itself hidden prior information. The key we need to understand, IMHO at least, is how the difficulty of defining a universal definition or measure of information IMPLIES interactions and evolution. But then, this does not mean we are on the wrong track, because look around us: we DO see interactions.

So the trick is not, IMO, to find the ultimate entropy formula. No such formula could make sense. I think we instead need to see the measure of information as evolving. The question then is to understand how a measure (a subsystem; an observer; a particle) responds to feedback from its environment, and how not only its state of information is revised, but also its memory hardware.
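To make the point about Shannon entropy concrete (a toy of my own construction, reading "microstructure" in the crudest possible way as the chosen alphabet of distinguishable states): the very same raw record gets a different information measure depending on which state space the observer brings to it.

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy (in bits per symbol) of the empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

record = "0110100110010110" * 4   # one and the same raw record

# An observer whose microstructure distinguishes single characters...
h_chars = entropy_bits(list(record))
# ...versus one whose microstructure distinguishes pairs of characters.
h_pairs = entropy_bits([record[i:i + 2] for i in range(0, len(record), 2)])

print(h_chars, h_pairs)  # 1.0 bit per char, but only 1.0 bit per pair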

/Fredrik
 
  • #30
friend said:
It would seem to me that information about a physical system is secondary to the foundational physics.

In my picture, the information one system has about another system, and how that is described, is absolutely critical to the foundational physics of how those two systems interact. I think any system acts as per a logic that is implied by its information about its own environment. This information is both explicit, and the implicit prior information manifested by the memory hardware.

However, I personally don't think the standard entanglement considerations are a satisfactory description of this. The problem is that they make use of arbitrarily chosen structures. So those measures are not intrinsic.

Ideally the memory hardware should emerge from this evolutionary process, and if this is correct, then something interesting should pop out that matches what we have in the standard model.

/Fredrik
 
Last edited:
  • #31
Fra said:
In my picture, the information one system has about another system, and how that is described, is absolutely critical to the foundational physics of how those two systems interact. I think any system acts as per a logic that is implied by its information about its own environment. This information is both explicit, and the implicit prior information manifested by the memory hardware.

However, I personally don't think the standard entanglement considerations are a satisfactory description of this. The problem is that they make use of arbitrarily chosen structures. So those measures are not intrinsic.

Ideally the memory hardware should emerge from this evolutionary process, and if this is correct, then something interesting should pop out that matches what we have in the standard model.

/Fredrik

Can you hear yourself talk? You say "information about its own environment". When you say the word "about", you automatically imply that what it is "about" exists prior to calculating information about it. In this case you are talking about information about the memory hardware. This would make the memory hardware more fundamental than the information calculated from it.
 
  • #32
friend said:
Can you hear yourself talk? You say "information about its own environment". When you say the word "about", you automatically imply that what it is "about" exists prior to calculating information about it. In this case you are talking about information about the memory hardware. This would make the memory hardware more fundamental than the information calculated from it.

It's not easy to word this. My point is instead that the memory hardware and the information its microstructure encodes evolve together. One without the other doesn't make sense.

In a sense, though, I think it's fair to say that the memory hardware is more fundamental, but the memory hardware is, in my picture, only a condensed form of information that can evaporate. So it is not really fundamental.

The difficulty is to understand how the background (the memory structure) evolves along with the state living on the background.

The main distinguishing factor between the two phases of information – the microstate of a microstructure, and the microstructure itself – is that the inertia of the microstructure is much higher; that's why the background evolves much more slowly. In a way I think of the microstructure (i.e. the memory hardware) as a very "massive microstate".

But it's true that whatever I say is still only relative to me and my personal reasoning. This is something we can never get around, but consensus in science still emerges; it has done so in the past and I'm sure it will keep doing so. After all, my reasoning is also continuously formed by interacting with my environment. So the flaws in my wordings (in the above sense) really don't contradict what I'm trying to convey. There are no foolproof arguments, and life is a game. I think the same applies to fundamental physics. But there seems to be a logic to why this wild game does self-organize, and stable structures appear.

/Fredrik
 
  • #33
friend said:
When you say the word "about", you automatically imply that what it is "about" exists prior to calculating information about it.

But to find out what this "about" is, and to "calculate" information about it, are in my picture one and the same process. There are no fixed points. Only the evolving and related structures can be described, by similarly evolving descriptions.

But there is ALWAYS an uncertainty in the statement itself, and in the *measure* of information. It is not possible to make deductive calculations of information. All information is, IMO, to be seen as speculation, as in a game, and the game itself creates a selection among the players; this is how the memory structure is formed. The individual player doesn't know WHY their hardware is what it is; it is, however, their reference from which to play on.

I think the uncertainty in this, which you probably identify, can be formalized and turned into an evolution, where the drive is to minimize the uncertainty, and thus increase the stability of the observer. The difficulty in reaching a consensus in this discussion is the flip side of the difficulty of finding universal static observers: they also evolve.

I picture that there are some degrees of freedom of "distinguishability-bits" from which each picture is constructed. But unfortunately I don't think it makes sense to think of the degrees of freedom as fundamental; I think of them as observer dependent. And the symmetry relating the different observers is probably bound to be 1) emergent, not fundamental; and 2) generally non-information-preserving, with the transformations to be seen not as mathematical transformations, but rather as physical processes involving time and subject to selection.

/Fredrik
 
  • #34
friend said:
It would seem to me that information about a physical system is secondary to the foundational physics.

In physics as in ordinary language, we generally assume that if information is meaningful, it's because it accurately represents some given reality. So reality is always more fundamental.

But QM leads a lot of us to question whether this continues to be reasonable for the foundations of physics, because it's easier to understand QM as describing a structure of information than a structure that's real and well-defined "in itself."

To justify this -- well, none of us ever experiences reality "in itself". That doesn't mean there is no reality, but it probably does mean that information can be meaningful without reference to reality... i.e. by referring to other meaningful information (that refers to other information...). Because this is the fundamental nature of the world we actually experience.

That point of view could only make sense in physics, though, if we could show how and why a structure of inter-referential information might evolve to resemble a well-defined reality -- a tall order.

friend said:
A distribution which contains the information can only be foundational if it is introduced in only one form for fundamental reasons. What is the most fundamental distribution and why would it be introduced?

Again, a fundamental principle in physics is that whatever's foundational needs to be "in only one form" and hopefully fairly simple. I think Fra would agree, but I'm not sure I do, because I'm not sure any information can be meaningfully defined unless there are different kinds of information out there in the environment.

Conrad
 
  • #35
Fra said:
So there is no fundamental information, just evolving information on information, and I think the world we see is, from the point of view of evolved observers, selected for its stability.

To me the deepest level of physical law would be to try to understand as large a part as possible of this logic of evolving information, manifested as interacting matter systems. To understand what matter is, and how it's built, would be the same as to understand what information is.

The usual definitions of information, such as Shannon entropy, are clearly not even in the ballpark of being sufficient here.

Well, I've already made clear I think you're on the right track, for whatever that's worth. But "selected for stability" doesn't sound quite right, maybe because to me that seems to imply a background time structure (even if not a time-metric). In biology, the evolutionary game is all about how to get complex systems to last through time (by copying them before they inevitably break down due to their complexity). But in physics, the problem seems to be different, namely how to define any information in a meaningful way in the first place -- how to get any information to make a difference to anything. Time maybe comes about only in this process.

The Shannon theory is of course important to physics because it's merely quantitative -- it abstracts from all questions about what information "means" (how it affects things, how it's measured), and that's good if physics doesn't yet have ways of dealing with those questions. But I agree with you entirely that we need an information theory that explains why and how information does what it does -- i.e. gets defined / determined and in turn provides a context that defines / determines other information.

On the other hand, the connection between stability of information and mass/inertia is intriguing...

Conrad
 
  • #36
ConradDJ said:
Again, a fundamental principle in physics is that whatever's foundational needs to be "in only one form" and hopefully fairly simple. I think Fra would agree, but I'm not sure I do, because I'm not sure any information can be meaningfully defined unless there are different kinds of information out there in the environment.

Conrad

Seriously, information can only be calculated about some other structure. It never exists in a vacuum, but it can co-exist with that structure. It is not fundamental, in that the underlying structure must exist first before any information can be calculated about it.

I'm thinking purely in terms of mathematics. How does one calculate information from nothing? But then again, maybe it IS possible to "derive" structure from its information content. That sounds like a harder thing to do.

For example, it is very easy to show how the Path Integral from QM can be derived solely from the Dirac Delta function. See:

http://hook.sirus.com/users/mjake/delta_physics.htm

But the Dirac Delta function is itself a distribution about which information can be calculated. Now, if it should turn out that all of physics can be derived from a Dirac Delta function, and that the information of the Delta function is constant, for example, then maybe we can derive a law of conservation of information, and then the laws of physics from that. Who knows?
 
Last edited by a moderator:
  • #37
ConradDJ said:
Well, I've already made clear I think you're on the right track, for whatever that's worth.
Yes you do seem to connect to the main reasoning and I appreciate your feedback.
ConradDJ said:
But "selected for stability" doesn't sound quite right, maybe because to me that seems to imply a background time structure (even if not a time-metric). In biology, the evolutionary game is all about how to get complex systems to last through time (by copying them before they inevitably break down due to their complexity). But in physics, the problem seems to be different, namely how to define any information in a meaningful way in the first place -- how to get any information to make a difference to anything. Time maybe comes about only in this process.
This is hard to write about, but I tried to describe that different observers might see different degrees of freedom, and that this conflict, when the observers interact, evolves their memory structures. Here I associate physical interaction with negotiation. The negotiation imposes a mutual selective pressure to find an agreement. In that sense consensus is emergent.
ConradDJ said:
The Shannon theory is of course important to physics because it's merely quantitative -- it abstracts from all questions about what information "means" (how it affects things, how it's measured), and that's good if physics doesn't yet have ways of dealing with those questions. But I agree with you entirely that we need an information theory that explains why and how information does what it does -- i.e. gets defined / determined and in turn provides a context that defines / determines other information.
One objection to Shannon's entropy is that it is often treated as a universal measure of information, but it is in fact relative to the choice of an equiprobable microstructure. I argue that this CHOICE must not be treated as a theorist's armchair maneuver; I think there is physics behind the choice, and this choice is evolving, because the microstructure is a dynamical entity formed by interactions.
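To make the relativity of the Shannon measure concrete, here is a small illustration of my own (not from the thread): the same macroscopic fact yields a different entropy depending on which set of equiprobable microstates one assumes underlies it.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same macroscopic fact -- "the system is in the left half" -- described
# against two different assumed equiprobable microstructures.
probs_a = [0.5, 0.5]     # microstructure A: 2 compatible cells, equiprobable
probs_b = [0.25] * 4     # microstructure B: 4 compatible cells, equiprobable

print(shannon_entropy(probs_a))  # 1.0 bit of missing information
print(shannon_entropy(probs_b))  # 2.0 bits of missing information
```

The number of "bits of missing information" is not a property of the physical situation alone; it depends on the chosen microstructure, which is exactly the choice Fredrik argues carries physics.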
ConradDJ said:
On the other hand, the connection between stability of information and mass/inertia is intriguing...

In information processing, a decent analogy is inertia ~ confidence. Inertia is resistance against change, and in an information update, clearly the reason to update a previous state of information which is in contradiction with new information depends directly on the confidence in the contradictory piece and the confidence in the prior. This ensures stability and protects against overfitting.

This inertia also ensures that variation is small. But it also implies that the variation of microstructures with low inertia will be more violent. At some point the fluctuations are so large that the microstructure cannot be distinguished from random fluctuations.
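The inertia ~ confidence analogy can be sketched with a toy confidence-weighted update (my own illustration, with arbitrary numbers; nothing here is a formalism from the thread): treating the prior as a number of pseudo-observations, a high-confidence prior barely moves when hit with the same contradictory datum that overturns a low-confidence one.

```python
def updated_mean(prior_mean, prior_weight, observation):
    """Confidence-weighted average: the prior counts as `prior_weight`
    pseudo-observations, so a high weight (inertia) resists the new datum."""
    return (prior_mean * prior_weight + observation) / (prior_weight + 1)

# Same prior belief (0.9) and same contradictory observation (0.0),
# but different confidence in the prior.
light = updated_mean(0.9, 2, 0.0)     # low inertia: belief drops to 0.6
heavy = updated_mean(0.9, 200, 0.0)   # high inertia: belief stays near 0.9

print(light, heavy)
```

The low-inertia belief is revised violently; the high-inertia one is stable, matching the point above that low-inertia microstructures fluctuate more violently.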

One can also picture that two communicating inertial systems will converge in information space, and the idea is also (as Ariel Caticha thinks) that spacetime structure, and the distance metrics, can be identified with emergent information measures in some kind of information-geometric way.

So spacetime might possibly come about as emergent degrees of freedom resulting from communicating observers, and the measures of spacetime should then hopefully relate back to probabilistic notions (which I do not take from standard probability theory; I rather think of information theory in terms of combinatorics of distinguishable states, and this leads to a quantized probability itself. Ie. probability doesn't take a continuum of values from 0 to 1; it's all about combinatorics, and the continuum probability would only be recovered in the large-number limit, but I think interesting physics happens when the large-N approximation is invalid). Ariel Caticha has hoped to derive GR from a MaxEnt principle where he chooses a particular entropy. See http://arxiv.org/PS_cache/gr-qc/pdf/0508/0508108v2.pdf I don't share his reasoning all the way, but he has many good insights worth reading.
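The "quantized probability" point can be illustrated with elementary counting (my illustration, not Caticha's): with only n distinguishable observations, a relative frequency can take just n+1 discrete values, and the familiar continuum of probabilities only appears as the resolution 1/n shrinks in the large-n limit.

```python
from fractions import Fraction

def possible_probabilities(n):
    """With only n distinguishable trials, a relative frequency can take
    just the n+1 discrete values k/n -- probability comes quantized."""
    return [Fraction(k, n) for k in range(n + 1)]

# Small n gives a coarse probability "spectrum"...
print([str(f) for f in possible_probabilities(4)])  # ['0', '1/4', '1/2', '3/4', '1']

# ...and the gap between adjacent allowed values is 1/n, so the continuum
# of probabilities is only recovered in the large-n limit.
for n in (4, 100, 10**6):
    print(n, Fraction(1, n))
```

On this view, "interesting physics when large N is invalid" corresponds to the regime where the gaps between allowed values are not negligible.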

One technical advantage of his approach is that it comes with a natural "cutoff" to prevent infinities. The cutoff is simply implicit in the discreteness of the combinatorics. And the complexity of the structures will probably take the role of mass. The mass of the microstructure is then simply the confidence in the microstructure.

But this cutoff is not universal; it would be a function (among other things) of the observer's own mass. Thus a massive observer sees a different discretization spectrum of the "continuum" than a light observer does. As I see it, this quest, in the way I picture the solution, really implies a full reconstruction of the mathematical continuum. The flat introduction of the continuum in physics is a big leap. There is no doubt a physics behind the continuum too. And what you miss out on when you flatly introduce continuum models is that you lose track of the physical degrees of freedom and get lost in the mathematical redundancy. Also, situations such as infinity - infinity or 0*infinity are highly unphysical, yet they tend to show up all over the place. This is pathological IMHO.

But this is tricky, and I get regular headaches from it. I am trying my best to find out what others are doing, and I have some favourites, but there still seems to be a long way to go. I have at least managed to find the confidence to commit myself to the quest.

/Fredrik
 
  • #38
friend said:
Seriously, information can only be calculated about some other structure. It never exists in a vacuum, but it can co-exist with that structure. It is not fundamental, in that the underlying structure must exist first before any information can be calculated about it.

I certainly see your point. If we assume that the world consists of some well-defined factual structure, then there seem to be various ways we can pull out and represent ("calculate"?) aspects of that structure as information about the world.

But if well-defined factual structure is something that has to evolve, physically, we may need another way of understanding information. The point I was trying to make above, not very clearly, is that in our ordinary experience we are constantly making sense of the world and gaining information about it, by comparing different kinds of perceptual information in the context of other information stored in memory. We never perceive anything but perceptual information. Of course it's reasonable to believe this information more or less accurately represents the structure of a well-defined reality out there. But an equally valid inference, I think, is that information can be meaningfully defined purely in and through other information.

There is a partial analogy to human language, in that words are defined by other words. But we can also make words meaningful by pointing to things in the physical world: look, that's what we call a "tree."

I think the deep difficulty in fundamental physics might be that physics is a kind of "language" that has no referent outside itself. Maybe no aspect of its structure just "exists" as a well-defined starting-point for everything else -- maybe everything has to make a definable (measurable) difference to something else (that makes a difference to something else, and so on).

So yes, it doesn't make sense to "calculate information from nothing." But the evolutionary idea is that maybe we don't need a well-defined factual reality to start with. If we had a better understanding of how information is actually measured and communicated in the physical world, we might be able to see how that kind of system could have evolved out of mere quantum randomness.
 
  • #39
ConradDJ said:
So yes, it doesn't make sense to "calculate information from nothing." But the evolutionary idea is that maybe we don't need a well-defined factual reality to start with. If we had a better understanding of how information is actually measured and communicated in the physical world, we might be able to see how that kind of system could have evolved out of mere quantum randomness.

I think one problem is that by "calculate" we usually mean a deterministic process that, as a kind of deduction, produces a result given input.

PREMISE=STATE1 --/given deterministic rule of reasoning/--> STATE 2

So can we deduce something, as per some deductive rules of reasoning, given NO premises? Of course not.

But if we instead look at a kind of inductive, evolving reasoning, then we can make a guess, a random guess if you like, just to break the symmetry so to speak, and then act upon the feedback. Once things are in motion, both the states and the reasoning itself keep evolving.

The obvious problem with induction and guessing is that it seems unscientific at first sight. How do you know if a guess is right or wrong, if there are no clear rules of reasoning and no clear premises?

I think the answer is to put this in a larger context. Good reasoning is reasoning that is "successful"; bad reasoning is not constructive, and in fact self-destructive. In an evolutionary perspective, successful laws are the ones that are consistent with their own environment.

Look at how people make money on the stock market. How can you make money without money? How can you make billions if you have nothing to bet with? Look at how the stock market works: if you have no money, convince your environment that you have good ideas, borrow money, take well-calculated chances, and if your betting strategy is among the better ones, you will make money. Repay your loans, and you have created money from nothing.

Here is an admittedly silly and oversimplified schematic:

0: UNCERTAIN STATE --/UNCERTAIN REASONING/--> ACT
1: FEEDBACK FROM ENVIRONMENT --/UNCERTAIN REASONING/--> REVISE STATE & REASONING
2: GOTO 0
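The schematic above can be sketched as a toy simulation (entirely my own illustration; the update rules and numbers are arbitrary): a player starts from a random guess, acts, receives noisy feedback, and revises both its state and its confidence in its own reasoning.

```python
import random

random.seed(0)

environment = 0.7          # hidden "truth" the player never sees directly
state = random.random()    # initial random guess -- breaks the symmetry
confidence = 0.1           # confidence in the current reasoning

for step in range(30):
    # 0: act on the current (uncertain) state
    action = state
    # 1: noisy feedback from the environment on that action
    feedback = (environment - action) + random.gauss(0, 0.05)
    # revise both the state and the reasoning: low confidence -> big revisions
    state = state + (1 - confidence) * feedback
    confidence = min(0.9, confidence + 0.03)
    # 2: goto 0 (the loop repeats)

print(round(state, 1))  # the guess has converged near the environment's 0.7
```

Nothing in the starting guess resembles the final state, which is the point: once the ball is rolling, the feedback loop does the work.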

How to separate the state from the reasoning is something I have some ideas on, but that's not relevant to the main point, I think. Somehow, the nature of the uncertainty SUGGESTS the reasoning (ie. the way to proceed), pretty much for statistical reasons: you expect the entropy to increase. But it's more involved, and would involve a reconstruction of probability theory and the continuum.

If you see the analogy, it's not totally unlike how I picture physics. All you need is a fluctuation, if you like, and then the ball is rolling, and nothing will stop it from evolving into things that look nothing like the fluctuation it started as.

/Fredrik
 
Last edited:
  • #40
ConradDJ said:
But if well-defined factual structure is something that has to evolve, physically, we may need another way of understanding information.
...
But the evolutionary idea is that maybe we don't need a well-defined factual reality to start with.
Your use of the word evolve seems to imply a direction of time. Time may be an emergent, classical thing; time might not exist as we know it at the quantum level.


ConradDJ said:
I think, is that information can be meaningfully defined purely in and through other information.
I'd have to see an example of what you are talking about.
 
  • #41
friend said:
Your use of the word evolve seems to imply a direction of time. Time may be an emergent, classical thing; time might not exist as we know it at the quantum level.

... I'd have to see an example of what you are talking about.

Or, time as we know it might be an evolutionary process. When life first began, it probably wouldn't have been immediately recognizable as life. When some sort of network of connections first got going in physics, one that could evolve in some sense, there might not have been a direction of time definable outside that network.

I've tried to start a thread in the Philosophy section that tries to get at this from another angle -- called "Are Space and Time Definable without Atoms?" I think it's pertinent to the subject of this thread, i.e. can we imagine an evolutionary scenario that's not based on black-hole reproduction.

Besides your very reasonable request for examples, there's a lot in this thread that I want to think about further, as I get some time.

Conrad
 
  • #42
On "information defined in terms of other information" --

You can think of it in terms of equations -- velocity is defined as distance divided by time. To measure a velocity you need to be able to measure a distance and a time.

Or, you can define a velocity as a momentum divided by a mass, and measure a velocity that way (observe the impact of a body of known mass on another body).
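The two routes to a velocity can be written side by side; in a consistent web of definitions the independent routes agree (the numbers below are made up purely for illustration):

```python
# Two independent routes to the same velocity, via different defining
# relations; the numbers are hypothetical.
distance, time = 12.0, 4.0     # metres, seconds
momentum, mass = 7.5, 2.5      # kg*m/s, kg

v_kinematic = distance / time   # v = d / t
v_dynamic = momentum / mass     # v = p / m

print(v_kinematic, v_dynamic)   # 3.0 3.0 -- the two definitions agree
```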

Mathematically, physics is a semantic web of parameters that define each other through equations. Each parameter (like velocity) can be defined in several different ways. Some parameters seem more "primitive" than others -- like distance and time and mass -- because we have rulers and clocks and scales that seem to measure these things "directly". So in classical mechanics it seemed to make sense to consider parameters like velocity, energy and action as in some way secondary, derivative.

On the other hand, Relativity suggests that the velocity of light plays a fundamental role, while space-intervals and time-intervals lose their absolute character. And QM gives us a fundamental constant of action. It remains unclear where mass comes from. So it's no longer obvious which parameters should be considered the fundamental ones, from which the others are derived. What is clear, I think, is that any parameter that could not be defined in terms of the others, in this inter-referential system, would simply have no significance in physics.

Every mathematical definition of a parameter points to a potential way of measuring that parameter. The "semantic" structure of the web of parameters tells us something about the physical structure of the web of interactions -- many different kinds of interactions -- that can actually determine the values of these parameters in a specific case.

We tend to take for granted this inter-referential character of the structure of physics. Of course, every parameter has to be definable in terms of others, to have any meaning. Of course, for every real, physical characteristic of a thing, there has to be some physical context in which it can be observed. A physical characteristic that could not be observed in any way, could not be defined in any meaningful way.

But it's easy to make up examples of systems that don't have any way of defining their own properties. Take a Minimal Newtonian Cosmos, consisting of pointlike mass-particles interacting in Euclidean space and time, only via Newtonian gravity -- nothing else. This seems like a mathematically well-defined system, but clearly it provides nothing that could actually measure a distance or a time-interval or a mass.

So maybe there's something very special about a world like ours, that does provide the physical means for observing and defining every one of its characteristics in terms of other observable characteristics... a world that not only is what it is, but also communicates everything about itself, to itself.

It seems to me that the role of "the observer" in Relativity and especially in QM points us in this direction. QM in particular seems to be telling us that there are no facts about the world except communicated facts. So then, understanding the "semantic structure" of the interaction-web -- how different kinds of facts are defined and communicated in terms of other kinds of facts -- might turn out to be important.
 
  • #43
ConradDJ said:
Or, time as we know it might be an evolutionary process.

I agree with this. In my use of the word evolve, I make no fundamental distinction between time evolution as we know it and a possible evolution of law. It's just a matter of scales.

The arrow of time is only a sort of preferred direction of change. Why there is a preferred direction of change, and how this preferred direction is defined by each observer, is part of the evolving idea.

What we usually think of as, say, Hamiltonian time evolution can still be thought of as part of a larger evolving process, where the Hamiltonian is also evolving; it's just that we have imagined a part of the structure as fixed law, in order to define a parameterizing time according to it. I don't think there is an objective and unique choice in such a separation.

I see subjective time as an internal parameterization of the expected changes, consistent with the subjective information. This would locally define a direction of time.

The relation between different observers, which we usually think of as a symmetry (ie. the idea that reality is indifferent to the choice of observer, and thus reality is represented not by the individual observer's view, but by the symmetries relating the set of all observers), is IMO emergent only. The idea of these symmetries is a typical realist argument IMHO.

My objection is that symmetries are not primary observables. The symmetries are only inferred, by risky arguments, from observations, and then used as a basis for further reasoning - this in itself is sound and good. What is not conceptually sound, IMHO at least, is this "realist" view of what a symmetry really is.

ConradDJ said:
It seems to me that the role of "the observer" in Relativity and especially in QM points us in this direction. QM in particular seems to be telling us that there are no facts about the world except communicated facts. So then, understanding the "semantic structure" of the interaction-web -- how different kinds of facts are defined and communicated in terms of other kinds of facts -- might turn out to be important.

I agree with the sentiment here too. This is also Rovelli's sentiment, in his Relational QM. He even says that there are only relations, and furthermore only relational relations. However, he does not go all the way. At some point he simply throws in a wet sock and says, about "being communicated", that communication is a physical interaction that is described by quantum mechanics (ie. the same QM; why?). In this neighbourhood of his reasoning he also notes the wish not to discuss the meaning of probability.

This is exactly (IMHO) where the wet sock is stuck. I would like to revise his reasoning from this point and then reconstruct the rest. This is why I couldn't appreciate the logical foundation of LQG. It really is not radical enough.

/Fredrik
 
  • #44
http://arxiv.org/abs/0906.2700

Anthropomorphic Quantum Darwinism as an explanation for Classicality
Authors: Thomas Durt
(Submitted on 15 Jun 2009)
Abstract: According to the so-called ``Quantum Darwinist'' approach, the emergence of ``classical islands'' from a quantum background is assumed to obey a (selection) principle of maximal information. We illustrate this idea by considering the coupling of two particles that interact through a position-dependent potential. This approach sheds a new light on the emergence of classical logics and of our classical preconceptions about the world. The distinction between internal and external world, the Cartesian prejudice according to which the whole can be reduced to the sum of its parts and the appearance of preferred representation bases such as the position is seen here as the result of a very long evolution and would correspond to the most useful way of extracting stable and useful information from the quantum correlations.
 
  • #45
Ah, now I get where you were going with this.

The future handshaking with the past gives a present; each observer sees these handshakes from their own perspective, through another series of handshakes, and thus each observer claims a unique set of handshakes to be simultaneous. Their Now is not the same as any other observer's Now.

I like to zoom all the way out and consider the state of a Universe as a whole with these processes going on inside of it.

For some reason I get this image of a sphere, which I know to represent a 4-dimensional object, filled with static. This static is composed of 3-dimensional Nows as described by each observer, at each location, at each point in time. Tracing a line between adjacent bits of static can provide a causal relationship divided into spacelike, timelike, and lightlike neighbors.

Each bit of static represents a snapshot of the Universe, and the reason I feel compelled to toss in the word static is that they're jostling each other, constantly blurring and tweaking the snapshots described by their neighbors.

I don't, however, see a reason why this would have to happen according to the causal rules which we follow; we are composed of certain types of structures, and those happen to "face downstream". If an event we observed and moved beyond were changed, we would not know of it.

The interactions between the now static bits are intimately related to the speed of light.

Any changes induced by the jostling from neighbor bits would be passed along at or below that rate. The act of observation itself would be the same type of effect, and propagate accordingly.

An idea I've been developing, which ties into these concepts, relates the extent to which observers can influence/have their now bits influenced to their mass. I digress on that though, as this is not my thread and not the topic of discussion.
 
  • #46
John86 said:
http://arxiv.org/abs/0906.2700

Anthropomorphic Quantum Darwinism as an explanation for Classicality
Authors: Thomas Durt
(Submitted on 15 Jun 2009)
Abstract: According to the so-called ``Quantum Darwinist'' approach, the emergence of ``classical islands'' from a quantum background is assumed to obey a (selection) principle of maximal information. We illustrate this idea by considering the coupling of two particles that interact through a position-dependent potential. This approach sheds a new light on the emergence of classical logics and of our classical preconceptions about the world. The distinction between internal and external world, the Cartesian prejudice according to which the whole can be reduced to the sum of its parts and the appearance of preferred representation bases such as the position is seen here as the result of a very long evolution and would correspond to the most useful way of extracting stable and useful information from the quantum correlations.

Thanks for the link John! I'll skim it tomorrow.

While I like some of Zurek's notes, I don't share the overall decoherence view, because IMHO it is not a true intrinsic picture; it's rather an external reconstruction of fictive intrinsic pictures. But I'll skim that and see if they have a different view.

/Fredrik
 
  • #47
Max™ said:
Ah, now I get where you were going with this.

Not sure if this was directed to me or some other poster in the thread?

Max™ said:
The future handshaking with the past gives a present, each observer sees these handshakes from their own perspective, through another series of handshakes, and thus each observer claims a unique set of handshakes to be simultaneous. Their Now is not the same as any other observers Now.

I like to zoom all the way out and consider the state of a Universe as a whole with these processes going on inside of it.

As I see it, one of the points is that the notion of a timeless or observer-independent "state of the universe", or even state space of the universe, is flawed as a scientific abstraction, and its use is a source of problems. This is also partly Smolin's view as I read him.

He calls the various timeless laws and state spaces of universes fictions, and I think he is right.

The closest thing to the zoomed-out picture you mention is still constrained by a subsystem of the universe - the state space of the entire universe will not fit into this subsystem.

So the result is that your zoomed-out view of the universe as a "whole" is nevertheless subjective. And each observer has their own version. However, I think this is also exactly why we have interactions between observers.

I think your idea works much better if the observer is very large relative to the system in question, for example in particle physics. But even here there are problems, if you try to understand WHY particles have a particular action. This again may force us to emphasise that the actions of a particle might be simple only when seen from the proper inside view.

Your "universe at large" is to me an external view, and in particular a non-physical view that never occurs in nature. Ie. there is no physical system in nature which can encode this view - which is also why it is impossible to understand how such a universal action is encoded.

/Fredrik
 
  • #48
Fra said:
Thanks for the link John! I'll skim it tomorrow.

While I like some of Zurek's notes, I don't share the overall decoherence view, because IMHO it is not a true intrinsic picture; it's rather an external reconstruction of fictive intrinsic pictures. But I'll skim that and see if they have a different view.

I skimmed part of the paper, and there are traits of their ambition that I do like; however, for my taste it's not radical enough. They admit that they don't solve all problems, but there are other problems that need to be solved as well, and I think that when an attempt is made to solve them in the way I picture, the basis for their arguments will be removed. Thus I am not so motivated to study partial ideas based on premises I don't think will survive anyway.

Indeed, I also think in terms of evolutionary processes, BUT IMHO it's inadequate to see this as the evolution of a reduced basis in a grand master state space. In fact, that is exactly what I object to. I claim that even this "master state space" is evolving, and there exists no inert context in which this can be thought of as simply moving around in a space. Their reasoning contains too much bird's-eye view for my taste.

I think one needs to be more radical.

But I agree that the structure, and processes that are seen in nature, must be seen as a result of some kind of evolution with a selection. The question is what the quantitative formalism for this should be.

The complication I take seriously is that this abstraction itself is subject to the same evolutionary perspective. Ie. I don't think it makes conceptual sense to think that our own science can be exempted from these same constraints.

As I see it, the challenge is to find the coherent reasoning which treats physical law on the same footing as the action of a physical system. The microstructure of matter, containing its actions upon the environment, must be a result of the same process as the evolution of physical law, since physical law is really the expectation of the future given the present, which in turn implies an action, given a sort of "rational player" assumption.

This coherence in reasoning is missing, as I see it, in that paper, which is why I think it's not radical enough.

/Fredrik
 
  • #49
Where is the assumption that the overall state space configuration is unchanging coming from?

I didn't intend that myself, I'd think it would be a strange result to have a state where the components were constantly undergoing changes, but the overall state remained static.

I didn't mean static as in unchanging, I meant like on your TV when you swap to an unused channel. *kksssshhhhhhhhhhhh*
 
  • #50
Max™ said:
Where is the assumption that the overall state space configuration is unchanging coming from?

I admit I found your first post a little confusing in the first place. I made a closest fit assumption to what you meant :)

I interpreted it that way since you talk about the "state of the universe as a whole". This is a typical phrasing (IMO) for someone picturing fixed state spaces.

Also maybe there is a double confusion, you say
Max™ said:
Where is the assumption that the overall state space configuration is unchanging coming from?

I was talking about the configuration state space, not the state space configuration. I'm not sure what you mean. I'm not suggesting that you say the universe is static, but I THOUGHT you said that the universe is changing, and that you are describing this as an evolution in a pictured state space, or multiverse if you want (the effect is the same)??

If this is wrong then just forget my comments. I probably got your first post all wrong then :) sorry.

/Fredrik
 