ConradDJ said:
Well, I've already made clear I think you're on the right track, for whatever that's worth.
Yes, you do seem to connect with the main line of reasoning, and I appreciate your feedback.
ConradDJ said:
But "selected for stability" doesn't sound quite right, maybe because to me that seems to imply a background time structure (even if not a time-metric). In biology, the evolutionary game is all about how to get complex systems to last through time (by copying them before they inevitably break down due to their complexity). But in physics, the problem seems to be different, namely how to define any information in a meaningful way in the first place -- how to get any information to make a difference to anything. Time maybe comes about only in this process.
This is hard to write about, but I have tried to describe how different observers might see different degrees of freedom, and how the conflict that arises when those observers interact evolves their memory structures. Here I associate physical interaction with negotiation: the negotiation imposes a mutual selective pressure to find an agreement. In that sense, consensus is emergent.
ConradDJ said:
The Shannon theory is of course important to physics because it's merely quantitative -- it abstracts from all questions about what information "means" (how it affects things, how it's measured), and that's good if physics doesn't yet have ways of dealing with those questions. But I agree with you entirely that we need an information theory that explains why and how information does what it does -- i.e. gets defined / determined and in turn provides a context that defines / determines other information.
One objection to Shannon's entropy is that it is often treated as a universal measure of information, when it is in fact relative to the choice of an equiprobable microstructure. I argue that this CHOICE must not be treated as a theorist's armchair maneuver; I think there is physics behind the choice, and that the choice is evolving, because the microstructure is a dynamical entity formed by interactions.
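To make the relativity of the entropy concrete, here is a minimal Python sketch (my own illustration, not part of any formal argument): the very same sequence of events gets a different Shannon entropy depending on which microstructure the observer resolves.

```python
from math import log2
from collections import Counter

def shannon_entropy(outcomes):
    """Shannon entropy (in bits) of the empirical distribution over outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum(c / n * log2(c / n) for c in counts.values())

# One and the same sequence of die rolls...
rolls = [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

# ...under two different choices of microstructure:
per_face = rolls                                      # six distinguishable states
parity = ["odd" if r % 2 else "even" for r in rolls]  # only two states resolved

print(shannon_entropy(per_face))  # log2(6) ~ 2.585 bits
print(shannon_entropy(parity))    # 1.0 bit -- same events, different entropy
```

The number that comes out is a property of the (data, microstructure) pair, never of the data alone, which is the point above.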
ConradDJ said:
On the other hand, the connection between stability of information and mass/inertia is intriguing...
In information processing, a decent analogy is inertia ~ confidence. Inertia is resistance to change, and in an information update, how much a prior state of information should yield to contradictory new information depends directly on the confidence in the contradictory piece relative to the confidence in the prior. This ensures stability and guards against overfitting.
This inertia also keeps variation small. But it implies that the variation of microstructures with low inertia will be more violent; at some point the fluctuations become so large that the microstructure cannot be distinguished from random fluctuations.
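As a toy illustration of the inertia ~ confidence analogy (my own sketch with made-up names, not a worked-out formalism): an estimate that has accumulated more evidence moves less when hit by a contradictory observation.

```python
class InertialEstimate:
    """Toy 'inertia ~ confidence' model (illustrative, hypothetical names):
    the accumulated weight plays the role of inertia, so a confident
    estimate resists contradictory observations."""

    def __init__(self, value, weight=1.0):
        self.value = value    # current state of information
        self.weight = weight  # accumulated evidence ~ confidence

    def update(self, observation, obs_weight=1.0):
        # Confidence-weighted average: the prior and the new observation
        # pull on the estimate in proportion to their weights.
        total = self.weight + obs_weight
        self.value = (self.weight * self.value + obs_weight * observation) / total
        self.weight = total

light = InertialEstimate(0.0, weight=1.0)
heavy = InertialEstimate(0.0, weight=100.0)
light.update(1.0)   # contradictory observation
heavy.update(1.0)
print(light.value)  # 0.5     -- low inertia: large, "violent" jump
print(heavy.value)  # ~0.0099 -- high inertia: barely moves
```

The low-weight estimate is exactly the violently fluctuating microstructure described above: with little accumulated evidence, each new observation throws it around.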
One can also picture that two communicating inertial systems will converge in information space, and the idea is also (as Ariel Caticha thinks) that spacetime structure, and its distance metrics, can be identified with emergent information measures in some information-geometric way.
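A toy picture of that convergence (again my own sketch, assuming a simple weighted-averaging dynamics, not anyone's actual model): two observers repeatedly exchange values, each moving by an amount scaled by its partner's relative weight, so they converge on a consensus that lies nearer the heavier observer.

```python
# Two observers exchange values; each moves toward the other by an amount
# scaled by the partner's relative weight (all numbers here are made up).
a_val, a_w = 0.0, 3.0   # heavy (confident) observer
b_val, b_w = 1.0, 1.0   # light observer
rate = 0.5              # fraction of the disagreement settled per exchange

for _ in range(60):
    gap = b_val - a_val
    a_val, b_val = (a_val + rate * (b_w / (a_w + b_w)) * gap,
                    b_val - rate * (a_w / (a_w + b_w)) * gap)

print(a_val, b_val)  # both converge to 0.25 -- nearer the heavy observer
```

The "distance" that shrinks each round is the disagreement itself, which is the kind of quantity one would hope to reinterpret information-geometrically.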
So spacetime might come about as emergent degrees of freedom resulting from communicating observers, and the measures of spacetime should then hopefully relate back to probabilistic notions. (I do not take these from standard probability theory; I rather think of information theory in terms of the combinatorics of distinguishable states, and this leads to a quantized probability itself. That is, probability does not take a continuum of values from 0 to 1; it's all about combinatorics, and continuum probability would only be recovered in the large-number limit. I think the interesting physics happens when the large-N approximation is invalid.) Ariel Caticha has hoped to derive GR from a MaxEnt principle where he chooses a particular entropy. See
http://arxiv.org/PS_cache/gr-qc/pdf/0508/0508108v2.pdf I don't share his reasoning all the way, but he has many good insights worth reading.
One technical advantage of his approach is that it comes with a natural "cutoff" that prevents infinities: the cutoff is simply implicit in the discreteness of the combinatorics. The complexity of the structures will probably take the role of mass; the mass of a microstructure is then simply the confidence in that microstructure.
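Here is a minimal sketch of what I mean by quantized probability and the implicit cutoff (my own illustrative code, hypothetical names): with n distinguishable trials, probability can only take the values k/n, and the continuum [0, 1] is only recovered as n grows.

```python
from fractions import Fraction

def probability_spectrum(n):
    """All probability values available when an observer can only count
    k occurrences out of n distinguishable trials (hypothetical name,
    just to illustrate the combinatorial picture)."""
    return [Fraction(k, n) for k in range(n + 1)]

# Small n: probability itself is quantized -- here only five possible
# values, and the spacing 1/4 acts as a built-in cutoff.
print(probability_spectrum(4))

# Large n: the spectrum becomes dense in [0, 1], recovering the
# continuum in the large-N limit.
spectrum = probability_spectrum(1000)
print(len(spectrum), float(spectrum[1]))  # 1001 values with spacing 0.001
```

No infinities can appear because nothing finer than 1/n is ever distinguishable; the resolution of the spectrum is set by n, which is the role a mass-dependent cutoff would play.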
But this cutoff is not universal; it would be a function (among other things) of the observer's own mass. Thus a massive observer sees a different discretization spectrum of the "continuum" than a light observer does. As I see it, this quest, the way I picture the solution, really implies a full reconstruction of the mathematical continuum. The flat introduction of the continuum in physics is a big leap; there is no doubt physics behind the continuum as well. What you miss when you flatly introduce continuum models is that you lose track of the physical degrees of freedom and get lost in the mathematical redundancy. Also, expressions such as infinity - infinity or 0*infinity are highly unphysical, yet they tend to show up all over the place. This is pathological IMHO.
But this is tricky, and I get regular headaches from it. I am trying my best to find out what others are doing, and I have some favourites, but there still seems a long way to go. At least I have managed to find the confidence to stick to the quest.
/Fredrik