Is Information Theory Driving Astrophysics in the 21st Century?

In summary: the article discusses how the mathematical theory of complexions can be used to deduce that groups can induce geometry on sets. This is relevant to information theory because entropy/information is equated to the size of groups.
  • #1
Chronos
Science Advisor
Gold Member
Is information theory the crucible of astrophysics in the 21st century? The number and quality of cosmologically related IT papers in the past year are impressive. I am admittedly swayed by this approach. For example:

arXiv:0708.2837
The Physics of Information
Authors: F. Alexander Bais, J. Doyne Farmer
 
  • #2
In my personal opinion, information-theoretic and game-theoretic views on physics are definitely what I am convinced have huge potential, not specifically in astrophysics but for fundamental physics in general.

Though there seem to be several different approaches to "information physics"; some don't appeal to me, while others do.

/Fredrik
 
  • #3
Someday we will toast IT as the road to reality - IMO. The methodology is powerful.
 
  • #4
I am often accused of being "philosophical", but I personally put large emphasis on strategy and coherence of reasoning. I want to see an implementation of the scientific method in our formal frameworks to a larger extent. In this quest, the information approaches seem to me to be the natural choice in this spirit. I think it will revolutionize not just specific theories but, more importantly, the overall strategy of theoretical physics.

/Fredrik
 
  • #5
Fra said:
I am often accused of being "philosophical", but I personally put large emphasis on strategy and coherence of reasoning. I want to see an implementation of the scientific method in our formal frameworks to a larger extent. In this quest, the information approaches seem to me to be the natural choice in this spirit. I think it will revolutionize not just specific theories but, more importantly, the overall strategy of theoretical physics.

/Fredrik

I also see information theory as holding a key to the fundamental structure at all levels. There is some complexity to any structure, I assume, and so it must take some information to describe that structure, even at the smallest level. My question is: what is the most basic definition of information? And how does it relate to the basic parameters of space and time? Does the curvature of spacetime itself have structure and therefore information?
 
  • #6
Chris Hillman said:
Simply from the fact that the field equation of a classical gravitation theory typically comprises a system of coupled differential equations, one can seek to apply general theories attempting to capture the "size" of the solution space, and since measures of "size" or "dimension" are often allied to some "entropy", this can introduce some notion of "information". Similarly, if your spacetime model incorporates probability theory (see for example recent work on the Einstein-Vlasov equations), you can in principle start computing probabilistic entropies. However, as I tried to stress above, you shouldn't compute stuff unless you have a clear rationale for expecting that these quantities capture phenomena of interest, since otherwise you are following a well-established recipe for "lying with mathematics".

This perspective appeals to me. As I understand it, information is a relation about probabilities. Probabilities come about ONLY when you talk about the number of different ways that a result can occur - from an experiment, or a solution of an equation. And since we are talking about mathematical relations that describe physical events, we can narrow our concerns to multiple solutions of the same mathematical problem (assuming a mathematical description can be found). I wonder if it's always the case that if there are multiple solutions, then there is always an underlying symmetry group involved. Then information/entropy would always be a measure of the size of some underlying symmetry group. What do you think?
 
Last edited:
  • #7
Chris Hillman said:
Indeed, the mathematical context assumed in the theory of complexions is that some group G acts on some set X. (For concreteness, in this post I will consider left actions, but sometimes it is convenient to take right actions instead.) According to Klein and Tits, this induces a notion of "geometry" on X such that G (or rather a homomorphic image of G, if the action is not faithful) serves as the geometrical symmetry group of the "space" X.

A group G acting on some set X "induces a notion of 'geometry' on X"... Wow... That says a lot. If entropy/information is equated to the size of groups, and geometry is deduced from groups, then can this be used to equate entropy/information to gravitation/geometry? Have you seen any papers on this?

The math you use is over my head, sorry. I'm still trying to justify why I should study the subject. It looks as though this all is becoming relevant. Thanks.
 
  • #8
I actually said the exact opposite of what you thought I said!

Mike2 said:
A group G acting on some set X "induces a notion of 'geometry' on X"

See for example Ken Brown, "What is a building?", Notices of the Amer. Math. Soc. 49 (2002), no. 10, 1244--1245. http://www.ams.org/notices/200210/what-is.pdf (Be warned that you'll need a fairly solid background in math to understand the question in the title.)

Mike2 said:
If entropy/information is equated to the size of groups,

The whole point of what I said about complexions was that "information" need not be measured by a number. I said that the entropies in that theory are the dimensions of the complexions (in the case when G is a Lie group acting by diffeomorphisms on a smooth manifold X).

Mike2 said:
[if] geometry is deduced from groups, then can this be used to equate entropy/information to gravitation/geometry?

The whole point of what I said about learning about IT was to be very careful to avoid glib identifications, interpretations, and so on. These are likely to be terribly misleading or flat out wrong.

I most certainly did not say that "entropy" can be "equated" with "gravitation".

Mike2 said:
I'm still trying to justify why I should study the subject. It looks as though this all is becoming relevant.

Information theory has always been relevant. The whole point of my posts was that anyone interested in new colonies of the IT empire, so to speak, should spend the time and energy to learn about the great cities of IT, since you can't understand IT without knowing something about what the greatest information theories (particularly Shannon's theory) actually say.

Mike2 said:
The math you use is over my head, sorry.

On reflection I deleted my posts since I'd rather remain silent than be so badly misunderstood. However, I take the point that it was not your fault that you misunderstood me: I am not familiar with this subforum, and I mistakenly assumed a higher ambient level of mathematical sophistication. Sorry I confused you!

I stand by my main point, that "information theory meets gravitation" is probably the trickiest area in all of contemporary physics, and by my warning that speculations which are not grounded in an exceptionally solid and wide-ranging background in mathematics, physics, and philosophy are very unlikely to be of any value to serious students of physics.
 
Last edited:
  • #9
I agree that confirmation of both what is and what is not observed is the bar for any ATOE [almost theory of everything!] like quantum gravity. IT is not unlike string theory - unphysical [or at least unobservable] artifacts abound. It is, however, fascinating to examine models that approximate observational limits. It is inevitable that one such model will be at least functionally correct. The devil, of course, is in the details. Also not unlike the Anthropic Principle, IT is a useful tool for detecting naked emperors.
 
  • #10
Chris Hillman
... On reflection I deleted my posts ...
But ... I already read them...and got full of salt
 
  • #11
Chris, thanks for participating in this thread! I think it was a pity (even despite our ignorance) to delete your responses, which also contained some nice references; fortunately I was at least able to read your short-lived posts once last night :)

As Chris wrote in the deleted posts, there are many approaches to the topic. I am not an expert in any particular topic; I just try to follow my own strategy to answer my questions, and I like to learn what's relevant along the way. It sure is easy to drift away in your own thoughts, but I find it similarly risky to be persuaded by existing formalisms. I try to find the balance that keeps me on track without wasting intrinsic motivation and creativity.

Chris has commented on this already, but here are some more comments on the way I see it, without implications for how others see it.

Mike2 said:
My question is what is the most basic definition of information? And how does it relate to the basic parameters of space and time.

Most commonly, information is defined in terms of entropy. Entropy is usually regarded as a measure of missing information. The problem is then, of course: what is entropy? And there are several different definitions of entropy, which makes the information concept somewhat arbitrary.

There are some axioms one can use to derive particular entropies, but then again, what's up with those axioms? I personally try another route. If you treat the entropy right, I think it will, to a certain extent, not matter as much which version you use, since the absolute entropy is not as interesting; it's more the dynamics of the entropy that is interesting, I think.

For my own thinking I am still pondering and working on it, but I try to work in a probabilistic framework, without explicitly defining the entropy. The interesting part to me is stuff like the transition probabilities, which are certainly related to relative entropies, and there are also similarities to the "action". It bears a resemblance to the Feynman path integral, but I am looking for something more explicit.
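To be concrete about the kind of quantity I mean by "relative entropy" (this is just the standard textbook definition, not anything from my own framework, and the numbers below are arbitrary), here is a minimal sketch:

[code]
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p||q) in bits, for two probability
    distributions p, q over the same finite set of outcomes."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Example (numbers are arbitrary): an observer whose expectation is the
# uniform q is "surprised" by data actually distributed as p.
p = [0.7, 0.2, 0.1]
q = [1/3, 1/3, 1/3]
print(relative_entropy(p, q))  # > 0; it is 0 exactly when p == q
[/code]

The point of the relative (rather than absolute) form is that it always compares one state of knowledge against a reference, which is closer to the "dynamics of entropy" I care about than any absolute number.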

I am doing this entirely on a hobby basis, and don't have much time, so things move slowly.

Mike2 said:
Does the curvature of spacetime itself have structure and therefore information?

Ariel Caticha has written some interesting papers where his idea is to define the distance measure of space in terms of the probability of mixing up two space points - a kind of information geometry. He also tries to elaborate on the dynamics via entropy dynamics. He has some ideas for deriving GR from principles of inductive inference. From what I know he hasn't succeeded yet. I like all his papers, though there are specific things he does that I personally do not find satisfactory. This has exactly to do with the entropy stuff.

Check out his http://arxiv.org/PS_cache/gr-qc/pdf/0301/0301061v1.pdf but all his papers are interesting. The details are thin, though, as it's clear that the approach is young and most of the work is left to be done.

What I miss in these papers is a fundamental account of mass and energy. My intuitive idea is that inertial and gravitational phenomena and the inertia-like phenomena in the information world are probably too similar to be a coincidence. I am still trying to find the proper formalism for it, but the first loose association is between inertial energy or mass and information capacity. I've also got some ideas about how learning mechanics gives rise to self-organised structures.

I personally have started my own elaborations from the basic concept of distinguishability. This is closely related to Ariel's approaches. It means you start with a basic boolean observable. I have had some headache over this, but I have a hard time finding a simpler and more plausible starting point. Then, given that there is an observer with some memory capacity, relations can be built during the course of fluctuations. Here one can imagine several options and I'm still thinking. But one way is to consider that the natural extension of this boolean observable is an extension from {0,1} to {0,1,2} to {0,1,2,3} etc., where 1, 2, 3 are simply labels and could as well be a, b, c, etc. Ultimately we get a one-dimensional "string" corresponding to a continuum between the boolean 0 and the boolean 1. Next this can go on, if the environment so calls for, to inflate the string into more dimensions... And meanwhile these structures must be correlated with the state of the observer. The complexity is constrained by the information capacity of the observer.

In this case I loosely use information capacity as a relative notion.

I find the difficulty to be that, in order to get this consistent, the representations and the dynamics are related, and it seems hard to find objective, hard references. I think of it as basically correlations between self-organising structures.

The problem with information is also that it ultimately builds on probability theory, and the problem with that (at least my personal problem) is that any realistic model must infer the probability space from experiments, or experience. So the probability space is also constrained by the observer's information capacity. Anything else seems like an unacceptable idealisation to me. This renders this notion also relative. Not only are probabilities relative in the Bayesian sense; different views may also evaluate the probability space differently.

I am hoping to produce a paper on this eventually, but time is the problem. The good thing about the slow pace, though, is that I have plenty of time to reflect properly over things and not just bury myself in mathematics.

/Fredrik
 
  • #12
Fra said:
The problem with information is also that it ultimately builds on probability theory, and the problem with that (at least my personal problem) is that any realistic model must infer the probability space from experiments, or experience. So the probability space is also constrained by the observer's information capacity. Anything else seems like an unacceptable idealisation to me. This renders this notion also relative. Not only are probabilities relative in the Bayesian sense; different views may also evaluate the probability space differently.

I forgot to write that what at first seems like a problem, or circular reasoning, gets a nice resolution; I think of it as requiring evolution to stay consistent. The notion of probability is, strictly speaking, uncertain; we can only find what we think is the probability. We can measure the frequency, but what is the "true probability"? As Chris notes, this brings you down to the axioms of probability and the interpretations. I see a need to tweak them. Meaning we can only get the probability of the probability, and not even that! Which means you end up with probability of probability of probability... which sort of makes no sense. I'm not sure if this is related to what Chris refers to as the algorithmic entropy (I forgot his wording, as the post was deleted); anyway, that seems to be the only CORRECT one... but then it takes infinite time, memory, and data to compute it! So it's useless. The resolution is the evolutionary view... the drifting view... and the drifting rate is constrained by constraints on information capacity...

/Fredrik
 
  • #13
Chris Hillman said:
The whole point of what I said about complexions was that "information" need not be measured by a number. I said that the entropies in that theory are the dimensions of the complexions (in the case when G is a Lie group acting by diffeomorphisms on a smooth manifold X).
Sorry you deleted your posts. You are not responsible for someone else's misunderstanding. Even if you were, these forums are not so formal (as peer reviewed forums are) that you should have to worry about it. I hope your disappointment with the ambient skill level will not deter you. We all appreciate your efforts. Thanks.

My assumptions seem too esoteric to give up lightly, so let me reiterate with emphasis to see if understanding can be gained...

First, let's restrict our conversations to mathematical models. I understand that entropy/information can be measured by observation. But I suspect that we will eventually find a mathematical model for everything, and we will want to also describe entropy in terms of that model.

Now I suppose that some mathematical models may have multiple solutions. One question is if there are multiple solutions, then does this always imply an underlying symmetry group?

Another question is can the alternatives always be normalized into a probability distribution for the various possibilities? Or is it more the case that just because there are alternatives doesn't mean we can know how probable one solution is over another? Or is there a natural measure of how likely one solution is in terms of how much of the underlying set is occupied by each solution?

If information is so broadly defined that it need not even be describable with a number, then would entropy be a more fitting term to describe probability distributions?

If alternatives can be normalized into a probability distribution, would that mean that the size of the underlying symmetry group relates to entropy? Or do we need more than just the size of the group to form a distribution? And could the needed information be gotten also from group properties in order to form a distribution? Or is it more the case that knowing the symmetry and group properties of a solution space still may not be enough to form a distribution?

You said, "Indeed, the mathematical context assumed in the theory of complexions is that some group G acts on some set X. (For concreteness, in this post I will consider left actions, but sometimes it is convenient to take right actions instead.) According to Klein and Tits, this induces a notion of "geometry" on X such that G (or rather a homomorphic image of G, if the action is not faithful) serves as the geometrical symmetry group of the "space" X." "

This sounded intriguing, of course, but perhaps I read more into that than warranted; my apologies. To me this seemed to contain the seeds for a generalized surface entropy formula for any underlying set X. My understanding of simplicial complexes is that they can approximate any manifold. Or am I misunderstanding your use of complexion? If I'm reading you right, it would seem to mean that we have a fundamental definition of entropy (maybe not information) for any mathematical model with symmetry properties. Does this sound right? Thanks.
 
Last edited:
  • #14
Mike2 said:
Another question is can the alternatives always be normalized into a probability distribution for the various possibilities? Or is it more the case that just because there are alternatives doesn't mean we can know how probable one solution is over another? Or is there a natural measure of how likely one solution is in terms of how much of the underlying set is occupied by each solution?

If information is so broadly defined that it need not even be describable with a number, then would entropy be a more fitting term to describe probability distributions?

Mike, forgive me if I misinterpret you but some reflections of mine FWIW...

IMO, these questions are good and interesting and they trace down to the fundamentals of probability theory in relation to reality.

Since we are talking about reality and physics, rather than pure mathematics, the question is how to interpret and attach the notion of probability to reality in the first place. In QM the idealisation made is that we can know the probability exactly, but how do you actually make an observation of a probability? (And if you don't, what's up with it?) The typical "make an infinite measurement series and then the relative frequency converges to the 'true probability'" is very, very vague IMO. Sometimes it makes sufficient sense for all practical purposes, like when "big" is effectively close enough to infinity and when the environment and experimental settings can be assumed not to have changed, which is in general the other major problem if you make an experiment that takes infinite time.

So, IMHO at least, the notion of information is usually related to the notion of entropy, which is related to the notion of probability. Which means the issues of information or missing information are ultimately rooted in probability theory itself.

Usually the entropy of a given probability distribution can be directly conceptually associated with the probability of that specific distribution in the larger probability space consisting of the space of all distributions.
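A minimal sketch of the association I mean (standard counting only, nothing specific to my own framework): the number of length-n sequences with a given empirical distribution is a multinomial coefficient, and its logarithm per symbol approaches the Shannon entropy of that distribution, which is what ties the "probability of a distribution" to its entropy.

[code]
import math

def shannon_entropy(p):
    """Shannon entropy H(p) in bits of a finite probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def log2_multiplicity(counts):
    """log2 of the number of sequences having exactly these symbol counts,
    i.e. log2( n! / (n1! n2! ... nr!) )."""
    w = math.factorial(sum(counts))
    for c in counts:
        w //= math.factorial(c)
    return math.log2(w)

# Example: 100 symbols, empirical distribution (0.5, 0.3, 0.2).
counts = [50, 30, 20]
n = sum(counts)
print(log2_multiplicity(counts) / n)             # ~1.42 bits per symbol
print(shannon_entropy([c / n for c in counts]))  # ~1.49 bits; the two agree as n grows
[/code]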

But the problems are the choice of a prior, and the induction of the space of distributions in the first place. If they are given, the problem is easier. But in reality, these things aren't just "given" like that. This is also quite a philosophical problem.

/Fredrik
 
  • #15
What I've personally tried to do is to use pure combinatorics on information quanta (boolean states) to infer the probability of a given distribution (one that is being built in a particular way so as not to lose the relation to first principles), without muddling the concepts by first defining an arbitrary entropy and then relating this entropy to a probability. Since the whole point of the entropy is to generate probabilities in the first place, I thought it would be cleaner to do that directly.

The conceptual problem I'm struggling with is what the dynamical equations will look like, and exactly how to pull the time parameter out of this. My idea is that time is just a parametrisation of change, along the direction of the most probable change, with the units being normalised by an arbitrary "reference change". And I expect that this will imply a built-in bound on the rate of change and thus on information propagation.

/Fredrik
 
  • #16
See the math forum for remarks by Chris. He gives a powerful and enlightening presentation. So, grab a drink, sit back, and enjoy. It is a refreshing and educational review of IT and its place in the cosmos.
 
  • #17
Thanks for the pointer, Chronos. With all the subforums on here I had no idea; due to time constraints I never looked into most subforums, except a few.

/Fredrik
 
  • #18
Musing on misunderstood warnings

Fra said:
Chris, thanks for participating in this thread! I think it was a pity (even despite our ignorance) to delete your responses, which also contained some nice references; fortunately I was at least able to read your short-lived posts once last night :)

You are welcome, but do you understand why the following Washington Post article reminded me of this thread?

http://www.washingtonpost.com/wp-dyn/content/article/2007/09/03/AR2007090300933_pf.html

Chronos said:
See the math forum for remarks by Chris. He gives a powerful and enlightening presentation. So, grab a drink, sit back, and enjoy. It is a refreshing and educational review of IT and its place in the cosmos.

I really, really hope that you all redeem yourselves by reading carefully henceforth, and by studying some of the sources I cited---at the very least, Cover and Thomas and the on-line expository papers I cited. To repeat:

http://www.math.uni-hamburg.de/home/gunesch/Entropy/shannon.ps
http://www.math.uni-hamburg.de/home/gunesch/Entropy/entropy.html
http://www.math.uni-hamburg.de/home/gunesch/Entropy/dynsys.html
 
Last edited:
  • #19
Danger Will Robinson!

Fra said:
I am not an expert in any particular topic; I just try to follow my own strategy to answer my questions, and I like to learn what's relevant along the way. It sure is easy to drift away in your own thoughts, but I find it similarly risky to be persuaded by existing formalisms. I try to find the balance that keeps me on track without wasting intrinsic motivation and creativity...I have plenty of time to reflect properly over things and not just bury myself in mathematics.

Some of this seems to reflect the myth (common among "armchair scientists") that delving into the literature will stifle creativity. The truth is quite the opposite. By studying good textbooks and expository papers about a field you would like to contribute to, you avoid repeating one common beginner's mistake after another, which makes your progress toward attaining some level of mastery much more efficient. Furthermore, reading really good ideas from those who are already experts in the field makes it much more likely that your own creativity will lead to something genuinely novel and possibly interesting to others.

Fra said:
The problem with information is also that it ultimately builds on probability theory

Once again, one of the major points I tried to make is "it ain't necessarily so". While at the same time urging you all to stop posting here and read Shannon 1948 and some other good sources of information about Shannon's information theory (the first, by far the most highly developed, and in many ways the most impressive, but nonetheless not the most suitable for every phenomenon of possible interest in which "information" appears to play a role).

Fra said:
The notion of probability is, strictly speaking, uncertain; we can only find what we think is the probability. We can measure the frequency, but what is the "true probability"?

That is only one of the issues I mentioned concerning "uncertainties of probability".

Fra said:
As Chris notes, this brings you down to the axioms of probability and the interpretations.

You're doing it again. That is not what I said!

Mike2 said:
My assumptions seem too esoteric to give up lightly

Too esoteric to give up lightly?

Mike2 said:
I understand that entropy/information can be measured by observation.

That is not what I said!

Mike2 said:
But I suspect that we will eventually find a mathematical model for everything, and we will want to also describe entropy in terms of that model.

I trust you mean "model of something other than communication" (or "information"). Or perhaps "theory" of something?

One of my points was that Shannon 1948 is "the very model of a mathematical theory". Therefore, it behooves anyone seeking to build a mathematical theory of anything to learn what Shannon did.

Mike2 said:
Now I suppose that some mathematical models may have multiple solutions.

Are you sure you are not confusing mathematical model with field equation?

The Schwarzschild perfect fluid matched to a Schwarzschild vacuum exterior is a mathematical model of an isolated nonrotating object, formulated in a certain physical theory, gtr. It is also a (global, exact) solution to the Einstein field equation which lies at the heart of that theory.

The Markov chains discussed by Shannon in his 1948 paper form a sequence of mathematical models of natural language production. As he is careful to stress, this kind of model cannot possibly capture aspects of natural language other than statistics. His ultimate point is that for the mathematical theory he constructs, motivated by this sequence of Markov chains (which provide more and more accurate models of the purely statistical aspects of natural language production), statistical structure turns out to be the only kind which is needed. Which is why I was careful to stress that in Shannon's theory, a nonzero mutual information between two Markov chains does not imply any direction of causality, only a statistical correlation in behavior.

Mike2 said:
One question is if there are multiple solutions, then does this always imply an underlying symmetry group?

Can you clarify what you mean by "solution" and "model"?

Mike2 said:
Another question is can the alternatives always be normalized into a probability distribution for the various possibilities?

You should be able to answer that yourself, I think. (This comes up in any good textbook on quantum mechanics, for example.)

Mike2 said:
Or is it more the case that just because there are alternatives doesn't mean we can know how probable one solution is over another? Or is there a natural measure of how likely one solution is in terms of how much of the underlying set is occupied by each solution?

I really, truly, deeply urge you to study Shannon 1948.

http://www.math.uni-hamburg.de/home/gunesch/Entropy/infcode.html

Mike2 said:
If information is so broadly defined that it need not even be describable with a number,

I didn't say that!

Mike2 said:
If alternatives can be normalized into a probability distribution, would that mean that the size of the underlying symmetry group relates to entropy? Or do we need more than just the size of the group to form a distribution?

I discussed a number of quite different theories of information. The whole point was that there are many quite different ways of defining notions of information. Some use very little structure (e.g. Boltzmann's theory), some require the presence of a probability measure (Shannon's theory) or a group action (theory of Planck's "complexions"). So I think you may be mixing up at least two theories and two or three levels of mathematical structure.

Mike2 said:
And could the needed information be gotten also from group properties in order to form a distribution?

In a situation in which the "entropies" defined in two or more information theories makes sense, because the requisite mathematical structure (probability, action) are present, it is reasonable to ask how these quantities are related. As I said, in general they are not numerically the same, but they may approximate each other or even approach each other in some limit.

If you do the exercises I suggested, you should be able to answer your own question.

Mike2 said:
This sounded intriguing, of course, but perhaps I read more into that than warranted; my apologies. To me this seemed to contain the seeds for a generalized surface entropy formula for any underlying set X. My understanding of simplicial complexes is that they can approximate any manifold. Or am I misunderstanding your use of complexion?

Are you perhaps confusing the so-called holography principle with something you think I said?

Mike2 said:
If I'm reading you right, it would seem to mean that we have a fundamental definition of entropy (maybe not information) for any mathematical model with symmetry properties.

I said that whenever we have a group action, we have complexions, and these obey essentially the same formal properties as Shannon's entropies, in particular the quotient law. Thus, any structural invariant of these will also respect the formal properties of Shannon's entropies, and thus will admit an interpretation in terms of "information". I briefly mentioned two cases in which such "Galois entropies" are obvious (actions on finite sets, and finite dimensional Lie groups of diffeomorphisms), but I did not imply that such quantities can be found for any group action whatever.

(Regarding axiomatics: note that Shannon's statement of the formal properties he takes as axiomatic is given in the context of probability. This is why what I just said doesn't contradict his famous unicity theorem. The formal properties of which I speak can however be expressed in a more general context than probability theory, namely what I called (in "What is Information?") join-sets, a kind of weakening of a lattice as in lattice theory.)
 
Last edited:
  • #20
Chris Hillman said:
One point which I should think would be self evident is that by studying good textbooks and expository papers about a field you would like to contribute to, you avoid repeating one common beginner's mistake after another, which makes your progress much more efficient.
I don't think anyone is going to write a book detailing all the mistakes others have made in physics. I wish they would. I wonder what the table of contents would look like.

Another is that reading really good ideas from those who are already experts in the field makes it much more likely that your own creativity will lead to something genuinely novel and possibly interesting to others.
Foundational issues don't seem to be what most experts are interested in. I feel (probably like Fra) that it is too easy to get lost because too many trees obstruct the forest.



Once again, one of the major points I tried to make is "it ain't necessarily so". While at the same time urging you all to stop posting here and read Shannon 1948 and some other good sources of information about Shannon's information theory (the first, by far the most highly developed, and in many ways the most impressive, but nonetheless not the most suitable for every phenomenon of possible interest in which "information" appears to play a role).
We all appreciate your efforts, and we consider your posts to have some authority. I think the problem is that we're trying desperately to simplify all the information you've given us. Thank you for keeping us on our toes.

I wonder if the various kinds of information you've noted can be classified into two areas - one, that is based on probabilities understood by observation (how many faces of a die, or how many possible letters in a word), and two, based on probabilities determined from mathematical models, like the number of eigenstates or something like that? Or were you saying that some information is not based on probability at all? If not based on probability, are they all at least based on alternatives?

I said that whenever we have a group action, we have complexions, and these obey essentially the same formal properties as Shannon's entropies
But not the other way around - whenever we have Shannon type entropies we necessarily have group action? That is really the question I'm curious about.

PS. I have read some of Shannon's work... years ago. At the time I found it fascinating, and that's why I'm interested now.

Thanks.
 
Last edited:
  • #21
Reply to Fra

I will respond here to some questions asked in the other thread, but for the future, let's not import the discussion in this thread into the (hopefully more thoughtful!) discussion in the other thread, OK?

Fra said:
And what are your current views/research? What's your views on fundamental physics? What's your stance on QG?

I hesitate to express my "views" because... etc., etc., etc.

Fra said:
What's your stance on QG?

Not for amateurs? Maybe even not for anyone intellectually inferior to Witten?

Not for zealots? Not for careless or unwise thinkers?

Not in fact the most burning question of the 21st century? From the Google cache:

Scientists are currently re-examining two of the most venerable institutions in academic science: publishing and tenure. (For the latter see The Scientist http://www.the-scientist.com/.) I feel that this presents a wonderful opportunity to reshape science for the new century, killing four birds with two stones, as it were.

...

No-one denies that unifying gravitation with The Rest is a topic of natural and enduring interest in physics. But what have we gotten for all this effort? The Bogdanov scandal.

...

Compare the number of mainstream news stories, public policy papers, judicial pronouncements, and so on, which quote a statistical analysis. Statistics is not just an important part of science, it is the most important part. All good things in our society flow from science, and science requires statistics to operate, indeed government requires statistics to operate. Yet, contemporary statistical practice is highly suspect, because the foundations of statistics and probability theory remain so mysterious.

Yet few departments of philosophy or statistics even offer a solid course on the most important problem in science or indeed 21st century society (see above), the philosophy of statistics.

To avoid misunderstanding: the bulk of research funding in physics still goes to such venerable topics as Newtonian hydrodynamics, and I am not suggesting that this money is mis-spent. To the contrary, these areas of physical research involve a healthy interaction of theory and experiment.

From the Google cache, a followup post:

I'm arguing that a career in stat meets 21st century math meets philosophy will allow you to change the world!

P.S. for the OP: see Brian Hayes, "Sorting out the genome", Am. Scientist 95 (2007). Study Cameron, Permutation Groups. (Same Cameron you might encounter in Theory of Designs in a stat course.) Can you see how sorting a stack of pancakes using a griddle is related to "solving" Rubik's cube? Hint: both are examples of the "restoration problem" in group actions, which is fundamentally related to "symmetry meets information". Can you see how signed permutations are related to the wreath product (see the book by Cameron)? The larger point is that the mathematics needed for genomics is the same kind I argue should be the center of attention for the New Statistics.

Anyone wishing to rant against any of these suggestions, please start a new thread in the "General Discussion" forum.
 
Last edited:
  • #22
Mike2 said:
I don't think anyone is going to write a book detailing all the mistakes others have made in physics. I wish they would. I wonder what the table of contents would look like.

That would be a huge encyclopedia of folly, not just one book.

Mike2 said:
Foundational issues don't seem to be what most experts are interested in.

Are you back to the search for QG now? If so, to the contrary, my impression is that the experts are well aware of the importance of such issues.

Mike2 said:
I feel (probably like Fra) that it is too easy to get lost because too many trees obstruct the forest.

I don't mean to sound flippant, but if you are back to QG now, I think it sensible to assume that QG may not be an appropriate interest for anyone who is not a very fast learner. Fortunately, there are many equally interesting but less demanding topics!

Mike2 said:
We all appreciate your efforts, and we consider your posts to have some authority. I think the problem is that we're trying desperately to simplify all the information you've given us. Thank you for keeping us on our toes.

You are welcome, but the best way to show your gratitude would be to read the textbooks and expository papers I mentioned!

Mike2 said:
I wonder if the various kinds of information you've noted can be classified into two areas - one, that is based on probabilities understood by observation (how many faces of a die, or how many possible letters in a word), and two, based on probabilities determined from mathematical models, like the number of eigenstates or something like that?

One of my major points is that probabilities need not be involved at all in setting up a workable information theory.

Mike2 said:
Or were you saying that some [definitions of] information [are] not based on probability at all?

Yes, exactly! :smile:

Mike2 said:
If not based on probability, are they all at least based on alternatives?

There is such a variety that I hesitate to generalize, but off the top of my head, I can say this:

* the "shannonian theories" (those in which "entropies" possesses the same formal properties as Shannon entropies, but which need not be defined in terms of probabilities) are based upon alternatives

* notions of entropy based upon the logarithmic growth rate of some counting series could be said to be based upon counting alternatives, and thus, similarly for many "dimensional" notions of entropy
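For a concrete toy example of the second kind (a completely standard one, offered only as a sketch and not tied to any physical system): count the binary strings of length n containing no two adjacent 1s. The count grows like a power of the golden ratio, and the logarithmic growth rate, about 0.694 bits per symbol, is the entropy one assigns to this constraint.

[code]
import math

def count_no_11(n):
    """Number of binary strings of length n with no two adjacent 1s."""
    a, b = 1, 2  # counts for lengths 0 and 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
for n in (10, 20, 40, 80):
    print(n, math.log2(count_no_11(n)) / n)  # 0.717, 0.706, 0.700, 0.697, ...
print(math.log2(phi))                        # 0.6942..., the limiting growth rate
[/code]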

One information theory I didn't even mention has something of the flavor of certain topics in mathematical logic and AI research. See Devlin, Logic and Information.

Mike2 said:
whenever we have Shannon type entropies we necessarily have group action?

The situation with group actions is analogous to the situation with probability measures. These are very general concepts, so it is almost always possible to define any number of actions or probability measures. One of my major points was that it matters very much that you use the right action, or the right probability measure. Sometimes neither actions nor probabilities are what you really want.

Mike2 said:
PS. I have read some of Shannon's work... years ago. At the time I found it fascinating, and that's why I'm interested now.

Shannon 1948 is one of the greatest scientific papers of all time, and also one of the most charming. As such it is always worth re-reading! I hope you will also read the papers by McKay and Wyner for a very different perspective on Shannon's information theory.
 
Last edited:
  • #23
Chris Hillman said:
. . . By studying good textbooks and expository papers about a field you would like to contribute to, you avoid repeating one common beginner's mistake after another, which makes your progress toward attaining some level of mastery much more efficient. Furthermore, reading really good ideas from those who are already experts in the field makes it much more likely that your own creativity will lead to something genuinely novel and possibly interesting to others. . . .
Indeed. This is a terrific illustration of the best way to do science - standing upon the shoulders of giants. In fairness to the other posters [and myself], we are merely less well informed, not crackpots. Your insights and instructional references are exactly what is desired and much appreciated. Think of us as chess duffers: always eager, but undisciplined. I agree that probability theory is often misapplied to information theory.
 
  • #25
First, thanks for the response in the other thread.

Chris Hillman said:
Some of this seems to reflect the myth (common among "armchair scientists") that delving into the literature will stifle creativity. The truth is quite the opposite. By studying good textbooks and expository papers about a field you would like to contribute to, you avoid repeating one common beginner's mistake after another, which makes your progress toward attaining some level of mastery much more efficient. Furthermore, reading really good ideas from those who are already experts in the field makes it much more likely that your own creativity will lead to something genuinely novel and possibly interesting to others.

These are good points, and my main point was not to suggest so strongly what you are reacting to. I certainly do read others' work, and books, as much as I have the time to. It just seems some work is more worth the time reading than others, and it would easily take many lifetimes to read all the papers ever written, so I still consider it essential to have a strategy for investing your time in order to make maximum progress in a given time. Spending 100% of my time trying to decode what others are doing does not seem like a good idea to me.

I cannot read everything even if I wanted to. This was in part a disclaimer, since I am not a mathematician, nor is that my prime focus, even though I have a basic math education - without which I would be completely crippled.

I am going to try to locate some of your references to see what they are; I was also unaware of your nice info theory summary website until now.

Chris Hillman said:
While at the same time urging you all to stop posting here and read Shannon 1948 and some other good sources of information about Shannon's information theory (the first, by far the most highly developed, and in many ways the most impressive, but nonetheless, not the most suitable for every phenomenon of possible interest in which "information" appears to play a role).

I personally haven't actually read Shannon's original papers (at least as far as I can remember atm), but from other sources his definition of information is not what I find useful for my purposes, any mathematical beauty aside.

I will try to look up your information definition that you say has nothing to do with probability; I haven't encountered this before.

Since I'm not a mathematician, perhaps my view is different. There are clearly many definitions of information that are fine in themselves from a mathematical viewpoint, but the hard part is to find out what is most useful for the purpose in question. My personal mission is to try to understand the laws of physics in terms of information processing. And in that context the question is what version of the information concept makes sense. I've read other people's work on this, and they start with the Cox axioms and define the relative entropy, which is a major improvement, but there is still something that isn't right there. But the objections are not of a mathematical nature.

I have on my todo list to check some of the references that Chris suggested! Thanks.

/Fredrik
 
  • #26
Chris Hillman said:
The situation with group actions is analogous to the situation with probability measures. These are very general concepts, so it is almost always possible to define any number of actions or probability measures. One of my major points was that it matters very much that you use the right action, or the right probability measure. Sometimes neither actions nor probabilities are what you really want.

Thanks Chris,

It sounds like you are saying (I hope you are saying) that probability measures can ALWAYS be represented by a group action, and vice versa. Is this right? Thanks.
 
  • #27
Ignorance of Shannon is just plain ignorance

Fra said:
I certainly do read others' work, and books, as much as I have the time to. It just seems some work is more worth the time reading than others, and it would easily take many lifetimes to read all the papers ever written...I cannot read everything even if I wanted to.

Good, it sounds like we are in agreement on at least two generalities (the value of studying the literature, the necessity of prioritizing on the basis of incomplete information), but:

Fra said:
I personally haven't actually read Shannon's original papers (at least as far as I can remember atm), but from other sources his definition of information is not what I find useful for my purposes, any mathematical beauty aside.

My efforts have failed unless everyone who read this thread is studying Shannon 1948, which is by common consent one of the most important scientific papers of all time, and arguably also one of the most charming. IMO, while it is possible to be an educated citizen without having read Nikolai Gogol or Harper Lee, it is not possible to be an educated citizen unless one has read Shannon 1948. It's that important. And you are certainly wrong about what you think you know from "other sources" about the utility and scope of his theory. Which is also one of the most beautiful mathematical theories ever devised, incidentally. It is a joy to see how perfectly Shannon concocted just the right definitions for the problem which concerned him (reliable communication over a noisy channel).

Fra said:
I will try to look up your information definition that you say has nothing to do with probability; I haven't encountered this before.

IMO, that would be silly unless you also study Shannon 1948, but in the collected papers of Einstein you'll find him discussing complexions with Planck and the Ehrenfests, while arguing over Boltzmann's approach to statistical mechanics. But you should have no trouble (if you've had an undergraduate group theory course) in verifying my claims, since these are easy undergraduate exercises (just unwind the definitions and pull on the thread).

One of my major points was that not all questions concerning "information" need be treated using Shannon's theory. That doesn't make sense unless you already have a good sense of how fantastically powerful and versatile Shannon's ideas have proven.
 
  • #28
Mike2 said:
It sounds like you are saying (I hope you are saying) that probability measures can ALWAYS be represented by a group action, and vice versa. Is this right?

I've lost track of how many times you and I have exchanged the following dialog in this one thread:

CH: [itex]\alpha[/itex]

M2: So you're saying [itex]\neg \alpha[/itex]

CH: No, I said [itex]\alpha[/itex].

I can't keep this up indefinitely.
 
  • #29
Just a note

I checked some papers and other material relating to Boltzmann and also discussions by Einstein, and it seems they indeed used the terms complexion and complexion number; for some reason I never committed these terms to memory.

Anyway, since I'm not seeing it from the point of view of pure math, Boltzmann and Einstein seem to be using it (my interpretation from context) pretty much synonymously with, or as a slight generalisation of, the notion of the set of distinguishable microstates consistent with the macrostate, and the complexion number is the number of possible distinguishable microstates or "possibilities" consistent with the constraints or macrostate. This is like Boltzmann's entropy, except I suppose the microstate is generalised beyond the mechanical analogue. This way you can use it to define an entropy of choice without touching the notion of probability directly, by combinatorics only, and you thus get around the issue of defining probability directly in terms of measurements; instead, one has to be able to infer a _distinguishable states_ microstructure from input, or the construct is still unclear (i.e. any "hidden" microstructures are not acceptable). To the limit of my ignorance, this is not trivial either, in particular as the complexity and memory sizes vary - THIS is where I think the keys to many interesting things are. Oddly enough, I think this is very interesting, even for an amateur.

Chris will probably get upset if this is not what he meant, so I explicitly declare that this does not necessarily have any relation to it. I still wear my ignorance with pride :)

But it is more or less what I referred to in post 15. I don't follow all of Chris's steps. He is a mathematician; I am not. This alone explains the communication issues. But I figure there must still be a way for communication to take place :)

/Fredrik
 
  • #30
I had no idea Chris was so well versed, and appreciative of the power of IT. That is awesome, as are the references. Give me about 2 weeks to bone up before daring to comment. I thought modern scientists were virtually ignoring this, IMO, extraordinary resource. I stand [actually jumping up and down] corrected.
 
  • #31
Go ahead, flatter me!

Chronos said:
I had no idea Chris was so well versed, and appreciative of the power of IT.

:smile:

IT was my first mathematical interest. People who know me only from my public postings seem to assume I only know about general relativity, but a page count of my personal notes and a title count of my mathematical library suggest that only [itex]1/15[/itex] to [itex]1/12[/itex] of my mathematical knowledge directly concerns gtr. Of course, perhaps the most wonderful aspect of mathematics is that techniques valuable in one area often turn out to be valuable in others; for example, my interest in perturbation theory doesn't arise from gtr but from my interest in symmetries of differential equations; however, it is very useful in gtr!

Chronos said:
That is awesome, as are the references. Give me about 2 weeks to bone up before daring to comment. I thought modern scientists were virtually ignoring this, IMO, extraordinary resource. I stand [actually jumping up and down] corrected.

:smile:
 
  • #32
Told ya!

Fra said:
I checked some papers and other material relating to Boltzmann and also discussions by Einstein, and it seems they indeed used the terms complexion and complexion number

People sometimes seem to assume I am making the whole thing up! :rolleyes:

Fra said:
Anyway, since I'm not seeing it from the point of view of pure math, Boltzmann and Einstein seem to be using it (my interpretation from context) pretty much synonymously with, or as a slight generalisation of, the notion of the set of distinguishable microstates consistent with the macrostate

It will certainly help to extract the pure math. Suppose we have some set X. A finite partition of X, written [itex]\pi[/itex], is a decomposition of X into r disjoint blocks [itex]A_j \subset X[/itex] such that [itex]X = \cup_{j=1}^r A_j[/itex] (perhaps needless to say, "disjoint" means [itex]A_j \cap A_k = \emptyset, \; j \neq k[/itex]). You can say that the elements of X are microstates each of which gives rise to a unique macrostate, with the macrostates corresponding to the blocks of the partition. Or you can say that the blocks are the preimages of some function [itex]f:X \rightarrow \mathbb{R}[/itex]; for example, a function assigning an "energy" to each element (taking a different value on each block). My point is that none of this need have anything to do with physics.

Fra said:
and the complexion number is the number of possible distinguishable microstates or "possibilities" consistent with the constraints or macrostate. This is like Boltzmann's entropy, except I suppose the microstate is generalised beyond the mechanical analogue.

Right, if you follow up my citation of the expository paper by Brian Hayes in American Scientist, and if you recall the fundamental orbit-stabilizer relation from elementary group theory (see any good book on group theory, for example Neumann, Stoy, and Thompson, Groups and Geometry), you should be able to see that the natural action by [itex]S_n[/itex] on an n-set X induces an action on the set of partitions of X, and the size of the orbit of [itex]\pi[/itex] is then
[tex]\frac{n!}{n_1! \, n_2! \dots n_r!}[/tex]
while the orbit itself is the coset space
[tex] S_n/\left( S_{n_1} \times S_{n_2} \dots \times S_{n_r} \right)[/tex]
where the partition is [itex]X = \cup_{j=1}^r A_j[/itex] with [itex]|A_j| = n_j[/itex]. Here, the stabilizer of [itex]\pi[/itex] is a subgroup of [itex]S_n[/itex] which is isomorphic to the external direct product [itex]S_{n_1} \times S_{n_2} \dots \times S_{n_r}[/itex]; in other words, the stabilizer is a Young subgroup, a kind of "internal direct product" of subgroups which are themselves symmetric groups.

If we are thinking of a function and we take [itex]\pi[/itex] to be the partition of X into preimages, then the stabilizer consists of those permutations which respect the partition, i.e. don't map any point to a point lying in another preimage.

Here, the complexion is the coset space; that is, the orbit of [itex]\pi[/itex] under the induced action by [itex]S_n[/itex] on the partitions of X. (This action can carry a given partition into any other partition with the same block sizes.) In Boltzmann's work the complexion of [itex]\pi[/itex] is the set of microstates corresponding to a given macrostate (e.g. having a given energy value), and to get a "subadditive" measure of the "variety" of the sizes of the blocks, we use the logarithm of the size of the complexion as the Boltzmann entropy. Indeed, for finite complexions the logarithm of the sizes of the complexions is always a generalized Boltzmann entropy. As I already remarked, entropies are essentially dimensions, so we should expect that in another famously tractable case of group actions, finite dimensional Lie groups of diffeomorphisms, the dimension of the cosets (which are finite dimensional coset spaces) will behave as entropies, and they do.
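For anyone who wants to check this with their bare hands, here is a minimal brute-force sketch (a toy example added purely for illustration: n = 5 with block sizes 3 and 2, chosen distinct so that the orbit of the labelling and the orbit of the partition coincide):

[code]
import math
from itertools import permutations

# X = {0,...,4}; the function f assigns "macrostate label" 0 to three points
# and label 1 to two points, so the block sizes are (3, 2).
f = (0, 0, 0, 1, 1)

def act(g, f):
    """Induced action of a permutation g of X on the labelling f:
    the point g(i) receives the label that i had."""
    out = [None] * len(f)
    for i, gi in enumerate(g):
        out[gi] = f[i]
    return tuple(out)

orbit = {act(g, f) for g in permutations(range(5))}
print(len(orbit))                                                    # 10
print(math.factorial(5) // (math.factorial(3) * math.factorial(2)))  # 5!/(3!2!) = 10
print(math.log2(len(orbit)))           # ~3.32 bits: the generalized Boltzmann entropy of f
[/code]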

From this point of view, the Boltzmann entropy of a function (or if you prefer, of the partition into preimages induced by this function) measures the "asymmetry" of the partition. If you are familiar with Polya enumeration theory, you are already familiar with the idea that among geometric configurations consisting of k points in some finite space, the more symmetrical configurations have smaller orbits under the symmetry group, while the more asymmetric configurations have larger orbits. A good example is [itex]D_n[/itex] acting on a necklace strung with n beads.
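A similarly tiny brute-force sketch of that last remark (again just an illustration; I take six beads and two colours, since with five beads every 2-colouring still admits a reflection symmetry):

[code]
from itertools import product

N = 6  # beads on the necklace

def dihedral(n):
    """The 2n symmetries of an n-bead necklace, each given as an index map."""
    rotations = [[(i + k) % n for i in range(n)] for k in range(n)]
    reflections = [[(k - i) % n for i in range(n)] for k in range(n)]
    return rotations + reflections

def orbit(colouring, group):
    return {tuple(colouring[g[i]] for i in range(len(colouring))) for g in group}

group = dihedral(N)
orbit_sizes = {}
for c in product((0, 1), repeat=N):
    o = orbit(c, group)
    orbit_sizes[frozenset(o)] = len(o)

print(sorted(orbit_sizes.values()))
# [1, 1, 2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 12]: the monochromatic necklaces sit in
# orbits of size 1, while a maximally asymmetric colouring such as 110100 has
# orbit size 12 = |D_6| (trivial stabilizer).
[/code]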

Incidentally, from the perspective of the theory of G-sets (sets equipped with an action by some specific group G; this category is analogous to the category of R-modules where R is some specific ring), the induced action on partitions is remarkable in that it satisfies an analogue of the primitive element theorem: every intersection of stabilizers of individual partitions, [itex]G_\pi, \, G_{\pi^\prime}, \dots[/itex], is the stabilizer of some partition. In general, it is certainly not true that every intersection of point stabilizers [itex]G_x, \, G_{x^\prime}, \dots[/itex] is the stabilizer of some point!

In general, given any action by some group G on some set X, there are many interesting "induced actions" one can consider, several of which have "regularizing" properties, in the sense of improving the behavior in some respect. In the induced action on partitions we altered the set being acted on, but we can also alter the group which is acting. For example, it is easy to define the wreath product of two actions (by G on X and by H on Y) and the result is an action by [itex]G \wr H[/itex] on [itex]X \times Y[/itex], in which we take copies of Y indexed by X, thinking of the copies of Y as fibers sitting over the base X, and let copies of H independently permute the copies of Y and let G permute these fibers.

In the case of finite permutation groups this gives a direct connection between Polya enumeration and complexions. Namely: the pattern index enumerates the conjugacy classes of pointwise stabilizers but forgets the lattice structure. For example, let us compare the natural permutation actions by the transitive permutation groups of degree five. Then [itex]C_5, \, D_5[/itex] both give pattern index [itex]1,1,2,2,1,1[/itex] while [itex]F_{5:4}, \, A_5, \, S_5[/itex] give pattern index [itex]1,1,1,1,1,1[/itex]. These numbers correspond to the stabilizer lattice; for example, the stabilizer lattice for [itex]C_2 \wr D_5[/itex] (now writing fiber first, as appropriate for right actions), acting in the wreath product action on the subsets of our 5-set, starts with [itex]G=C_2 \wr D_5[/itex] at the top, which covers a conjugacy class of five index ten subgroups (the stabilizers of the five points), which covers two classes of index four subgroups (two distinct types of five pairs of points), which each cover two conjugacy classes of index two subgroups (two distinct types of five triples of points), which each covers a single conjugacy class of index two subgroups (five quadruples of points), which covers a conjugacy class consisting of a unique index two subgroup (the trivial subgroup). Note that [itex]10 \cdot 4 \cdot 2 \cdot 2 \cdot 2 = 320 = | C_2 \wr D_5 |[/itex]. In more complicated cases, one really requires a Hasse diagram to depict the stabfix lattice (modulo conjugacy). One thing I find useful is to attach to the edge from [itex]C[/itex] down to [itex]C^\prime[/itex] not only the stabilizer subgroup index, but also a symbol [itex]m/n[/itex] indicating that each subgroup belonging to class [itex]C[/itex] contains m subgroups belonging to class [itex]C^\prime[/itex], while each subgroup belonging to class [itex]C^\prime[/itex] is contained in n subgroups belonging to class [itex]C[/itex]. These integer ratios (not necessarily in lowest terms!) together with the Hasse diagram describe the incidence relations among the fixsets.
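If I read the pattern index here as the number of orbits of k-subsets of the 5-set for k = 0, ..., 5, the quoted sequences can be reproduced with a short Burnside count (my own sketch; [itex]F_{5:4}[/itex] and [itex]A_5[/itex] are omitted for brevity, but the same function applies to any permutation group given as a list of permutation tuples):

[code]
from itertools import combinations, permutations

def dihedral(n):
    rots = [tuple((i + k) % n for i in range(n)) for k in range(n)]
    refs = [tuple((k - i) % n for i in range(n)) for k in range(n)]
    return rots + refs

def orbit_count(G, k, n=5):
    """Number of orbits of k-subsets of an n-set under the permutation
    group G, by Burnside's lemma (average number of fixed k-subsets)."""
    subsets = [frozenset(c) for c in combinations(range(n), k)]
    fixed = sum(1 for g in G for s in subsets
                if frozenset(g[i] for i in s) == s)
    return fixed // len(G)

C5 = [tuple((i + k) % 5 for i in range(5)) for k in range(5)]
D5 = dihedral(5)
S5 = list(permutations(range(5)))

for name, G in [("C5", C5), ("D5", D5), ("S5", S5)]:
    print(name, [orbit_count(G, k) for k in range(6)])
# C5 [1, 1, 2, 2, 1, 1]
# D5 [1, 1, 2, 2, 1, 1]
# S5 [1, 1, 1, 1, 1, 1]
[/code]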

Fra said:
you thus get around the issue of defining probability directly in terms of measurements; instead one has to be able to infer a _distinguishable states_ microstructure from input, or the construct is still unclear (i.e. any "hidden" microstructures are not acceptable). This is, to the limit of my ignorance, not trivial either, in particular as the complexity and memory sizes vary - THIS is the key area where I think many interesting things lie. Oddly enough, I think this is very interesting, even for an amateur.

I am not sure I see what you are getting at.

Fra said:
Chris will probably get upset if this is not what he meant, so I explicitly declare that this does not necessarily have any relation to it. I still wear my ignorance with pride :)

Better to just say "if I understand you correctly" over and over, until we agree that you do understand correctly what I said. (This policy is symmetrical, of course.)

Fra said:
But it is more or less what I referred to in post 15. I don't follow Chris in all his steps. He is a mathematician, I am not. This alone explains the communication issues.

Well, my posts have only been sketches. There is a lot to say so if I tried to fill in all the background for a general audience and write out all the arguments, I'd quickly have a book (my notes on this stuff are in fact more extensive than my notes on gtr).
 
Last edited:
  • #33
A quick comment
Chris Hillman said:
People sometimes seem to assume I am making the whole thing up! :rolleyes:

I want to add quickly that, other misconceptions aside, in no way do I want you to think that I think that you were "making anything up" in any way - your posts tell me that you are very likely someone who knows a lot about the various formalisms relating to this, and it was equally extremely likely that you had something interesting to say!

What I wasn't sure about, though, was what bearing your remarks had on my interests (speaking for myself; I do not speak for any other participants in this thread). Your terminology was unusual to me, but then you wrote yourself that you are a mathematician by training and inclination - and I am not (so I figure I am pretty ignorant relative to your position). I wasn't sure if you used mathematical terms for something that I would label something else. Also, I don't know you from before, except for seeing your name adding sophisticated comments on GR in posts, which makes it harder to understand. You also appeared certain that I was wrong in my impressions, which was surprising since I never explained my application in detail; in any case it is quite different from Shannon's problem from my viewpoint. But whether this difference is due to ignorance or something more substantial is, by construction, impossible for me to know - or what is the difference? I figure everything may be due to my ignorance, which is also the very problem under examination (to me at least).

I tried to understand the meaning of your message, which is also why I asked about your opinions of QG and what your interests were, so I could get an image of you (the sender), so as to better guess the meaning :smile:

/Fredrik
 
  • #34
Fra said:
I want to add quickly that other misconceptions aside, in no way do I want you to think that I think that you were "making anything up"

Sorry, I didn't mean you, or anyone participating in this thread (unless I have encountered someone before under a different "handle").

I don't understand the rest of your post.
 
  • #35
Chris Hillman said:
It will certainly help to extract the pure math.
...
My point is that none of this need have anything to do with physics.

Yes, this makes sense and your point is well taken. I guess I might ask: extract the pure math of what? :) Perhaps this explains my angle in a nutshell. I can see many ways of coming up with mathematics unless I know how it relates to reality.

I'm sort of trying to apply this to what I perceive as reality; I am trying to find a relation between reality and a mathematical formalism. My focus in all my comments is on this latter part and, perhaps more importantly, on the relation between mathematics and science that in a sense also induces a reality on mathematics. I admit that this touches not only physics but also the philosophy of physics and scientific method.

Chris Hillman said:
Right, if you follow up my citation of the expository paper by Brian Hayes in American Scientist, and if you recall the fundamental orbit-stabilizer relation from elementary group theory (see any good book on group theory, for example Neumann, Stoy, and Thompson, Groups and Geometry), you should be able to see that the natural action by [itex]S_n[/itex] on an n-set X induces an action on the set of partitions of X, and the size of the orbit of [itex]\pi[/itex] is then
[tex]\frac{n!}{n_1! \, n_2! \dots n_r!}[/tex]
while the orbit itself is the coset space
[tex] S_n/\left( S_{n_1} \times S_{n_2} \dots \times S_{n_r} \right)[/tex]
where the partition is [itex]X = \cup_{j=1}^r A_j[/itex] with [itex]|A_j| = n_j[/itex]. Here, the stabilizer of [itex]\pi[/itex] is a subgroup of [itex]S_n[/itex] which is isomorphic to the external direct product [itex]S_{n_1} \times S_{n_2} \dots \times S_{n_r}[/itex]; in other words, the stabilizer is a Young subgroup, a kind of "internal direct product" of subgroups which are themselves symmetric groups.

...

together with the Hasse diagram describe the incidence relations among the fixsets.

I roughly get what you're saying and it all looks familiar, but I admit that, due to other things that I do, I am currently rusty with the formal algebra, so I cannot off the top of my head give a response at your level here. You are describing in a more formal way things that, as I understand them, really aren't that terribly complicated from a conceptual point of view.

From my point of view, "an energy partition" is quite fuzzy to start with, since what is energy? Unless the notion of energy is defined, the entire construct is compromised. I am trying to define all notions in terms of the information at hand, meaning that the part of history that has not dissipated from the observer's memory is at hand.

I am trying to find a way to attach everything, in principle, to feedback/experiment. This includes the spaces that the observables sit in. At this point, however, the ideas are not mature enough to be explained in the exact way that I think you would want in order to understand what I am saying. I have been appealing to your intuitive understanding to communicate.

Chris Hillman said:
I am not sure I see what you are getting at.

Chris Hillman said:
Well, my posts have only been sketches. There is a lot to say so if I tried to fill in all the background for a general audience and write out all the arguments, I'd quickly have a book (my notes on this stuff are in fact more extensive than my notes on gtr).

I'll point out that I am not starting by looking among existing formalisms; I try to use physical intuition, guided by my view of the scientific method, to identify/invent/learn the formalism I need. This method in fact reflects my philosophy, which I am at the same time testing.

I realize that you can't write a book on here, and you have already supplied plenty of information, so I won't ask for more. Also, I think that if I were to define my application in your preferred language, I would have to spend quite some time refreshing my group theory skills and, perhaps more importantly, making sure my own ideas mature, so I can describe them in the receiver's preferred basis.

I'll start by noting that my starting point is more like the "microstate approach". But I'm looking for how to define the notion of a distinguishable state. So far my best shot is to consider a boolean state to be the simplest possible observable: either two states are distinguished, or they are not.

But at the same time, an observable is constrained in complexity by the observer's memory. A light observer cannot fit a complex observation in memory at once. This suggests a non-unitary evolution. I also try to keep the relations all the way, so that the priors are induced in all higher constructs from first principles. This way there should never have to be a case of missing strategy.

In this picture I am also working on implementing mass in terms of information capacity, and trying to deduce inertial phenomena from the inertia already present in information thinking: namely, the mass of the prior imposes an inertia on ANY incoming information.

Anyway, I could expand on this too... but I'm working on it and it's hard to convey something that is not mature. I figure that, given what I suspect is your excellent expertise in GR as well as these information formalisms... you may already have some interesting reflections on this? I hope you note that the nature of my question is a philosophical one. To translate it into a strict mathematical problem, assumptions need to be made.

Many problems here...

How do we identify the microstructure of reality in a scientific way?
Is it strings? :)
What is it? And, more importantly, how can we infer a guess in the spirit of "minimum speculation"? This relates also to the various maximum entropy principles and entropy dynamics that other people work on. It's related, but I find it difficult to find a satisfactory answer.

I want to find an induction principle (like Ariel Caticha's) that by construction works by the minimum speculation principle and generates a guide for betting. This, together with the unavoidable element of uncertainty, I want to use to infer the laws of physics and probably also to guess the most likely, simplest possible AND distinguishable microstructure.

I offer my apologies in advance in case this message appears scrambled.

/Fredrik
 
Last edited:
