First, some things to set the right background.
- Like I said before, I share a good deal of the sentiment of Jaynes and Ariel. Their "mathematics" is essentially that of probability theory. I am reconstructing another formalism that is, IMO, a generalisation of probability theory as the answer to inductive inference.
- I share Ariel's vision that the laws of physics can be seen as rules of inference; however, I differ in that I complicate their picture by suggesting that the inference system itself is subject to evolution. Unless it's already obvious, the analogy goes like this...
Standard physics is usually formulated as an initial value problem: you have an initial state in a given configuration space, from which the future state follows from the laws of physics (equations of motion, the Schrödinger equation, etc.). Note that this is a deductive logic.
premise ~ initial conditions, or the current state
deductive system ~ laws of physics
In the inductive inference view, the idea is that there exists no certain deductive system, only an inductive one. But you can still have different views on induction.
For example, unpredictable phenomena like QM, where you cannot deduce the outcome of an experiment, can then be said to be a form of induction; but one can just as easily describe this as a form of probabilistic deduction, or almost equivalently, a deductive probability.
So QM still fits the deductive inference model if we accept that the states of matter are only statistical.
But we have implicitly used probability theory and statistics here quite uncritically. As I've argued several times, and like Smolin argued in his argument "against timeless laws" - which in my opinion is also an argument against deductive inference - this model is not universally sensible. It makes sense when we study a small subsystem, but not when a small system studies its own environment.
So, the idea is that the laws of physics would be identified as the laws of inference. But since I reject the validity of standard probability, what do I suggest instead?
I suggest replacing the flat usage of probability theory with a reconstruction of an intrinsic measure that is really a way to count evidence. Actually, this is the original way Jaynes argued: he pondered how to count evidence and how to represent degrees of belief. Philosophical arguments were translated into axioms, which ultimately led to a formalism that is standard probability theory. This is in his book "Probability Theory: The Logic of Science".
IMO, he makes some mistakes that make me reject the reconstruction. He introduced the continuum too lightly; that's the first mistake, and it proves to be a key one.
tom.stoer said:
You have to define what a rule (or a law or whatever) looks like, and you have to define what the negotiation between rules (which are mathematical entities) looks like.
Can you explain what this approach could look like mathematically?
What are your symbols, relations, axioms etc.? What does a rule look like?
To stick with terminology close to probability theory (to show how close this is to it), I replace "probability distribution" with "microstate", and "the space of distributions" with "microstructure".
I work with finite sets of natural numbers to represent microstates.
Every microstructure has a complexity number, which I call M, and an event-space volume k, which is the number of distinguishable types of evidence/events.
Each microstate is thus a set of k natural numbers (evidence counts) that sum up to the complexity M.
The microstructure is the set of all such sets with complexity M.
From this point, what I have is a discrete probability, where not only is the event space finite but - more importantly - the probability itself is discrete/quantized, simply because there is no finite representation of the continuum.
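Just to make the bookkeeping concrete, here is a toy sketch of how such microstates and microstructures could be represented (the function names and numbers are purely illustrative, not part of the formalism itself):

```python
# Toy sketch (illustrative only): a microstate is a tuple of k natural
# numbers (evidence counts per distinguishable event type) summing to the
# complexity M; the microstructure is the set of all such tuples.
from itertools import product
from fractions import Fraction

def microstructure(k, M):
    """All k-tuples of natural numbers that sum to the complexity M."""
    return [n for n in product(range(M + 1), repeat=k) if sum(n) == M]

def frequencies(microstate):
    """The discrete probability induced by a microstate: every value is a
    multiple of 1/M, so the 'probability' itself is quantized."""
    M = sum(microstate)
    return tuple(Fraction(n, M) for n in microstate)

# Example: k = 3 distinguishable event types, complexity M = 4.
states = microstructure(k=3, M=4)
print(len(states))              # 15 microstates in this microstructure
print(frequencies((2, 1, 1)))   # (1/2, 1/4, 1/4) -- quantized in steps of 1/4
```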
Similarly, by combinatorial considerations, you can define a natural way to measure one microstate relative to another. If you calculate the probability of one microstate, taking another microstate as the prior, then in the expression you can separate the information divergence and the complexity of the microstate in an interesting way.
You get P = w e^{-M S_{KL}}, where S_KL is the information divergence between the states, which is independent of complexity. w is a factor that scales with complexity, and w -> 1 asymptotically as complexity -> infinity.
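Roughly, one way to picture this decomposition is with the plain multinomial count (used here only as a simplified stand-in for the full combinatorial construction): the probability of the counts of one microstate, given the frequencies of a prior microstate, factors into e^{-M S_KL} times a complexity-dependent prefactor w.

```python
# Illustration only: multinomial probability of counts n under prior
# frequencies q, factored into exp(-M * S_KL) and a prefactor w.
import math

def multinomial_prob(n, q):
    """Exact probability of the counts n given prior frequencies q."""
    coeff = math.factorial(sum(n))
    for n_i in n:
        coeff //= math.factorial(n_i)
    return coeff * math.prod(q_i ** n_i for q_i, n_i in zip(q, n))

def kl_divergence(f, q):
    """Information divergence S_KL(f || q); independent of complexity."""
    return sum(f_i * math.log(f_i / q_i) for f_i, q_i in zip(f, q) if f_i > 0)

n = (6, 3, 1)                       # a microstate: evidence counts, M = 10
q = (0.5, 0.3, 0.2)                 # prior microstate's frequencies
M = sum(n)
f = tuple(n_i / M for n_i in n)     # frequencies of the microstate n

P_exact = multinomial_prob(n, q)
S_KL = kl_divergence(f, q)
w = P_exact / math.exp(-M * S_KL)   # complexity-dependent prefactor
print(P_exact, S_KL, w)
```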
This formula is very simple, but it is the seed of what will later generate physical actions. The least action principle is, in this view, simply the principle of maximum probability, except that in my view the "probability" is an INSIDE counting of evidence. In the case of high complexity, this also coincides with the principle of minimum information divergence (minimum speculation), but this is not generally the case for LOW complexity.
One postulate will be, for example, that the expected action of an inference system is the one that maximises the transition probability - this corresponds to the least action principle.
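Continuing the same toy picture (again with the multinomial form as a stand-in), the selection rule would look roughly like this: among candidate successor microstates, pick the one with maximum transition probability; at high complexity this tends to coincide with picking minimum S_KL, while at low complexity the prefactor w can in principle matter.

```python
# Toy "expected action" selection among candidate microstates (illustrative).
import math

def transition_prob(n, q):
    """Multinomial stand-in for the transition probability of counts n
    given prior frequencies q."""
    rest, coeff = sum(n), 1
    for n_i in n:
        coeff *= math.comb(rest, n_i)
        rest -= n_i
    return coeff * math.prod(q_i ** n_i for q_i, n_i in zip(q, n))

def divergence(n, q):
    """S_KL between the frequencies of the microstate n and the prior q."""
    M = sum(n)
    return sum((n_i / M) * math.log(n_i / (M * q_i)) for n_i in n if n_i > 0)

prior = (0.5, 0.3, 0.2)
candidates = [(3, 1, 0), (2, 1, 1), (2, 2, 0), (1, 2, 1)]   # complexity M = 4

best_by_prob = max(candidates, key=lambda n: transition_prob(n, prior))
best_by_div  = min(candidates, key=lambda n: divergence(n, prior))
print(best_by_prob, best_by_div)    # here both give (2, 1, 1)
```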
But this is just the starter. Next, I consider that an observer actually consists of sets of microstructures (sets of sets) that are related by means of transformations, which are really a form of recoding of information. Thus information (and evidence counts) can flow between the "spaces", and an observer complex is in fact a system of communicating spaces constructed in a special way.
THIS is where interesting things will happen, because new logic appears here that does not comply with standard probability. For example, quantum logic should be explainable as a result of information equilibrating between the dual spaces, and their relation would explain how to make sense of negotiations of X and P when they belong to different spaces.
But there are a lot of open things here for sure.
But so far we haven't even gotten to the evolution part yet. My idea is to start by describing what ANY inside view should look like; this is possible since the options are finite if you start at the low complexity limit.
Then one can see which combinations of observer complexes can coexist, so to speak.
Note that I start with a distinguishability index - not a spacetime. Space should also emerge as a preferred index structure; certainly I expect the dimensionality to be related to the stability of the complexes. But I am not there yet :) The problem is that several problems are related, so it really doesn't quite work to solve one problem at a time; this is why even my very approach to this is evolutionary - I work on all things at once and make broad but slower progress.
tom.stoer said:
You have to define how "rules act on rules": you have a negotiation process for which you need rules; "rules acting on rules" can therefore be negotiation between physical laws, but it can also be the evolution of some physical entity.
How and when do which rules interact? How are two (or three? four?) interacting rules selected? How do they "come together"? What does the DNA look like? What do mutation, crossing-over and spawning of new rules look like?
How do you count rules or members of classes of rules in order to decide which rules are successful = dominant?
What will our universe be? One master rule or a collection of the most successful rules?
process - which then becomes part of it and is subject to negotiation as well? What are your initial conditions?
A lot of questions, all justified. But each answer would be long, and still incomplete due to the nature of the incomplete progress... I need to go to sleep now. Maybe the previous comments also helped with some of the latter questions?
---
Some final comments on how the reconstructed inference system would differ from the standard probabilistic and entropic one of Jaynes and Ariel:
Each distribution would come with a natural kind of "mass"; thus information has mass.
This serves as inertia when two such distributions are forced into negotiation.
This inertia also serves a purpose in the process of increasing the complexity of the observer - this, I think, will relate to gravity (certainty attracts more certainty).
More importantly, the reconstruction comes with a natural information measure; there is no need to postulate a specific choice of entropy. The natural information measure comes in the form of a probability of probability in the framework I reconstruct.
But the major point is that the measure I reconstruct is not to be interpreted as a frequentist thing, nor does it rely on ensembles of systems (say ensembles of universes); it is simply an inside COUNT of evidence. You need no ensemble or repeatable experiments. The new concept is a proper evidence count.
/Fredrik