Hurkyl said:
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space.
What I am looking for is a strategy where the search space itself is dynamical. I'm therefore looking for a new "probabilistic" platform: one that gives us both generality and efficiency of selection. The tradeoff is time scales, but that is something we haven't discussed in this thread.
Hurkyl said:
Fredrik said:
This is a fair point, but it can also be elaborated.
If we consider how nature actually works, I see no such distinction. Life evolves; there is no separate "training period". The "training" is ongoing, in the form of natural selection and self-correction. Of course an organism that is put into a twisted environment will learn how to survive there.
I don't see how any of this is relevant to the creation and evaluation of physical theories.
The way I see it, there is only one nature. In my vision, physics, chemistry and ultimately biology must fit into the same formalism, or something is wrong.
I envision a model which is scalable in complexity.
I think the evolution of the universe - big bang, particle formation, etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions. My main motivation is physics, but what is physics? Where does physics end and biology start? I see no border, and I see no reason to create one.
Hurkyl said:
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.
The problem with that is that it gives the illusion of objectivity while still containing an implicit observer - whose experiments? And who is making the correlation?
There are no fundamentally objective experiments. For us, many experiments are of course effectively objective, but that is a special case.
Note that by "who" here I am not just referring to "which scientist"; I am referring to an arbitrary observer, human or not, small or large. When two observers are fairly similar it will be far easier to find common references, but as I said, I see this as a special case only.
Hurkyl said:
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)
Clearly, if the "memory" of the observer is extremely limited, so will its predictive power be. The whole point is that the task is for a given observer to make the best possible bet, given the information at hand!
To ask a more competent observer, one with far more information and training, what he would bet is irrelevant. The first observer is stuck with his own reality, trying to survive with his limited control.
A "simple" observer has limits to what it CAN POSSIBLY predict. An observer with a very small memory has a bounded predictive power, no matter how "smart" its predictive engine is.
The question I set out to answer is how to give a generic description of the best possible bet. This is by definition a conditional, subjective probability.
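To make the memory point concrete, here is a toy sketch of my own (the predictor, numbers and names are purely illustrative, not part of any formalism): two observers bet on a biased binary source, differing only in how many past outcomes they can remember. The one-slot observer's accuracy is bounded below the larger observer's, no matter how it uses its memory.

```python
import random

def predict_stream(stream, memory_size):
    """Toy bounded-memory observer: remembers only the last
    `memory_size` outcomes and bets on the majority among them."""
    memory, correct = [], 0
    for outcome in stream:
        # Bet on the most frequent remembered symbol (default guess: 0).
        guess = max(set(memory), key=memory.count) if memory else 0
        correct += (guess == outcome)
        memory.append(outcome)
        if len(memory) > memory_size:
            memory.pop(0)  # limited capacity: the oldest outcome is forgotten
    return correct / len(stream)

random.seed(0)
# A source biased toward 1 with probability 0.8.
stream = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
small = predict_stream(stream, memory_size=1)
large = predict_stream(stream, memory_size=100)
print(small, large)
```

The larger memory lets the observer track the source's bias, while the one-bit memory can only echo the last outcome; its bet is structurally worse, regardless of the cleverness of the predictive engine.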
There are no objective probabilities. This is one of the major flaws (IMO) in the standard formalisms.
The complexity of the observer (which I've called relational capacity, but which can also be associated with memory, or energy) implies an inertia that prevents overfitting, at least for an observer that has the relational capacity to overfit in the first place: the new input is not adopted in its completeness, it is _weighted_ against the existing prior information, and the prior information has an inertia that resists changes.
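The weighting I have in mind can be sketched roughly like this (my own toy reading of the idea, with made-up names and numbers, not a worked-out formalism): the prior carries a weight that acts as inertia, so a single surprising observation moves a heavy prior only slightly, while a light prior overfits to it.

```python
def update_belief(prior_mean, prior_weight, observation):
    """Toy inertia-weighted update: the new input is not adopted
    wholesale but averaged against the prior, whose accumulated
    weight resists change."""
    new_mean = (prior_weight * prior_mean + observation) / (prior_weight + 1)
    return new_mean, prior_weight + 1

# A heavy prior (large relational capacity / long history) barely moves
# on one surprising observation; a light prior jumps toward it.
heavy, _ = update_belief(prior_mean=0.5, prior_weight=100, observation=1.0)
light, _ = update_belief(prior_mean=0.5, prior_weight=1, observation=1.0)
print(heavy, light)
```

The point of the sketch is only the asymmetry: the same observation shifts the lightly-weighted belief far more than the heavily-weighted one, which is the "inertia against overfitting" I mean.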
Hurkyl said:
Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...
These are clearly bad theories -- but by what criterion are you going to judge them as bad?
Something that just changes colour (a boolean state) probably doesn't have the relational capacity to do better. Do you have a better idea? If your brain could store just one bit, could you do better? :)
This is really my point.
Also, observers are not necessarily stable; their assembly is also dynamical in my vision. An observer assembly that fails to learn will be destabilised, or it will adapt - and eventually be replaced by more successful assemblies.
Hurkyl said:
Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.
This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?
Note that the discussion of a possible path to implementing a self-correcting formalism has now moved on to my personal vision, which is still under construction. That wasn't the main point of the thread, though. I'm currently using intuition to try to find a new formalism that satisfies what I consider to be some minimal requirements.
To reiterate the original question: are we here discussing ways to implement what I called "self-correcting models", or are we still discussing the relevance of the problem of finding one?
Needless to say, regarding the first question: while I'm working on ideas, I have no answer yet.
/Fredrik