Risk analysis in model building?

  • Thread starter Fra
  • Start date
In summary, the conversation revolves around the necessity of doing risk analysis when creating a fundamental model of nature and physics. The discussion touches on the importance of flexibility and adaptation in scientific theories and the potential impact of erroneous theories on society and resources. The main question posed is whether a fundamental theory should include a mechanism for self-correction in response to feedback or if revisions should be left to human intervention. The conversation also delves into the philosophy of the scientific method and its role in the larger context of society and economics.

Should a new fundamental "theory/framework/strategy" be self-correcting?

  • Yes

    Votes: 2 50.0%
  • No

    Votes: 1 25.0%
  • Don't know

    Votes: 1 25.0%
  • The question is unclear

    Votes: 0 0.0%

  • Total voters
    4
  • #36
Sean Torrebadel said:
In other words, when faced with a crisis, a phenomenon or new discovery that is inconsistent with current science, scientists will not discard the original theory until it can be replaced.
Theories are never discarded. The very fact that a model became a scientific theory means that the model has demonstrated the ability to accurately and consistently predict the results of some class of experiments. When the theory fails to predict the result of a new kind of experiment, that doesn't invalidate the theory's proven success on the old kind of experiment.

For example, pre-relativistic mechanics works wonderfully for most experiments; in fact, its flaw wasn't even (originally) detected through experimental failure!

Yes, we now know that special relativity is a better description of space-time than pre-relativistic mechanics... but that doesn't change the fact that pre-relativistic mechanics gives correct results (within experimental error) for most experiments, and so it is still used in calculation. Furthermore, the success of pre-relativistic mechanics put a sharp constraint on the development in special relativity; in fact, many of the laws of special relativity are uniquely determined by the constraints "must be Lorentz invariant" and "pre-relativistic mechanics is approximately right".
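As a rough numerical illustration (my own sketch, not part of any formal argument): the Lorentz factor γ = 1/√(1 − v²/c²) measures how far relativistic kinematics departs from pre-relativistic mechanics. At everyday speeds it is indistinguishable from 1, which is exactly why the old mechanics remains a correct approximation within experimental error:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v: float) -> float:
    """Gamma = 1 / sqrt(1 - v^2/c^2); gamma ~ 1 means the
    pre-relativistic description is an excellent approximation."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At highway speed (30 m/s) the relativistic correction is of order 1e-14...
print(lorentz_factor(30.0))
# ...while at 0.9c the correction is large (gamma ~ 2.29),
# and pre-relativistic mechanics visibly fails.
print(lorentz_factor(0.9 * C))
```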
 
  • #37
Hurkyl said:
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space.

What I am looking for is a strategy where the search space is dynamical. Therefore I'm looking for a new "probabilistic" platform; this means we can have both generality and efficiency of selection. The tradeoff is time scales, but that we haven't discussed in this thread.

Hurkyl said:
Fredrik said:
This is a point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing, in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.

I don't see how any of this is relevant to the creation and evaluation of physical theories.

The way I see it, there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.

I envision a model which is scalable in complexity.

I think the evolution of the universe - big bang, particles etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions. My main motivation is physics, but what is physics? Where does physics end and biology start? I see no border, and I see no reason to create one.

Hurkyl said:
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.

The problem with that is that it gives the illusion of objectivity, while it still contains an implicit observer - whose experiments? And who is making the correlation?

There are no fundamentally objective experiments. For us, many experiments are of course effectively objective, but that is a special case.

Note that by "who" here I am not just referring to "what scientist"; I am referring to an arbitrary observer, human or not, small or large. When two observers are fairly similar it will be far easier to find common references, but like I said, I see this as a special case only.

Hurkyl said:
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)

Clearly, if the "memory" of the observer is extremely limited, so will its predictive power be. The whole point is that the task is how a given observer can make the best possible bet, given the information at hand!

To ask a more competent observer that has far more information/training what he would bet is irrelevant. The first observer is stuck with his reality, trying to survive given his limited control.

A "simple" observer, has limits to what it CAN POSSIBLY predict. An observer that has a very small memory, has a bounded predictive power no matter how "smart" his predictive engine is.

The question I set out to answer is how to find a generic description of the best possible bet. This is by definition a conditional subjective probability.

There are no objective probabilities. This is one of the major flaws (IMO) in the standard formalisms.

The complexity of the observer (which I've called relational capacity, but which can also be associated with memory, or energy) implies an inertia that prevents overfitting, in the case of an observer that has the relational power to do so - because new input is not adopted in its completeness; it is _weighted_ against the existing prior information, and the prior information has an inertia that resists changes.
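A minimal sketch of this weighting idea (my own toy model; `capacity` here is a hypothetical stand-in for the relational capacity): the observer's state is a running estimate whose effective sample size is capped, so new input is weighted against the stored prior, and a prior built from many observations has inertia against a single surprising outcome:

```python
class InertialEstimator:
    """Running estimate of an event frequency with bounded memory.

    `capacity` caps the effective number of remembered observations:
    the larger it is, the more inertia the prior has against new input."""
    def __init__(self, capacity: float, prior: float = 0.5):
        self.capacity = capacity
        self.estimate = prior
        self.count = 0.0

    def update(self, outcome: float) -> float:
        # Weight the new outcome against the stored prior information.
        self.count = min(self.count + 1.0, self.capacity)
        weight = 1.0 / self.count
        self.estimate += weight * (outcome - self.estimate)
        return self.estimate

small = InertialEstimator(capacity=2.0)    # low relational capacity
large = InertialEstimator(capacity=100.0)  # high relational capacity
for outcome in [1.0] * 50 + [0.0]:         # long run of 1s, then one surprise 0
    s, l = small.update(outcome), large.update(outcome)
# The small observer overreacts to the outlier (estimate drops to 0.5);
# the large one barely moves (estimate stays near 0.98).
print(s, l)
```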

Hurkyl said:
Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...

These are clearly bad theories -- but by what criterion are you going to judge them as bad?

Something that just changes colour (a boolean state) probably doesn't have the relational capacity to do better. Do you have a better idea? If your brain could store just one bit, could you do better? :)

This is really my point.

Also, observers are not necessarily stable; their assemblies are also dynamical in my vision, and an observer assembly that fails to learn will be destabilised or adapt - and eventually be replaced by more successful assemblies.

Hurkyl said:
Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.

This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?

Note that the discussion of a possible path to implementing a self-correcting formalism has now turned to my personal visions, which are under construction. This wasn't the main point of the thread, though. I'm currently using intuition to try to find a new formalism that satisfies what I consider to be some minimal requirements.

To reiterate the original question: are we here discussing ways to implement what I called "self-correcting models", or are we still discussing the relevance of the problem of finding one?

Needless to say, regarding the first question, while I'm working on ideas I have no answer yet.

/Fredrik
 
  • #38
Having read many other papers and also some of the books of the founders of QM, one thing stands out more than anything: how the concept of an objective probability space is adopted, with the sum of all possibilities always 100%. The question is: how do we make a _prior relation_ to a possibility, before we have any input?

I suspect you find some of my comments strange, but if I am to TRY to put the finger on what I think is the single most important issue that yields our differing opinions here, it would be the justification of Kolmogorov probability theory and how it's axiomatized into physics. This is IMO where most issues can be traced to, and this is also the origin of my alternative attempts here. It is not just the Bayesian vs frequentist issue; it's also the concept of how to relate to possibilities without prior relations - it doesn't add up to me, and that's it. This has been debated before, but I think it is the root of my ramblings here, and I try to develop it and find something better.

/Fredrik
 
  • #39
Fra said:
They way I see it there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.
...
I think the evolution of the universe - big bang, particles etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions.
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

Incidentally, "biology" is not synonymous with "biological evolution". Evolution is simply a theory that explains why we see the particular biological structures we see. There are already parallels to this in other fields -- e.g. we see stars because that is a relatively stable configuration of matter that 'beats out' many other configurations that matter could have taken.



I suspect you find some of my comments strange, but if I am to TRY to put the finger on what I think is the single most important issue that yields our differing opinions here, it would be the justification of Kolmogorov probability theory and how it's axiomatized into physics.
There are formulations of QM where probabilities are derived, not axiomatic.



Hurkyl said:
These are clearly bad theories -- but by what criterion are you going to judge them as bad?
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, because it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.



Mainly, in this discussion, I feel like I'm adrift at sea, without any direction.
 
  • #40
Hurkyl said:
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

It depends on what you mean by "why".

I can only speak for myself: I am a human, and for me all my thoughts are real, and I wish to understand what I perceive. It will make me more fit and give me more control of things. Part of understanding reality is also to understand the understanding itself, at least to me. When you see repeating patterns in nature - not only patterns in what nature looks like, but also patterns in how nature changes, and possibly how nature has been selected - I, as an observer perceiving such a sensation, have no choice but to deal with it.

But this is my subjective way. You do it your way without contradiction.

My view of how things can be so stable and apparently objective, despite my believing in such a fundamental mess, is that we have evolved together and been trained together. So it is not a coincidence that we have an effective common ground.

Hurkyl said:
There are formulations of QM where probabilities are derived, not axiomatic.

If you refer to the measurement algebras you mentioned in the other thread, I see them as close-to-equivalent starting points that may look better but still contain implicit nontrivial assumptions in mapping reality onto formalism - it's not really a radical reformulation. Nothing wrong in itself, though, because what can we do? I just do not see the world this way; it does not match what I see (for whatever reason).

How can you consider the result of measurements without considering how these results are stored and processed? The concept of measurement, usually represented by the operators, is IMO an idealisation (as is everything, even my suggestions, of course) that I don't find good enough.

There is more to it than measurement. There is the concept of retaining the results (information) obtained from measurements. What about this? What determines the retention capacity of an observer? What happens if a "measurement" yields more data than the observer can relate to? There are many issues here that I haven't seen resolved in "standard QM".

Hurkyl said:
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, because it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.

If I understand you right, you want me to give a generic selection criterion for models? Well, that is part of the problem I am trying to solve, and if you don't appreciate the problem from first principles, I think I'll have a hard time conveying the relevance of the question to you until I already have the answer, which will prove its power.

But the rough idea is that the self-organised structures called observers are themselves a representation of the model, which is exactly why I constrain the mathematical formalisms to more or less map onto the microstructure and the state of the selected observer. What I hope to find are the equations that answer the question of what makes a "bad model" bad: it is simply self-destructive.

So the answer to why a model is bad is not a simple one. It can only (as far as I understand) be answered in a dynamical context.

/Fredrik
 
  • #41
I should say I appreciate the discussion even though it seems like we disagree!

Hurkyl, just to get a perspective on your comments, I'm curious: what is your main area of interest, and what problems do you focus on?

/Fredrik
 
  • #42
Clarification.

Fra said:
What I hope to find are the equations that answer the question of what makes a "bad model" bad: it is simply self-destructive.

What I meant to say is that self-destruction is how I define "bad" or "wrong" in this context.

A good model is at minimum self-preserving, but possibly even self-organising and growing.

/Fredrik
 
