Is Risk Analysis Essential in Developing Fundamental Models of Physics?

  • Thread starter: Fra

Summary
The discussion centers on the necessity of risk analysis in developing fundamental models of physics, emphasizing the importance of self-correction mechanisms within these theories. Participants argue that while models can be wrong, the ability to adapt and revise them based on feedback is crucial for progress. The conversation touches on the implications of erroneous models, particularly in fields like climate science, where incorrect predictions can have significant consequences. The idea of integrating a systematic approach to learning from mistakes is highlighted as essential for the evolution of scientific theories. Ultimately, the dialogue reflects on the philosophical aspects of the scientific method and the need for theories to be responsive to new information.

Should a new fundamental "theory/framework/strategy" be self-correcting?

  • Yes

    Votes: 2 50.0%
  • No

    Votes: 1 25.0%
  • Don't know

    Votes: 1 25.0%
  • The question is unclear

    Votes: 0 0.0%

  • Total voters
    4
  • #31
Sean Torrebadel said:
Perhaps I was just venting my frustration about quantum physicists, certainly not all. The greatest problem I have with the foundation is the Heisenberg uncertainty principle: the idea that measuring something changes the thing being measured. This is a true statement. It is false logic, however, to state that because this is true we should give up trying to visualize the electron as a particle with a trajectory. Instead, I am told that I need to see the electron in a linear sense, over a period of time, as a probability cloud.
There are things in nature that reveal themselves, that do not require us to do anything but record what they are. For instance, electrons are involved in the mechanism for generating atomic spectra. In that regard, I do not need to measure the electrons themselves so much as what they have produced. Working backwards, I should like to think that such phenomena speak for themselves. And therein lies the second problem I have with QT. The spectra of the elements are specific, clear, concise, and essentially a blueprint or image that can be interpreted to infer the structure of the atoms. Logic dictates that if the spectra are so well structured, the atoms themselves are also clearly defined. Accordingly, I have a hard time visualizing the quantum atom, its probabilities and such, as a true reflection of atomic structure. Instead, I see it as exactly what it is: a highly successful theory that approximates the behaviour of electrons. It is in my mind a sophisticated, time-consuming probability theory. So I'm not denying the theory, I just believe that there is a better way...

I see. I kind of smelled this view from your other comments.

I don't share your view and disagree with a lot of your reasoning, but I still see your perspective.

I think you might want to read Heisenberg's philosophy book, "Physics and Philosophy". It's a classic, and I'm sure you can find it on Amazon or any other large bookshop. It's really an informal philosophical book; from what I remember there isn't a single formula in it. It discusses only the background philosophy and interpretations.

Still, I also have issues with QM, but from what I can read out of your comments they are of a different nature. I have no problem leaving the deterministic world of classical mechanics, but given that, there are OTHER issues with QM, which rather suggest that reality is even MORE weird than normal QM says, taking us even further away from classical mechanics.

/Fredrik
 
  • #32
Sean Torrebadel said:
I have W. Heisenberg's 1953 book "Nuclear Physics". I didn't know that he wrote one about "Physics and Philosophy". If it's anything like the one I have, it will be a good read. I already have a list of books to acquire re: QM. Dirac's is on the list.

In particular, the first chapter (or chapters?) of Dirac's book is interesting from the philosophical perspective, because that's where the principles and foundations are laid out. Then, of course, he works out the implications and applications in the other chapters.

/Fredrik
 
  • #33
I don't think the above books are necessarily the best pedagogical choice to "learn how to apply QM". Their main value IMO is first of all that they are written by the founders of the theory - making it more interesting to analyse the wording - and that they elaborate the foundations and philosophy.

/Fredrik
 
  • #34
Sean Torrebadel said:
What about the immune system? Isn't that a self-adapting mechanism?

When I mentioned people working on other methods, I was mainly referring to physics.

There seems to be a small group interested in these things, but it seems small compared to other groups, like the string group.

I'm no expert on that, but normally the immune system is considered to have two parts: the innate and the adaptive immune system. Of course, in the big picture, both the innate and the adaptive immune system have been selected during evolution too.

As you probably know better than me, I think this applies to many cellular regulations. The regulation takes place on several parallel levels, from transcription to various enzyme regulations. There are both short-term and long-term responses, each with different pros and cons, so they complement each other. And even the adaptive strategies are adapting, since the adaptive strategy can itself adapt.

/Fredrik
 
  • #35
Fra said:
This is part of my issues with, for example, the string framework. Generating too many options with no matching selection strategy clearly stalls progress, and this is IMO related to "overfitting" in a certain sense, or to the balance between option generation and option selection.
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space. If you fail to prove such a claim, then you have a strong lead.

I know very few actual details about the general programme of string theory, so I cannot comment directly on it.




This is a fair point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.
I don't see how any of this is relevant to the creation and evaluation of physical theories.

Another point may also be: why would one expect there exists an objective method to determine what is good?
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.





One of the constraints is that an observer in general does not have infinite memory capacity. So the problem becomes one of remodelling the stored data, and decision-making on what parts of the data to release. It seems natural to think that the released data would be what the observer thinks is least important.
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)

Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...

These are clearly bad theories -- but by what criterion are you going to judge them as bad?


Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.

This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?
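This lookup-table "theory" can be made concrete with a toy sketch (my own illustration, hypothetical names): it memorizes every question/answer pair it sees, so it satisfies the "adapts after every mistake" criterion perfectly, while predicting nothing about questions it has never seen.

```python
# Toy sketch of the "database theory": it memorizes every experiment
# it has seen, so it never gets a repeated question wrong, yet it has
# no predictive power at all for questions it has not seen before.

class LookupTableTheory:
    def __init__(self):
        self.memory = {}  # question -> recorded answer

    def predict(self, question):
        # Return the memorized answer, or None for a new question.
        return self.memory.get(question)

    def adapt(self, question, correct_answer):
        # "Self-correction": store the right answer for next time.
        self.memory[question] = correct_answer

theory = LookupTableTheory()
assert theory.predict("mass of electron?") is None      # no prediction yet
theory.adapt("mass of electron?", "9.1e-31 kg")
assert theory.predict("mass of electron?") == "9.1e-31 kg"  # right the second time
assert theory.predict("mass of proton?") is None        # still no generalization
```

The sketch makes the criticism precise: "adapts to new results" alone does not distinguish this table from a genuine theory, so some further criterion is needed.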
 
  • #36
Sean Torrebadel said:
In other words, when faced with a crisis, a phenomenon or new discovery that is inconsistent with current science, scientists will not discard the original theory until it can be replaced.
Theories are never discarded. The very fact that a model became a scientific theory means that the model has demonstrated the ability to accurately and consistently predict the results of some class of experiments. When the theory fails to predict the result of a new kind of experiment, that doesn't invalidate the theory's proven success on the old kind of experiment.

For example, pre-relativistic mechanics works wonderfully for most experiments; in fact, its flaw wasn't even (originally) detected through experimental failure!

Yes, we now know that special relativity is a better description of space-time than pre-relativistic mechanics... but that doesn't change the fact that pre-relativistic mechanics gives correct results (within experimental error) for most experiments, and so it is still used in calculation. Furthermore, the success of pre-relativistic mechanics put a sharp constraint on the development in special relativity; in fact, many of the laws of special relativity are uniquely determined by the constraints "must be Lorentz invariant" and "pre-relativistic mechanics is approximately right".
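As a numerical illustration of that last constraint (my own sketch, not part of the original post): the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) is indistinguishable from 1 at everyday speeds, which is why pre-relativistic mechanics keeps giving correct results within experimental error.

```python
import math

def gamma(v, c=299_792_458.0):
    """Lorentz factor; gamma ~ 1 means pre-relativistic mechanics suffices."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A passenger jet (~250 m/s): the relativistic correction is
# roughly a few parts in 10^13 - far below experimental error
# for most mechanics experiments.
print(gamma(250.0) - 1.0)

# Half the speed of light: the correction is large (gamma ~ 1.155),
# and pre-relativistic mechanics visibly fails.
print(gamma(0.5 * 299_792_458.0))
```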
 
  • #37
Hurkyl said:
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space.

What I am looking for is a strategy where the search space is dynamical - therefore I'm looking for a new "probabilistic" platform. This means that we can have both generality and efficiency of selection; the tradeoff is time scales, but we haven't discussed that in this thread.

Hurkyl said:
Fredrik said:
This is a fair point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.

I don't see how any of this is relevant to the creation and evaluation of physical theories.

The way I see it, there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.

I envision a model which is scalable in complexity.

I think the evolution of the universe - the big bang, particles, etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions. My main motivation is physics, but what is physics? Where does physics end and biology start? I see no border, and I see no reason to create one.

Hurkyl said:
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.

The problem with that is that it gives the illusion of objectivity, while it still contains an implicit observer - whose experiments? And who is making the correlation?

There are no fundamentally objective experiments. For us, many experiments are of course effectively objective, but that is a special case.

Note that by "who" here I am not just referring to "which scientist"; I am referring to an arbitrary observer, human or not, small or large. When two observers are fairly similar it will be far easier to find common references, but like I said, I see this as a special case only.

Hurkyl said:
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)

Clearly, if the "memory" of the observer is extremely limited, so will its predictive power be. The whole point is that the task is how a given observer can make the best possible bet, given the information at hand!

To ask a more competent observer, who has far more information/training, what he would bet is irrelevant. The first observer is stuck with his reality, trying to survive given his limited control.

A "simple" observer, has limits to what it CAN POSSIBLY predict. An observer that has a very small memory, has a bounded predictive power no matter how "smart" his predictive engine is.

The question I set out to answer is how to find a generic description of the best possible bet. This is by definition a conditional subjective probability.

There are no objective probabilities. This is one of the major flaws (IMO) in the standard formalisms.

The complexity of the observer (which I've called relational capacity, but which can also be associated with memory, or energy) implies an inertia that prevents overfitting, in the case of an observer that has the relational power to overfit: new input is not adopted in its completeness, it is _weighted_ against the existing prior information, and the prior information has an inertia that resists changes.
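A minimal sketch of this "inertia" idea (my own illustration, using an exponential moving average; the class name and weight parameter are hypothetical): new input does not overwrite the stored prior but is blended with it, so a single surprising data point cannot erase accumulated experience.

```python
class InertialObserver:
    """Toy bounded-memory observer: its single stored estimate is
    updated by weighting new input against the existing prior."""

    def __init__(self, weight=0.1):
        self.estimate = None
        self.weight = weight  # how much a new observation can move the prior

    def observe(self, value):
        if self.estimate is None:
            self.estimate = value  # no prior yet: adopt the first input
        else:
            # Prior "inertia": blend the new value in rather than replace.
            self.estimate += self.weight * (value - self.estimate)
        return self.estimate

obs = InertialObserver(weight=0.1)
for v in [10.0] * 20:
    obs.observe(v)      # the estimate settles at 10.0
obs.observe(100.0)      # one outlier...
print(obs.estimate)     # ...moves the estimate only to 19.0, not to 100.0
```

The blending weight plays the role of the "relational capacity" above: a small weight gives a heavy, overfitting-resistant prior, while a weight of 1 reproduces the memoryless colour-changer that adopts every new input wholesale.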

Hurkyl said:
Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...

These are clearly bad theories -- but by what criterion are you going to judge them as bad?

Something that just changes colour (a boolean state) probably doesn't have the relational capacity to do better. Do you have a better idea? If your brain could store just one bit, could you do better? :)

This is really my point.

Also, observers are not necessarily stable; their assemblies are also dynamical in my vision, and an observer assembly that fails to learn will be destabilised, or adapt, and eventually be replaced by more successful assemblies.

Hurkyl said:
Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.

This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?

Note that the discussion of a possible path to implementing a self-correcting formalism has now turned to my personal visions, which are under construction. This wasn't the main point of the thread, though. I'm currently using intuition to try to find a new formalism that satisfies what I consider to be some minimal requirements.

To reiterate the original question: Are we here discussing ways to implement what I called "self correcting models", or are we still discussing the relevance of the problem of finding one?

Needless to say, regarding the first question, I declare that while I'm working on ideas, I have no answer yet.

/Fredrik
 
  • #38
Having read many papers and also some of the books of the founders of QM, one thing stands out more than anything: how the concept of an objective probability space is adopted, and how the sum of all possibilities is always 100%. The question is how to make a _prior relation_ to a possibility before we have any input.

I suspect you find some of my comments strange, but if I am to TRY to put the finger on what I think is the single most important issue that yields our differing opinions here, it would be the justification of Kolmogorov probability theory and how it is axiomatized into physics. This is IMO where most issues can be traced to, and it is also the origin of my alternative attempts here. It is not just the Bayesian vs frequentist issue; it's also the concept of how to relate to possibilities without prior relations. It doesn't add up to me, and that's it. This has been debated before, but I think it is the root of my ramblings here, and I try to develop it and find something better.

/Fredrik
 
  • #39
Fra said:
The way I see it, there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.
...
I think the evolution of the universe - the big bang, particles, etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions.
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

Incidentally, "biology" is not synonymous with "biological evolution". Evolution is simply a theory that explains why we see the particular biological structures we see. There are already parallels to this in other fields -- e.g. we see stars because that is a relatively stable configuration of matter that 'beats out' many other configurations that matter could have taken.



I suspect you find some of my comments strange, but if I am to TRY to put the finger on what I think is the single most important issue that yields our differing opinions here, it would be the justification of Kolmogorov probability theory and how it is axiomatized into physics.
There are formulations of QM where probabilities are derived, not axiomatic.



Hurkyl said:
These are clearly bad theories -- but by what criterion are you going to judge them as bad?
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, becuase it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.



Mainly, in this discussion, I feel like I'm adrift at sea, without any direction.
 
  • #40
Hurkyl said:
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

It depends on what you mean by "why".

I can only speak for myself. I am a human, and for me all my thoughts are real, and I wish to understand what I perceive. It will make me more fit and give me more control of things. Part of understanding reality is also understanding the understanding itself, at least to me. When you see repeating patterns in nature - not only patterns in what nature looks like, but also patterns in how nature changes, and possibly how nature has been selected - I, as an observer perceiving such a sensation, have no choice but to deal with it.

But this is my subjective way. You do it your way without contradiction.

My view of why things are so stable and apparently objective, despite my believing in such a fundamental mess, is that we have evolved together and been trained together. So it is not a coincidence that we have an effective common ground.

Hurkyl said:
There are formulations of QM where probabilities are derived, not axiomatic.

If you refer to the measurement algebras you mentioned in the other thread, I see them as close-to-equivalent starting points that may look better but still contain implicit nontrivial assumptions to map reality onto formalism - it's not really a radical reformulation. Nothing wrong in itself, though, because what can we do? I just do not see the world this way; it does not match what I see (for whatever reason).

How can you consider the result of measurements without considering how these results are stored and processed? The concept of measurement, usually represented by the operators, is IMO an idealisation (as is everything, even my suggestions, of course) that I don't find good enough.

There is more than measurement. There is the concept of retaining the results (information) obtained from measurements. What about this? What determines the retention capacity of an observer? What happens if a "measurement" yields more data than the observer can relate to? There are many issues here that I haven't seen resolved in "standard QM".

Hurkyl said:
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, because it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.

If I understand you right, you want me to give a generic selection criterion for models? Well, that is part of the problem I am trying to solve, and if you don't appreciate the problem from first principles, I think I'll have a hard time conveying to you the relevance of the question until I already have the answer that will prove its power.

But the rough idea is that the self-organised structures called observers are themselves a representation of the model, which is exactly why I constrain the mathematical formalisms to more or less map onto the microstructure and the state of the selected observer. What I hope to find are the equations that yield an answer to the question of what a "bad model" is: one that is simply self-destructive.

So the answer to why a model is bad is not a "simple one". It can only (as far as I understand) be answered in a dynamical context.

/Fredrik
 
  • #41
I should say I appreciate the discussion even though it seems like we disagree!

Hurkyl, just to get a perspective on your comments, I'm curious: what's your "main area of interest", and what problems do you focus on?

/Fredrik
 
  • #42
Clarification.

Fra said:
What I hope to find are the equations that yield an answer to the question of what a "bad model" is: one that is simply self-destructive.

What I meant to say is that self-destruction is how I define "bad" or "wrong" in this context.

A good model is at minimum self-preserving but also possibly even self-organising and growing.

/Fredrik
 
