Is Risk Analysis Essential in Developing Fundamental Models of Physics?

  • Thread starter: Fra
AI Thread Summary
The discussion centers on the necessity of risk analysis in developing fundamental models of physics, emphasizing the importance of self-correction mechanisms within these theories. Participants argue that while models can be wrong, the ability to adapt and revise them based on feedback is crucial for progress. The conversation touches on the implications of erroneous models, particularly in fields like climate science, where incorrect predictions can have significant consequences. The idea of integrating a systematic approach to learning from mistakes is highlighted as essential for the evolution of scientific theories. Ultimately, the dialogue reflects on the philosophical aspects of the scientific method and the need for theories to be responsive to new information.

Should a new fundamental "theory/framework/strategy" be self-correcting?

  • Yes

    Votes: 2 50.0%
  • No

    Votes: 1 25.0%
  • Don't know

    Votes: 1 25.0%
  • The question is unclear

    Votes: 0 0.0%

  • Total voters
    4
Fra
[I wasn't sure where to put this thread; it might as well have gone in philosophy of science, but I chose to put it here because it seems more targeted.]

I am curious about how different people view the necessity of doing a risk analysis when trying to make a fundamental model of nature, and of physics in particular.

Just to illustrate what I mean, here are some examples.

Trying to model reality is by its nature a risky business - we are often wrong, and what we "knew" was right was LATER proved wrong and needed revision. But consider that our life depends on it - we need an accurate model simply to survive. Then it would seem unwise to have no risk analysis. It also seems crucial to test the most promising ideas first, and leave the less likely ones for later.

To be wrong is not fatal, as long as we have the capability to respond and revise our foundations promptly. Experience from biology shows that flexibility and adaptation are power, so that when, for reasons beyond your control, you are thrown into a new environment, there are two choices: you adapt and survive, or you die because you fail to adapt and your old behaviour was in conflict with the new environment.

Historically, foundations have often been revised or refined, so it still seems that flexibility is important.

So, when making new models, which apparently take many, many years and a lot of both financial and intellectual investment... should we worry about what happens IF our models prove wrong? Obviously when they are wrong they are discarded, but then what? Does this falsification show us HOW the old model should be modified, or does it just tell us it's wrong and leave us without a clue?

The basic question is: should a new fundamental theory include a mechanism for self-correction, in a way that is not pre-determined but rather guided by input? So that when it's wrong (because we are wrong all the time) we not only know so, but also have a mechanism to induce a new correction?

/Fredrik
 
I moved it to philosophy of science, as this really doesn't have much to do with "BTSM" theories, which have absolutely no technological or medical impact at all. The only "risk" there is a career risk, and that's not necessarily related to the correctness of the theory, but rather to its popularity at the moment of important career turning points :biggrin:

Your point is actually much more relevant to phenomenological model building, such as is more the case in engineering disciplines or the medical sciences, where a wrong model might lead to wrong technological, technico-political or medical decisions, which are the only kind of "risk consequences" of scientific activities. Prime example: anthropogenic climate change. There, errors in modelling and conclusions can have severe risk impacts. One or another erroneous concept in string theory will probably have almost zero impact, except maybe on the careers of some of the people involved.
 
Thanks for your comments.

Perhaps my most fundamental point didn't come out. My main reason for elaborating on "risk analysis" is not just the risks to humanity if, say, string theory is wrong.

What I mean to find out is whether a fundamental theory ought to have the property of self-correction in response to feedback (since IMO a theory is something that is emergent). I.e., should the theory itself contain a method or strategy of self-revision in case of an "observed inconsistency", or can we leave these revisions to human hands (which by definition is a bit ad hoc)?

The implications of the question almost propagate down to the view of the scientific method, in the sense of whether the induction of new theories based on experience is part of science or not. In the typical Popperian view, the focus is on falsifiability rather than on "fitness" in adaptation. The key questions are: which is more important? And can we have something fundamental that doesn't reflect this? I guess my opinion is clear since I asked the question, but I'm curious about how others see it.

So with risk analysis I meant something deeper, not just risk analysis in the context of human society, although that is certainly a valid "single example". But picture this risk analysis generalised and abstracted, and it can be applied to many things.

I also associate this with the debate over background-independent approaches. I.e., does it make sense from a fundamental viewpoint to have a background that cannot change, i.e. one that is not re-trainable? If so, what is the logic that has selected this background?

/Fredrik
 
vanesch said:
One or another erroneous concept in string theory will probably have almost zero impact, except maybe on the careers of some of the people involved.

Except for "direct damange", there is also the problem of scheduling of resources. It may not be fatal, but it may drag down our performance and "in the general case" will give competitors an advantage.

But this thread is by no means intended to discuss research/funding politics! My intention is to bring the focus to the philosophy of the scientific method in a larger, unified context.

/Fredrik
 
vanesch said:
The only "risk" there is a career risk, and that's not necessarily related to the correctness of the theory, but rather to its popularity at the moment of important career turning points :biggrin:

In the old days, a solid product that people need did the job.

Now, making investors think you have what they think the customers want you to have, so they can expect a return, is the new economy in a nutshell. Predicting expectations is more important than predicting what people really "need".

One might wonder why the life of an electron would be fundamentally different :wink: It's the expectations that govern life.

/Fredrik
 
I noticed the poll seems to be anonymous. Not that it matters much, but I was just curious. Most other polls seem to show the IDs of the voters.

Did I miss something when I submitted the poll?

/Fredrik
 
Fra said:
I noticed the poll seems to be anonymous. Not that it matters much, but I was just curious. Most other polls seem to show the IDs of the voters.

Did I miss something when I submitted the poll?

/Fredrik

Normally, you can choose yourself whether the poll is anonymous or not (before submitting it: you cannot change it afterwards of course).
 
Fra said:
What I mean to find out is whether a fundamental theory ought to have the property of self-correction in response to feedback (since IMO a theory is something that is emergent). I.e., should the theory itself contain a method or strategy of self-revision in case of an "observed inconsistency", or can we leave these revisions to human hands (which by definition is a bit ad hoc)?

Well, in my book, for the physical sciences, the "property of self correction" is experimental validation/falsification, and beyond that point the meaning of the word "correct" is not even clear. In what way is a theory that doesn't make any falsified predictions "correct" or "wrong"? I don't even think that the concepts of "correct" and "wrong" are uniquely defined for such a case. It's only in an observable context that these notions make sense - that is probably what the sarcastic comment "not even wrong" tends to convey. Say that my theory of the universe is "A H = 0". Is that theory right or wrong? Now, replace this equation by 2000 pages of symbols. What has changed now?

Your remark is probably more to the point in mathematics. When is mathematics "wrong"? Obviously, when erroneous symbolic manipulations took place (a sign error, ...). But is the axiom of choice "right or wrong"? And what if I give up on strict formal proof and do "intuitive mathematics"? When are certain manipulations "right" and when are they "wrong", if we already know that they are formally not correct (like interchanging integrations on divergent functions, or something of the kind)?

To me, right or wrong in the end only have a definite meaning wrt observation: "you told me it was going to be sunny at 3 o'clock, and look, it's raining: you were WRONG."

All the other rights and wrongs are, to a point, a matter of (social?) convention. So there are then two ways to make a theory right or wrong: change the theory, or change the social convention.

That said, I fail to see why you call it "risk analysis". Risk is normally "cost" times "probability". What's the cost of a wrong theory (wrong in what way?) and what's its probability of being wrong?
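As a minimal sketch of the "cost times probability" notion (Python; the outcomes, probabilities and costs below are entirely made up for illustration):

Code:
# expected risk = sum over outcomes of probability * cost (made-up numbers)
outcomes = {
    "model holds up":       {"p": 0.70, "cost": 0.0},
    "model needs revision": {"p": 0.25, "cost": 10.0},   # rework effort, arbitrary units
    "model is discarded":   {"p": 0.05, "cost": 100.0},  # sunk investment, arbitrary units
}

expected_risk = sum(o["p"] * o["cost"] for o in outcomes.values())
print(f"expected risk = {expected_risk:.1f}")  # 0.25*10 + 0.05*100 = 7.5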
 
I think we are closing in on my point; thanks for your interest.

This gets more philosophical now...

vanesch said:
Well, in my book, for the physical sciences, the "property of self correction" is experimental validation/falsification, and beyond that point the meaning of the word "correct" is not even clear.

Yes, I sort of agree, but I make some distinctions here that I think are important.

Validation/falsification gives us information/feedback about whether a particular, specific prediction/estimate is correct or not; it does not in itself define how this feedback is used to revise/tune our "predictive engine". This latter part is crucial to learning.

What I mean by self-correction is that the feedback from the validations is explicitly used as input to the new predictions; this ensures that when we are "wrong" we will be sure to learn systematically.
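As a minimal sketch of such a feedback loop (my own toy illustration in Python, not a proposed formalism): each validation outcome is fed back into the parameters that generate the next prediction, here via a simple Beta-Bernoulli update.

Code:
def predict(alpha, beta):
    """Current best estimate of the success probability."""
    return alpha / (alpha + beta)

def update(alpha, beta, success):
    """Feed the validation outcome back into the predictive engine."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

alpha, beta = 1, 1                                 # uninformative prior
for outcome in [True, False, True, True, False]:   # hypothetical feedback stream
    print(f"prediction: {predict(alpha, beta):.2f}, observed: {outcome}")
    alpha, beta = update(alpha, beta, outcome)

Here the feedback is not merely used to accept or reject the model; it is itself the input that produces the revised prediction.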

Without that, the only learning method is to randomly select another theory, or to make a "random" or arbitrary perturbation of the predictive engine and try again. Such a thing is also valid, but wouldn't one expect that as system complexity goes up, so does the sophistication of this feedback mechanism?

Also, what does falsification mean for, say, an electron? Falsification/validation refers to human activities.

What is the fundamental distinction between a validation followed by an adjustment, and a plain interaction?

vanesch said:
To me, right or wrong in the end only have a definite meaning wrt observation: "you told me it was going to be sunny at 3 o'clock, and look, it's raining: you were WRONG."

Except that the prediction and the verification were not available at the same time :) I see it as making bets based upon incomplete information. The bet might still be correct, but the fact that it was later shown to disagree is due to incompleteness of information. And the only fatal thing is to fail to learn! Depending on system mass and inertia, being wrong once or twice usually does not destabilise you, but if you consistently fail to adapt your predictive engine you will decay and not survive.

A theory that is wrong AND fails to adapt is simply "wrong".

A theory that is wrong but contains a self-correction mechanism to evolve on, and to learn from its "mistakes" or "incompleteness", can still be fit.

A theory that is right AND fails to adapt is just lucky! :bugeye:

My personal conjecture is that this is bound to apply at all levels of complexity. Perhaps some of the misunderstanding is because you originally thought I was talking about human endeavours and sociology. I am suggesting that these abstractions apply to reality itself: to spacetime structure, and to particles.

vanesch said:
That said, I fail to see why you call it "risk analysis". Risk is normally "cost" times "probability". What's the cost of a wrong theory (wrong in what way?) and what's its probability of being wrong?

If you take the view of self-preservation: a theory that is wrong, or a structure that is in conflict with the environment, will (I think) become destabilized in this environment and finally disappear.

Try to picture a system, say an atom, as a "theory" in a silly sense. This atom will respond to new input, and if it is in reasonable harmony with the environment it will be stable; if it fails to adapt (learn/change state) it will probably decay.

These are a lot of imprecise terms, and I appeal to your intuition. There is, to my knowledge, not yet a proper language for this. But I think we need a new foundation for a kind of relational probability that accounts for differences in information capacity. A given observer can, I think, not objectively talk about the existence of structures that are larger than what can fit in his own relations to the environment.

So the probability spaces are, I think, bound to be relative too. This is why the notion of probability gets fuzzy. The probability itself can only be estimated, and subjectively so. The magic is: why is there still an effectively objective, stable world? I think the foundational problem lies in self-organisation. These principles will yield to us the background we need for our theories.

Does my point show why I see falsification philosophy as a "simplification", and why the concept of self-correction seems fundamental?

/Fredrik
 
  • #10
vanesch said:
That said, I fail to see why you call it "risk analysis". Risk is normally "cost" times "probability". What's the cost of a wrong theory (wrong in what way?) and what's its probability of being wrong?

To try to define the probability of something by collecting statistics after it has happened is an awkward thing IMO. It means the probability needs information from the future to be defined, which makes no sense, as that information is inaccessible to the observer making his risk analysis. So the probability has to be estimated from the current information (what is retained from the past). But in this thinking, there is nothing that guarantees unitarity... i.e. this expected probability need not be conserved, as the probability space itself is possibly evolving. If the observed change does not fit the current structure and cannot be accounted for by a simple state revision, the state space itself needs to be revised as well. This is part of my suggested "self-correction".
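As a toy sketch of that kind of revision (my own illustration in Python; the outcome labels and observation stream are made up): a frequency estimate built only from retained data, where an unseen outcome forces the sample space itself to be enlarged, so the earlier estimates are not conserved.

Code:
from collections import Counter

counts = Counter()

for observed in ["A", "A", "B", "A", "C"]:   # hypothetical observation stream
    if observed not in counts:
        print(f"new outcome {observed!r}: enlarging the sample space")
    counts[observed] += 1                    # retain what has been seen
    total = sum(counts.values())
    print({k: round(v / total, 2) for k, v in counts.items()})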

/Fredrik
 
  • #11
But not all of the information of the past can be "retainable", because the observer's capacity to hold and store it is bound to be limited. The exception must be if the observer is growing, but that would probably be gradual rather than a black hole sucking everything up.

/Fredrik

Edit: This may seem too obvious to mention, since this correction is exactly what scientists do. But it is done by hand, and outside of the formalisms. I want this feedback INSIDE the formalism, so that nature itself can be seen to have this built in once we understand it a little better.

/Fredrik
 
  • #12
Interest seems low :)

Vanesch, since you're the only one participating here:

Do my posts come out unreadable, so that I fail to make myself understood, or are people genuinely uninterested in these issues, or do they consider them irrelevant to understanding reality and physics? :)

/Fredrik
 
  • #13
Let's see,
Model Building
Its ability to adapt to error...
Is this a requirement of a theory..?

A metaphysical insight. In the ancient world, say 500-300 BC, metaphysicians from Thales of Miletus to Democritus used to construct metaphysical blueprints of how they thought the Universe was constructed. Each philosopher in the chain advanced, modified and rebuilt essentially the same template - which eventually led Democritus et al. to postulate the existence of atoms...

The underlying argument used by these thinkers was that there is simply too much knowledge and diversity with respect to substance to formulate a grand solution. So if there was to be a solution, it had to come from first principles: an origin, a definition of how substance itself is generated. In other words, you begin with a definition of substance and then test to see if that definition accounts for what you know to be true, your knowledge base. Clearly, their models were simple, and necessarily had to adapt.

It is interesting the way Aristotle considered each of their theories and then developed his own using logic. His attempt to define the nature of matter and substance is the foundation upon which our science is supposed to rest (his definition of matter). You see, he would use intuition, logic and such to pursue a possible solution; often he ended in a contradiction, had to retrace his steps, and then continued to investigate the nature of matter and substance along another avenue.

I imagine that if one were to look for a more fundamental foundation (a unified theory) in this way, it would be adaptable, and it could be shaped by new knowledge, phenomena, etc.

My perception of science is that it is fragmented, that each discipline has a limit, some problem it tries to overcome. Just because a new phenomenon contradicts a specific theory or idea is, in my opinion, no reason to discard the theory as a whole. Certainly, if you have found some degree of success with a theory, then it is improbable that the original construct was entirely flawed. You have to ask yourself which portion, which idea, holds true, and then adapt or try to overcome the phenomenological obstacle.

A prime example would be the Bohr theory of the atom. Although decidedly flawed, it was a valid theory because it did reproduce empirical facts, it did predict - but it had limits. Therein, and reluctantly so, Bohr conceded to quantum mechanics, which inevitably absorbed many of the underlying 'acceptable' ideas of his model. It is my understanding that QT superimposes itself over Bohr's explanation of the H-atom and then exceeds it by reaching further.

This ability to adapt seems reasonable to me. There is another mechanism that I don't subscribe to. You see, there have been theories that have been found to be limited, and which then expand to account for new discoveries. I call these catalogue theories. For instance, the emergence of particle physics was catalyzed by the discovery of the pion, muons, kaons, etc. The initial models accounted for these particles... The problem is that so many more particles or resonances appeared. As a correction, theories like the quark theory, which were initially limited in the number and type of quarks, simply added more quark types and/or flavours to accommodate the data. In my opinion this is nothing short of a catalogue, since the quark theory is limited in its ability to be tested.

This, of course, is my philosophical opinion and should not be taken as fact. I keep getting warned not to include my own non-peer-reviewed ideas - but since this is a philosophical question pertaining to the philosophy of science, it should be OK... I hope.
 
  • #14
Thanks for your thoughts, Sean. Your "own ideas" are exactly what I asked for. I make my own judgments, so reviewed or not is a non-issue for me.

You describe a lot of the scientific process as implemented by human history, and in that context it's almost needless to state that self-correction is part of the scientific process. Not to adapt is the same as not to learn, or to refuse to accommodate evidence that does not fit the initial assumptions.

What I suggest is to analyse the self-correction part itself - so to speak, to make a scientific analysis of the scientific process itself - and then try to formalise this into a model, which by construction will almost be a learning model and a strategy for optimal learning. Such a model would also have a much wider area of application than physics alone, so the benefit of any effort put into it would be quite large, further increasing the motivation. Basically, try to model the modelling, and find the association between the modelling of the model and how physical evolution has taken place.

My question was about the personal opinions of others who have reflected on foundational physics: can a theory that fails to at least attempt to implement a decent level of learning power be fundamental?

Obviously most theories, if not all the theories we have, do not do this. So clearly a theory doesn't need this, but then those are more like effective theories - fit to special cases.

/Fredrik
 
  • #15
Maybe also a silly comment, but the evolutionary advantage of a self-correcting framework should be clear.

Considering the computing power we have these days, intelligent data processing (which I think is what modelling is) would boost a lot of progress, not only progress in physics. This is why I tend to have a wider scientific interest than just physics. I find many physicists have a very narrow interest, and what "isn't physics" is apparently irrelevant to them. I like to understand reality and I don't care what the label is... but I am fascinated by biology as well, in particular the level of sophistication and "intelligence" and adaptive power that you see in single cells and how they respond to the environment. And not to mention the human brain. But my extended interest came to me after the physics education... as a student I was a reductionist in mind... but when I started to study biology as part of a project I gained a new, healthy perspective on things. I've always been philosophical, but I am not "interested in philosophy" per se, except that the historical perspective on the philosophy of science is very interesting to read about.

/Fredrik
 
  • #16
I agree. I majored in biochemistry and physics, so mostly classical physics. Chemistry, organic and inorganic, gave me a powerful insight into how physics was shaped by their discoveries and how the structure of the atom was shaped by the chemical phenomena of the elements. Molecular models are easy to visualize...

I find it extremely frustrating how narrow minded quantum physicists are, to the point where they reject any philosophical perspective as to the foundation of science or even in the consideration of the underlying logic of their model. It's like they have their head in the clouds, electron clouds, and this prevents them from seeing any other alternatives.

I don't trust theories, as I have mentioned, that continuously add operators or terms to acquire accuracy, or theories that simply diversify to account for new data. I also think that a model should have a predictive quality of its own. For if a theory or model is to be successful, it needs to go further and predict more - even new facts about nature - than the previous one.

I've spent 15 years building subatomic and atomic models. In the beginning, there were lots of models. Eventually, there was just one. What if I had given up along the way because of a mistake! In this respect, each new piece of data helps to shape a theory. A model can and will adjust if the underlying framework possesses at least some logic which is consistent with the nature of the constructs involved.
 
  • #17
Sean Torrebadel said:
I find it extremely frustrating how narrow minded quantum physicists are, to the point where they reject any philosophical perspective as to the foundation of science or even in the consideration of the underlying logic of their model. It's like they have their head in the clouds, electron clouds, and this prevents them from seeing any other alternatives.

I definitely wouldn't want to make it a general statement, though, and say that all quantum physicists are narrow-minded :rolleyes: That's unfair.

I'm sure there are equally narrow-minded people in any area of interest. Perhaps it's a matter of personality, and it may depend on your relation to the subject.

But in any case I see your point.

I recall some excellent teachers of mine who were definitely anything but narrow-minded, and who impressed me with their ability to immediately extract the right questions from a cloudy context.

/Fredrik
 
  • #18
We already have a self-correcting 'theory'. It's called science.
 
  • #19
Sean, in order to understand your comment - what is your take on QM foundations? Are you looking for a deterministic interpretation?

There is a book "Physics and Philosophy" by W.Heisenberg from 1958, which elaborates some of the philosophy around QM. It was a long time since I read it but I recall it was fairly interesting to see what the philosophical thinking was in the head of one of those who were around at the time when QM was founded. This is a pretty good perspective.

For example, suppose I have issues with QM; then it's interesting to read the words of the founders and try to identify what, in the original context, is wrong. For example, in another classic, Dirac's "The Principles of Quantum Mechanics", both the founding points and the assumptions that are responsible for the problems are clearly stated, especially in the early chapters. At least from that book, it seems to me Dirac never had second thoughts about the objectivity and identification of a probability space. To me this is one of the key points, to mention one thing.

/Fredrik
 
  • #20
Hurkyl said:
We already have a self-correcting 'theory'. It's called science.

Maybe you didn't read all the posts?

IMO, science is not a theory, and in particular not a formalised one. Our effective science is implemented using human communities and human intelligence, which certainly isn't that bad, but I think we can do better.

/Fredrik
 
  • #21
Perhaps I was just venting my frustration about Q physicists, certainly not all. The greatest problem I have with the foundation is the Heisenberg uncertainty principle: the idea that measuring something changes the thing being measured. You see, this is a true statement. False logic, however, is to state that because this is true, we should give up trying to visualize the electron as a particle with a trajectory. Instead, I am told that I need to see the electron in a linear sense, over a period of time, as a probability cloud.
There are things in nature that reveal themselves, that do not require us to do anything but record what they are. For instance, electrons are involved in the mechanism for generating atomic spectra. In that regard, I do not need to measure the electrons themselves so much as what they have produced. Working backwards, I should like to think that such phenomena speak for themselves. And therein lies the second problem I have with QT. The spectra of the elements are specific, clear, concise and essentially a blueprint or image that can be interpreted to infer the structure of the atoms. Logic dictates that if the spectra are so well structured, then the atoms themselves are also clearly defined. Accordingly, I have a hard time visualizing the quantum atom, its probabilities and such, as a true reflection of atomic structure. Instead, I see it as exactly what it is: a highly successful theory that approximates the behaviour of electrons. It is, in my mind, a sophisticated, time-consuming probability theory. So I'm not denying the theory, I just believe that there is a better way...
 
  • #22
I have W. Heisenberg's 1953 book "Nuclear Physics". I didn't know that he wrote one about "Physics and Philosophy". If it's anything like the one I have, it will be a good read. I already have a list of books to acquire re: QM. Dirac's is on the list.
 
  • #23
I enjoy an open-minded discourse. It reminds me of chatting with my friends in 1st year about the speed of light. We were not bound by the confines of any system. We were definitely outside of the box. We challenged each other to develop a mechanical model that could justify how such a thing was possible. Anyway, herein lies the discord: some people I have encountered in this forum are more motivated by what they have been taught than by what they think...

Back to you.
 
  • #24
Fra: have you tried learning anything about statistical modelling and machine learning? I think this might contain a lot of information in which you would be interested.


One particularly relevant topic is overfitting. Models that are too good at adapting are generally worthless for any practical use -- if you try to train the model by feeding it a lot of input/output pairs, the model winds up simply 'memorizing' the training data, and fails miserably when shown new sets of input data. This has been proven both in theory and in practice.
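A standard toy illustration of this (Python with synthetic data; the curve, noise level and polynomial degrees are arbitrary choices): a degree-9 polynomial fitted to 10 noisy points drives the training error to essentially zero, yet typically does worse than a constrained degree-3 fit when checked against the underlying curve.

Code:
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)

flexible = np.polyfit(x_train, y_train, deg=9)   # enough freedom to fit every wiggle of the noise
modest = np.polyfit(x_train, y_train, deg=3)     # constrained model

x_new = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_new)
for name, coeffs in [("degree 9", flexible), ("degree 3", modest)]:
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
    print(f"{name}: training MSE = {train_mse:.4f}, MSE against the true curve = {new_mse:.4f}")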


Another relevant point is the need to separate training and testing data. If you use all of your data to train a model, it's "rigged" to perform well on that data and you have no objective method for evaluating how good it is. In order to evaluate a model, you have to test it on data that it's not allowed to train upon. Again, this has been proven both in theory and in practice.
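A minimal sketch of that separation (Python, the same kind of synthetic data as above; the split sizes are arbitrary): the model is fitted only on the training portion and judged only on points it never saw.

Code:
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

idx = rng.permutation(x.size)
train, test = idx[:30], idx[30:]   # hold out 10 points the fit never sees

coeffs = np.polyfit(x[train], y[train], deg=3)
train_mse = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
test_mse = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
print(f"training MSE = {train_mse:.3f}, held-out MSE = {test_mse:.3f}")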


For a whimsical counterexample, observe that a simple computer database is the ultimate "self-adapting" theory. It simply records the result of every experiment ever performed, and when asked to predict something, it makes a "prediction" by simply looking to see if the experiment has been performed before and returning the observed result. If not, it simply makes a random guess. (And, of course, the actual result is then stored in the database.)
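A toy version of that database "theory" (Python; the experiment names and outcomes are made up): it adapts perfectly by storing every result, yet predicts nothing it has not already seen.

Code:
import random

class DatabaseTheory:
    def __init__(self):
        self.records = {}

    def predict(self, experiment):
        if experiment in self.records:
            return self.records[experiment]   # "prediction" by lookup
        return random.choice(["up", "down"])  # never seen before: blind guess

    def record(self, experiment, result):
        self.records[experiment] = result     # adapt by storing the actual outcome

theory = DatabaseTheory()
print(theory.predict("spin of electron 42"))  # a guess
theory.record("spin of electron 42", "up")
print(theory.predict("spin of electron 42"))  # now "right", but only by memory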
 
  • #25
Sean Torrebadel said:
I find it extremely frustrating how narrow minded quantum physicists are, to the point where they reject any philosophical perspective as to the foundation of science or even in the consideration of the underlying logic of their model. It's like they have their head in the clouds, electron clouds, and this prevents them from seeing any other alternatives.
Quantum physicists (rightly!) reject subjective arguments against their field of study. (e.g. "I refuse to believe nature can be that weird. Therefore, I believe there is a better way.") It is entirely unfair of you to criticize them for defending their theory when others refuse to give it a fair consideration.
 
  • #26
That's some powerful insight, Hurkyl. Also, 'data dredging' may be of relevance. I know that in banking people do 'risk analysis' for loans; it's all statistical...
 
  • #27
Hurkyl said:
We already have a self-correcting 'theory'. It's called science.
I don't know if I agree with this statement. I'm not trying to be difficult, but science has a track record, a general practice. Specifically, and I quote:

"the act or judgement that leads scientists to reject one paradigm is always simultaneously the decision to accept another, and the judgement leading to
that decision involves the comparison of both paradigms with nature and with
each other" Thomas S. Kuhn, "The Structure of Scientific Revolutions, Ch VIII, p.77

In other words, when faced with a crisis, a phenomenon or new discovery that is inconsistent with current science, scientists will not discard the original theory until it can be replaced. You have to wait. Then again, as a whole, science could be seen as self-adjusting - over centuries. I think what we're after here is an individual model or theory which adjusts itself. Not sure. Kuhn, in the above publication, has a good section on the response to scientific crisis. Is there any other way?

From my perspective, a theory which stands even in the face of a contradiction, one that continues to grow, seems even more insurmountable. Is there a point where a theory has grown so large and so abstract that it becomes impossible to overcome?

This leads me to the concept of 'ad hoc'. In the past, scientists who could be considered 'data miners' or 'dredgers' produced theories that required 'ad hoc' assumptions. When does an 'ad hoc' assumption become acceptable? Is this type of model adjustment a necessary form of postulation, of hypothesis?
 
  • #28
Hurkyl said:
Quantum physicists (rightly!) reject subjective arguments against their field of study. (e.g. "I refuse to believe nature can be that weird. Therefore, I believe there is a better way.") It is entirely unfair of you to criticize them for defending their theory when others refuse to give it a fair consideration.

My apologies, Hurkyl, I spoke out of turn; I am the one being stubborn. Respectfully... I do appreciate your perspectives, and I'm certainly not above criticism.
 
  • #29
Hurkyl said:
One particularly relevant topic is overfitting. Models that are too good at adapting are generally worthless for any practical use -- if you try to train the model by feeding it a lot of input/output pairs, the model winds up simply 'memorizing' the training data, and fails miserably when shown new sets of input data. This has been proven both in theory and in practice.

This is an excellent point, and no doubt an important one, that I have tried to address, though perhaps not very explicitly in this thread. But this point of course does not invalidate the whole concept of adaptation; it just means that the adaptive power needs to be put under control, so that the model does not adapt to just anything. Here enter the concepts of inertia of information and relational capacity. I have done philosophical ramblings about this in a few threads, and also explained that this is something I am personally working on, but there is a lot left to do, and the formalism is still under construction.

This is part of my issue with, for example, the string framework. Generating too many options with no matching selection strategy clearly stalls progress, and this is IMO related to "overfitting" in a certain sense, or to the balance between option generation and option selection.

Your point is well taken; it complicates adaptation but does not invalidate it. It just means we need principles that give us a proper fit, neither over- nor underfit. What the measure of a proper fit is, is also something that needs to be addressed. It's hard, but not hard enough to be impossible.

Hurkyl said:
Another relevant point is the need to separate training and testing data. If you use all of your data to train a model, it's "rigged" to perform well on that data and you have no objective method for evaluating how good it is. In order to evaluate a model, you have to test it on data that it's not allowed to train upon. Again, this has been proven both in theory and in practice.

This is a point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing, in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.

Another point may also be: why would one expect there exists an objective method to determine what is good? For an organism, survival is good, but sometimes this is bad for somebody else.

Hurkyl said:
For a whimsical counterexample, observe that a simple computer database is the ultimate "self-adapting" theory. It simply records the result of every experiment ever performed, and when asked to predict something, it makes a "prediction" by simply looking to see if the experiment has been performed before and returning the observed result. If not, it simply makes a random guess. (And, of course, the actual result is then stored in the database.)

One of the constraints is that an observer in general does not have infinite memory capacity. So the problem becomes one of remodelling the stored data, and of decision-making on what parts of the data to release. It seems natural to think that the released data would be what the observer thinks is least important.

If we consider an observer as a self-assembled transceiver, it receives and transmits data. This reception and transmission is analogous to physical interactions. And wouldn't it seem quite natural that what constitutes a stable particle is dependent on the environment? So it is trained/selected against the environment?

The question is still exactly how to do this, and how it can be beneficial. I don't have the answers, but I'm working on it. I'm interested in opinions in this direction because it seems so many people are working on approaches that do not relate to this issue. I am curious to understand why.

/Fredrik
 
  • #30
What about the immune system - isn't that a self-adapting mechanism?
 
  • #31
Sean Torrebadel said:
Perhaps I was just venting my frustration about Q physicists, certainly not all. The greatest problem I have with the foundation is the Heisenberg uncertainty principle: the idea that measuring something changes the thing being measured. You see, this is a true statement. False logic, however, is to state that because this is true, we should give up trying to visualize the electron as a particle with a trajectory. Instead, I am told that I need to see the electron in a linear sense, over a period of time, as a probability cloud.
There are things in nature that reveal themselves, that do not require us to do anything but record what they are. For instance, electrons are involved in the mechanism for generating atomic spectra. In that regard, I do not need to measure the electrons themselves so much as what they have produced. Working backwards, I should like to think that such phenomena speak for themselves. And therein lies the second problem I have with QT. The spectra of the elements are specific, clear, concise and essentially a blueprint or image that can be interpreted to infer the structure of the atoms. Logic dictates that if the spectra are so well structured, then the atoms themselves are also clearly defined. Accordingly, I have a hard time visualizing the quantum atom, its probabilities and such, as a true reflection of atomic structure. Instead, I see it as exactly what it is: a highly successful theory that approximates the behaviour of electrons. It is, in my mind, a sophisticated, time-consuming probability theory. So I'm not denying the theory, I just believe that there is a better way...

I see. I kind of smelled this view from your other comments.

I don't share your view and disagree with a lot of your reasoning, but I still see your perspective.

I think you might want to read Heisenberg's philosophy book. It's a classic and I'm sure you can find it on Amazon or any other large bookshop. It's really an informal philosophical book; from what I remember there isn't a single formula in it, and it discusses the background philosophy and interpretations only.

Still, I also have issues with QM, but from what I can read out of your comments they are of a different nature. I have no problem with leaving the deterministic world of classical mechanics, but given that, there are OTHER issues with QM, which rather suggest that reality is even MORE weird than standard QM says, taking us even further away from classical mechanics.

/Fredrik
 
  • #32
Sean Torrebadel said:
I have W. Heisenberg's 1953 book "Nuclear Physics". I didn't know that he wrote one about "Physics and Philosophy". If it's anything like the one I have, it will be a good read. I already have a list of books to acquire re: QM. Dirac's is on the list.

In particular, the first chapter (or chapters?) of Dirac's book is interesting from the philosophical perspective, because that's where the principles and the foundations are laid out. Then, of course, he works out the implications and applications in the other chapters.

/Fredrik
 
  • #33
I don't think the above books are necessarily the best pedagogical choice to "learn how to apply QM". Their main value IMO is that they are, first of all, written by the founders of the theory - making it more interesting to analyse the wording - and that they elaborate the foundations and philosophy.

/Fredrik
 
  • #34
Sean Torrebadel said:
What about the immune system - isn't that a self-adapting mechanism?

By people working on other approaches I mainly referred to physics.

There seems to be a small group of people interested in these things, but it seems small compared to other groups, like the string group.

I'm no expert on that, but normally the immune system is considered to have two parts, the innate and the adaptive immune system. But of course, in the big picture, even the innate immune system and the adaptive immune system have been selected during evolution too.

As you probably know better than me, this applies, I think, to many cellular regulations. Regulation takes place on several parallel levels, from transcription to various enzyme regulations. There are both short-term and long-term responses, which each have different pros and cons, so they complement each other. But even the adaptive strategies are adapting, as the adaptive strategy can itself adapt.

/Fredrik
 
  • #35
Fra said:
This is part of my issue with, for example, the string framework. Generating too many options with no matching selection strategy clearly stalls progress, and this is IMO related to "overfitting" in a certain sense, or to the balance between option generation and option selection.
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space. If you fail to prove such a claim, then you have a strong lead.

I know very few actual details about the general programme of string theory, so I cannot comment directly on it.




Fra said:
This is a point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing, in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.
I don't see how any of this is relevant to the creation and evaluation of physical theories.

Fra said:
Another point may also be: why would one expect there exists an objective method to determine what is good?
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.





Fra said:
One of the constraints is that an observer in general does not have infinite memory capacity. So the problem becomes one of remodelling the stored data, and of decision-making on what parts of the data to release. It seems natural to think that the released data would be what the observer thinks is least important.
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)

Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...

These are clearly bad theories -- but by what criterion are you going to judge them as bad?


Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.

This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?
 
  • #36
Sean Torrebadel said:
In other words, when faced with a crisis, a phenomenon or new discovery that is inconsistent with current science, scientists will not discard the original theory until it can be replaced.
Theories are never discarded. The very fact that a model became a scientific theory means that the model has demonstrated the ability to accurately and consistently predict the results of some class of experiments. When the theory fails to predict the result of a new kind of experiment, that doesn't invalidate the theory's proven success on the old kind of experiment.

For example, pre-relativistic mechanics works wonderfully for most experiments; in fact, its flaw wasn't even (originally) detected through experimental failure!

Yes, we now know that special relativity is a better description of space-time than pre-relativistic mechanics... but that doesn't change the fact that pre-relativistic mechanics gives correct results (within experimental error) for most experiments, and so it is still used in calculation. Furthermore, the success of pre-relativistic mechanics put a sharp constraint on the development in special relativity; in fact, many of the laws of special relativity are uniquely determined by the constraints "must be Lorentz invariant" and "pre-relativistic mechanics is approximately right".
 
  • #37
Hurkyl said:
We don't know what the next generation theory of the universe will look like. One general program for the search is to consider vast classes of potential theories, and test if they're suitable. If you can prove "theories that have this quality cannot work", then you've greatly narrowed down the search space.

What I am looking for is a strategy where the search space is dynamical - therefore I'm looking for a new "probabilistic" platform. This means that we can have both generality and efficiency of selection; the trade-off is time scales, but we haven't discussed that in this thread.

Hurkyl said:
Fredrik said:
This is a point, but it can also be elaborated.

If we consider how nature works, in reality, I see no such distinction. Life evolves; there is no "training period". The "training" is ongoing, in the form of natural selection and self-correction. Of course, an organism that is put into a twisted environment will learn how to survive there.

I don't see how any of this is relevant to the creation and evaluation of physical theories.

The way I see it, there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.

I envision a model which is scalable in complexity.

I think the imagined evolution of the universe - big bang, particles, etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions. My main motivation is physics, but what is physics? Where does physics end and biology start? I see no border and I see no reason to create one.

Hurkyl said:
What's wrong with "correctly predicts the results of new experiments"? After all, that is the stated purpose of science, and the method by which science finds application.

The problem with that is that it gives the illusion of objectivity, while it still contains an implicit observer - whose experiments? And who is making the correlation?

There are no fundamentally objective experiments. For us, many experiments are of course effectively objective, but that is a special case.

Note that with "who" here I am not just referring to "what scientist", I am referring to an arbitrary observer. Human or not, small or large. When two observer are fairly similar it will be far easier to find common references, but like I said I see this is a special case only.

Hurkyl said:
Even if the database is only large enough to remember one experiment, it still fulfills the criterion that it adapts to new results. (And perfectly so -- if you ask it the same question twice, you get the right answer the second time!)

Clearly if the "memory" of the observer is extremely limited, so will it's predictive power be. The whole point is that the task is how a given observer can make the best possible bet, given the information at hand!

To ask a more competent observer that has far more information/training what he would bet is irrelevant. The first observer, is stuck with his reality to try to survive given his limited control.

A "simple" observer, has limits to what it CAN POSSIBLY predict. An observer that has a very small memory, has a bounded predictive power no matter how "smart" his predictive engine is.

What I set out to find is a generic description of the best possible bet. This is by definition a conditional, subjective probability.

There are no objective probabilities. This is one of the major flaws (IMO) in the standard formalisms.

The complexity of the observer (which I've called relational capacity, but which can also be associated with memory, or energy) implies an inertia that prevents overfitting in the case of an observer that has the relational power to do so - because the new input is not adopted in its completeness; it is _weighted_ against the existing prior information, and the prior information has an inertia that resists changes.
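As a toy sketch of that inertia (my own illustration in Python, not Fra's formalism; the weights and observations are made up): new input is blended with the accumulated prior in proportion to how much has already been retained, so a "heavier" observer moves less in response to the same surprising observations.

Code:
def weighted_update(estimate, weight, observation):
    """Blend a new observation into the estimate in proportion to the prior weight."""
    new_estimate = (weight * estimate + observation) / (weight + 1)
    return new_estimate, weight + 1

light = (0.5, 1)     # little retained information: adapts quickly
heavy = (0.5, 100)   # much retained information: strong inertia

for obs in [1.0, 1.0, 1.0]:   # a run of surprising observations
    light = weighted_update(*light, obs)
    heavy = weighted_update(*heavy, obs)

print(f"light observer: {light[0]:.2f}, heavy observer: {heavy[0]:.2f}")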

Hurkyl said:
Or even better -- maybe it doesn't remember anything at all, but simply changes color every time you ask it a question and show it the right answer. It's certainly adapting...

These are clearly bad theories -- but by what criterion are you going to judge them as bad?

Something that just changes colour (boolean state) probably doesn't have the relational capacity to do better. Do you have a better idea? If your brain could store just one bit, could you do better? :)

This is really my point.

Also, observers are not necessarily stable; their assembly is also dynamical in my vision. An observer assembly either adapts or, if it fails to learn, will be destabilised and eventually replaced by more successful assemblies.

Hurkyl said:
Let's go back to the larger database. I never said it was infinite; just large enough to store every experiment we've ever performed. We could consider a smaller database, but let's consider this one for now.

This database perfectly satisfies your primary criterion; each time it answers a question wrongly, it adapts so that it will get the question right next time. Intuitively, this database is a bad theory. But by what criterion can you judge it?

Note that the discussion of a possible path to implementing a self-correcting formalism has now moved on to my personal vision, which is under construction. This wasn't the main point of the thread, though. I'm currently using intuition to try to find a new formalism that satisfies what I consider to be some minimal requirements.

To reiterate the original question: Are we here discussing ways to implement what I called "self correcting models", or are we still discussing the relevance of the problem of finding one?

Needless to say, about the first question, I declare that while I'm working on ideas I have no answer yet.

/Fredrik
 
  • #38
Having read many other papers and also some of the books of the founders of QM, one thing stands out more than anything: how the concept of an objective probability space is adopted, and how the sum of all possibilities is always 100%. The question is how to make a _prior relation_ to a possibility before we have any input.

I suspect you find some of my comments strange, but if I am to TRY to put a finger on what I think is the single most important issue that yields us differing opinions here, it would be the justification of Kolmogorov probability theory and how it is axiomatized into physics. This is IMO where most issues can be traced to, and this is also the origin of my alternative attempts here. It is not just the Bayesian vs frequentist issue; it's also the concept of how to relate to possibilities without prior relations - it doesn't add up to me, and that's it. This has been debated before, but I think it is the root of my ramblings here, and I try to develop it and find something better.

/Fredrik
 
  • #39
Fra said:
The way I see it, there is only one nature. Physics, chemistry and ultimately biology must fit into the same formalism in my vision, or something is wrong.
...
I think the imagined evolution of the universe - big bang, particles, etc. - should be described in the same language as biological evolution. I see no fundamental reason not to have one theory for both. This is exactly why I'm making so many abstractions.
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

Incidentally, "biology" is not synonymous with "biological evolution". Evolution is simply a theory that explains why we see the particular biological structures we see. There are already parallels to this in other fields -- e.g. we see stars because that is a relatively stable configuration of matter that 'beats out' many other configurations that matter could have taken.



I suspect you find some of my comments strange, but if I am to TRY to put a finger on what I think is the single most important issue that yields us differing opinions here, it would be the justification of Kolmogorov probability theory and how it is axiomatized into physics.
There are formulations of QM where probabilities are derived, not axiomatic.



Hurkyl said:
These are clearly bad theories -- but by what criterion are you going to judge them as bad?
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, becuase it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.



Mainly, in this discussion, I feel like I'm adrift at sea, without any direction.
 
  • #40
Hurkyl said:
But why should philosophy also fit into the same formalism? Why should theories be described by the same language?

It depends on what you mean by "why".

I can only speak for myself, and I am a human; for me all my thoughts are real, and I wish to understand what I perceive. It will make me more fit, and give me more control of things. Part of understanding reality is also to understand the understanding itself, at least to me. When you see repeating patterns in nature - not only patterns in how nature looks, but also patterns in how nature changes, and possibly in how nature has been selected - then I, as an observer perceiving such a sensation, have no choice but to deal with it.

But this is my subjective way. You do it your way without contradiction.

My view of how things can be so stable and apparently objective, despite my belief in such a fundamental mess, is that we have evolved together and been trained together. So it is not a coincidence that we have an effective common ground.

Hurkyl said:
There are formulations of QM where probabilities are derived, not axiomatic.

If you refer to the measurement algebras you mentioned in the other thread, I see them as close-to-equivalent starting points that may look better but still contain implicit, nontrivial assumptions in mapping reality onto the formalism - it's not really a radical reformulation. Nothing wrong in itself though, because what can we do? I just do not see the world this way; it does not match what I see (for whatever reason).

How can you consider the results of measurements without considering how these results are stored and processed? The concept of measurement usually represented by the operators is IMO an idealisation (as is everything, even my suggestions, of course) that I don't find good enough.

There is more than measurement. There is the concept of retaining the results (information) obtained from measurements. What about this? What determines the retention capacity of an observer? What happens if a "measurement" yields more data than the observer can relate to? There are many issues here that I haven't seen resolved in "standard QM".

Hurkyl said:
My point is that you seem to be lacking a clear aim. (I think) it's clear you want a self-correcting model of something... but for what purpose? My specific examples are trying to prompt you into making a statement like

"This model is a bad one, becuase it fails to ________________."

My thought being that, if you can make a few statements like this, it might become clear what purpose the self-correcting model is meant to fulfill.

If I understand you right, you want me to give a generic selection criterion for models? Well, that is part of the problem I am trying to solve, and if you don't appreciate the problem from first principles, I think I'll have a hard time conveying to you the relevance of the question until I already have the answer, which will prove its power.

But the rough idea is that the self-organised structures called observers are themselves a representation of the model, which is exactly why I constrain the mathematical formalisms to more or less map onto the microstructure and the state of the selected observer. What I hope to find are the equations that yield an answer to the question of what a "bad model" is, in that it is simply self-destructive.

So the answer to why a model is bad is not a simple one. It can only (as far as I understand) be answered in a dynamical context.

/Fredrik
 
  • #41
I should say I appreciate the discussion even though it seems like we disagree!

Hurkyl, just to get a perspective on your comments, I'm curious: what's your main area of interest, and what problems do you focus on?

/Fredrik
 
  • #42
Clarification:

Fra said:
What I hope to find are the equations that yield an answer to the question of what a "bad model" is, in that it is simply self-destructive.

What I meant to say is that self-destruction is how I define "bad", "wrong" in this context.

A good model is at minimum self-preserving but also possibly even self-organising and growing.

/Fredrik
 