Instrumentalism and consistency

  • Thread starter: Demystifier
Summary:
Instrumentalism, or logical positivism, posits that only measurable phenomena are meaningful, rendering concepts like reality and hidden variables meaningless. The discussion raises skepticism about whether any physicist fully adheres to this view without invoking unmeasured concepts. Some participants argue that while instrumentalism may have historical significance, modern physics increasingly emphasizes symmetry rather than strict positivism. The conversation also touches on the challenges of reconciling instrumentalist views with the complexities of quantum mechanics, suggesting that true consistency in this philosophy is difficult to achieve. Overall, the consensus leans towards the idea that positivism is unlikely to regain prominence in the field.
  • #61
We have a very precise "mental image" of the quantum world. It's called quantum theory!
 
  • #62
Demystifier said:
Let me use a simple analogy. Suppose that you watch your computer screen, on which two windows are open, window 1 and window 2. And suppose that you see only window 1. Why don't you see window 2? A natural intuitive explanation is that window 2 must be behind window 1. And really, when you move window 1, you start to see window 2. There is nothing more intuitive for a human than to think that window 2 was there, behind window 1, even before you moved window 1. On the other hand, if you know something about how the computer and monitor really work, you know that it isn't true. In reality, window 2 was "made" on the monitor at the moment when you moved window 1; it was not there before. And yet, even if you are a computer expert, even if you programmed the computer that way yourself, you will still find it cognitively natural and useful to think that window 2 was there all the time.

That's a hidden variable, or realist, interpretation. Even though there is no window 2 before you see it, you interpret it as being there even when you don't see it. And even though this interpretation is wrong, it is a very useful interpretation for a human.

In the same sense, a hidden variable interpretation of QM, such as Bohmian mechanics, can be useful as a thinking tool, even if Bohmian trajectories don't exist in reality. The point of Bohmian mechanics is not to restore determinism. Its point is to restore realism, that is, the view that things are there even when we don't observe them.

Such an instrumentalist view of BM!
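
As a toy illustration of the repaint behaviour in the quoted analogy, here is a minimal sketch of a lazy-repaint loop. The names and structure are hypothetical and heavily simplified compared to any real windowing system; the point is only that window 2's pixels are produced on demand, not stored "behind" window 1:

```python
# Hypothetical, heavily simplified compositor sketch: pixels for a window
# are generated only when a region of it becomes exposed.
class Window:
    def __init__(self, name):
        self.name = name

    def draw(self, region):
        # The pixels for this region are rendered *now*, on demand.
        print(f"rendering {self.name} in {region}")

def repaint(exposed_regions):
    # Called when window 1 moves: only the newly exposed regions are drawn.
    for window, region in exposed_regions:
        window.draw(region)

w2 = Window("window 2")
# Moving window 1 exposes part of window 2, which is rendered only now:
repaint([(w2, "newly exposed area")])
```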
 
  • #63
vanhees71 said:
We have a very precise "mental image" of the quantum world. It's called quantum theory!
But the problem is that the optimized machine code that runs on your brain may not execute on other brains.

And I think we have yet to find the universal code that can be interpreted on arbitrary processors, like, for example, protons.

QM is not quite there yet. I.e., how can we define information, information processing and action in a hardware-independent way?

/Fredrik
 
  • #64
I don't know what runs on my brain, but I think all (theoretical ;-)) physicists have a pretty proper understanding of quantum theory, and it's pretty much the same (despite the unphysical interpretational part, which is in my opinion not a subject of science but rather a kind of religious worldview).
 
  • #65
vanhees71 said:
I don't know what runs on my brain, but I think all (theoretical ;-)) physicists have a pretty proper understanding of quantum theory, and it's pretty much the same (despite the unphysical interpretational part, which is in my opinion not a subject of science but rather a kind of religious worldview).
I think we can understand it at different levels. First there is the general confusion, which is natural for someone who has not even taken a regular QM course - maybe you have read a popular book on the topic, without a single calculation, trying to describe QM in words. This is a kind of beginner's confusion.

But beyond that first obstacle, understanding it as a tool for experimental predictions is one thing, and I think this is what you refer to. Trying to understand it well enough that you have a feeling for how we can modify this tool, or apply it beyond its original domain, towards a better tool, is another.

I also think that interpretations that are "interpretations only", that do not aim to improve the theory, tend to be uninteresting to discuss if they do not add the value of extra insight with implications for how we can solve the open problems.

But if attacking the foundational problems of physics - such as how to merge QM with gravity and how to complete the unification program of HEP - somehow suggests that one way of "interpreting" quantum mechanics makes the next generation of the theory clearer, then that interests me and I think it is relevant.

/Fredrik
 
  • #66
Well, of course I'm referring to science and science only. It's a scientific theory, and it is not the aim of scientists to provide anybody with some adequate subjective worldview but to describe the objective facts of nature that can be observed and are reproducible with ever increasing accuracy. There is no way to describe scientific theories in another language than mathematics, and the intuition necessary to work with theories in an inventive way builds by applying the formalism to real-world experiments and observations. QT is a huge success story. There is no known contradiction between theory and observations yet. To the contrary, QT has passed very hard tests over a huge domain of phenomena (both in system size, reaching from the subatomic level to macroscopic condensed-matter systems, and in energy, from ultralow high-precision experiments with "cold neutrons", atomic and molecular physics, and condensed-matter physics to the highest available energies at relativistic particle colliders like the LHC). Another aspect is the successful use of the theory in the development of modern technology. Already the laptop on which I'm typing this posting is an amazing result of the application of modern physics!

Of course, there are still very fundamental elementary questions open, like the adequate QT of gravitational interactions, but that's not a metaphysical but a scientific issue, which won't be solved by philosophical speculations but only by ongoing experimental and theoretical research in physics. Most of the interpretational issues discussed here in this forum are, in my opinion, pretty unimportant from the point of view of natural science, although they are, of course, interesting for the philosophy of science.
 
  • #67
I get your main point and largely agree, and indeed QM is a success story. I also agree that many (not all!) interpretational issues on the forums are not very "important".

But my impression is that your perspective attempts to simplify rather than elaborate the process of "theoretical research", in a way that is a bit inhibitory. This is why I defend these discussions and think that constructive thinking about these things is a good thing. But indeed, considering the rules of the forum, SOME of these discussions IMO belong in the BTSM section.

The situation today is:
- a lack of common constructing principles for QM and GR, and
- theorists starving for NEW unexplained data to play with.

Given this, it is very clear that the constructing principles of most of today's theoretical research are not the "ideal" direct feedback from experiment. Theoretical research goes through a long process BEFORE it even gets to the point of producing a real falsifiable statement. In some cases this leads to what is perceived by some as pathological excuses.

So what is the possible takeaway from this?

1) The theoretical business today has gone unhealthy and lost its connection to the "scientific method". Just look at string theory - what falsifiable things have these guys been doing for the last 30 years? Let's redirect funding from this waste to building more powerful accelerators, where we feed "real science". Then, once we see NEW data, we can start feeding the theorists again.

2) Popper's simplistic abstraction of the scientific process is not adequate, because it puts all the focus on the falsification event and does not elaborate on the method behind hypothesis generation - in the CONTEXT of evolving scientific knowledge.

IMO (1) is in a sense a possible stance, especially from the perspective of society. After all, the world has many other things to spend money on that must be weighed against physics research: climate, world peace etc. I could respect this view, even if I do not agree with it.

But I personally think (2) is the better takeaway. Note that (2) does not say that (1) is WRONG; it just claims that it is an oversimplification that inhibits efficient hypothesis generation. Today the COST of hypothesis generation has increased due to the complexity of scientific knowledge. In this picture, what you label philosophy is a necessary PART of "theoretical research".

After all, look at history, and the original writings of the founders of various theories - Einstein, Heisenberg, Bohr etc. They contain PLENTY of "philosophical elements" that, as you can see, were necessary parts of the process of COMING up with their theories. Once a theory is on the table, it is a different matter to falsify or corroborate it (except that it takes technical skills, physics engineering and money). But a hard part is the process that leads to the new theories. There the falsification event only explains when to kill a theory; it does not help us to intelligently generate a wise set of hypotheses. In Newton's day it was CLEAR that these things were philosophy, but note that the status of our understanding today HAS reached the state of "theory of theory". Theory of theory is really as close to the philosophy of science as you can come, and I insist that not acknowledging this, and discouraging discussions about it, is a mistake.

/Fredrik
 
  • #68
I fully agree. Popper's attempt to clarify the scientific process is indeed oversimplified. However, there's some truth in it: As a theoretical physicist you have to create models (and rarely even theories) that make contact with observations, so that the model or theory can be tested. To create a new model or theory you need experimental input. For me it's pretty clear that only very rarely, if ever, has there been a pure theoretical idea without empirical input that led to a breakthrough in our understanding. E.g., the success of the Standard Model (Weinberg's paper "A Model of Leptons" has its 50th anniversary these days; see the November issue of the CERN Courier) is based both on closely following the empirical findings and on ingenious use of the mathematical concepts underlying perturbative relativistic QFT, including the use of the Higgs mechanism (invented by Anderson in the context of superconductivity), finally leading to 't Hooft's and Veltman's famous proof of the renormalizability of Higgsed and un-Higgsed non-Abelian gauge models.
 
  • #69
vanhees71 said:
I fully agree. Popper's attempt to clarify the scientific process is indeed oversimplified. However, there's some truth in it: As a theoretical physicist you have to create models (and rarely even theories) that make contact with observations, so that the model or theory can be tested. To create a new model or theory you need experimental input. For me it's pretty clear that only very rarely, if ever, has there been a pure theoretical idea without empirical input that led to a breakthrough in our understanding.
As I see it, the question is how to encourage experimentalists to look out for observations that can't be explained by established theory.
I believe interpretations can help with that. But the current mess of many half-baked interpretations based on a lot of hand-waving won't do the trick. It would be better if there were some principles for evaluating interpretations, based on agreement with existing data, self-consistency, and alignment with the scientific approach.
 
  • #70
vanhees71 said:
However, there's some truth in it: As a theoretical physicist you have to create models (and rarely even theories) that make contact with observations, so that the model or theory can be tested. To create a new model or theory you need experimental input

Yes of course.

However, as complexity increases, the theoretical hypothesis is also what guides us as to WHICH experimental data to look for. So there is an important connection here.

My personal analysis of the situation is that only looking for more extreme HEP or more extreme cosmological data seems to lack imagination.

We have plenty of complex-system interactions where order emerges out of chaos in ways we can NOT predict, and these do not require superaccelerators and supertelescopes. What they likely require, on the other hand, is more powerful computers for modelling.

You might think that this is not fundamental physics, but I disagree. The insight is that reductionism works only up to a certain complexity limit, beyond which a new way of thinking is needed AND, as I conjecture, NATURE itself needs a new way of interacting in order not to see chaos. So I am proposing a connection between chaotic systems, self-organisation and the laws of physics.

After all, condensed matter physics IS a field of complexity, where analogies exist already. There is more if you look at social and economical systems. If you see it the way I do, we aren't talking about accidental analogies; I think it is the SAME fundamental laws, just working at different scales.

I myself do not lack data. But whether you can build on accessible data depends on your understanding and interpretations. HEP itself is the paradigm in which the inferential system is always a classical macroscopic laboratory, where we also tend to consider perturbations only. This abstraction fails badly if you picture an inside observer or cosmological observations. So no matter how successful QM and QFT are, you can't consistently apply that paradigm to the general case. This was my understanding of the OT.

/Fredrik
 
  • #71
Fra said:
2) Popper's simplistic abstraction of the scientific process is not adequate, because it puts all the focus on the falsification event and does not elaborate on the method behind hypothesis generation - in the CONTEXT of evolving scientific knowledge.

I don't think there need to be any constraints on hypothesis generation beyond falsifiability.

It may be helpful to articulate approaches that have a track record of fruitfulness, BUT falsifiability (or lack thereof) is really the only thing that can be used to arbitrate between competing hypotheses when hard data to test them is lacking. Things like Occam's Razor (and other razors) are heuristic preferences that may be useful in many contexts, but can't really be applied consistently across different scientific disciplines, because a general and rigorous definition of "simplicity" is lacking. (Is positing multiple universes simpler than one?)

The human mind is a powerfully creative and beautiful thing. There should be no constraints in the scientific method on what it does in the process of generating hypotheses. There only need be constraints on how the method arbitrates between competing ideas.
 
  • #72
Demystifier said:
One approach to dealing with interpretations of quantum mechanics is instrumentalism, also known as logical positivism. According to this doctrine, only measurable things are meaningful and therefore physical. All other concepts such as reality, ontology, hidden variables or many worlds, i.e. things which are supposed to be there even if we don't measure them, are not measurable and hence are meaningless.

There are many physicists who claim to think that way, but is there any living physicist who really thinks that way? In other words, is there any physicist who is consistent in such a way of thinking, without ever thinking in terms of concepts which are supposed to be there even when we don't measure them? In my experience, there is no such physicist.

I take care to make a distinction between how someone operates when doing science and how someone operates when arriving at conclusions and making decisions in other areas of life. If leftovers in the fridge smell bad, a logical positivist may well toss them out as likely spoiled without proof positive that the leftovers really are spoiled, because the risks of eating them exceed the benefits.

There are other areas where conclusive proof (at the level of positivism) may be unlikely to ever be available. This is my view (as a ballistics expert who has studied the matter carefully and peer-reviewed scholarly papers on the subject) on the Kennedy assassination regarding whether Oswald acted alone. Because there is not convincing evidence of co-conspirators, a preference for Occam's razor tends to suggest a lone actor. However, since negatives are hard to prove in historical events (no conspirators) I can't say all the conspiracy theories are convincingly disproven. But there is enough unexplained evidence that I'm not sure I could criticize a logical positivist as being inconsistent if they favored a conspiracy theory, because I regard logical positivism as applying to science (natural law) rather than history (what happened in the past).

Finally, even within the bounds of science, my view is that logical positivism only requires consistency with regard to what is considered to be "true" according to the scientific method. There is no need to reject constructs which may simply be regarded as computational conveniences while maintaining neutrality regarding whether those constructs represent physical reality. Thinking about ideas as computational conveniences does not violate logical positivism; only regarding these ideas as representing physical reality raises the bar on the evidence needed.
 
  • #73
When I read or hear about such subjects, I see not so much opposing views as opposing clans. Perhaps physics needs all of them.

Fra said:
The insight is that reductionism works only up to a certain complexity limit, beyond which a new way of thinking is needed AND, as I conjecture, NATURE itself needs a new way of interacting in order not to see chaos.
The reductionists postulate that any study of chaos consists of splitting. Implicitly, one of the resulting parts will be a measuring device, and expectations of its states will give a new theory linking the other data to the measurements - not the converse. This approach is universal. Is it trivial? Not at all; there is solid maths behind it, beginning e.g. with category theory. Is it universally efficient? Probably not. But it is another tool for diggers. We can imagine a theory of splitting into parts which become dependent, in the sense that each one may, under conditions, measure the other ... It seems that I already know a famous one ...
 
  • #74
We might misunderstand each other here, I am not sure, as this is subtle. I am also not just talking about human imagination; I am suggesting this also reflects how nature works.
Dr. Courtney said:
I don't think there needs to be any constraints on hypothesis generation beyond falsifiability.
In a sense I agree, but in another sense I do not.

The constraints I envision are not fundamental constraints, they are emergent constraints. I see them as observer-dependent; they emerge with the scale of self-organisation. You can also see the constraints as saying that investment in testing all hypotheses must be made according to the expected benefit. If this is not done, one can easily see that as complexity increases, the random walker will simply get lost and never find its way back home. So what happened? Well, the random walker is using a MAP that is not scaled properly. The map is too large!

Just to make an analogy here: suppose we want to understand the logic and origin of life. What you say is that the only necessary trait of an organism is its mortality. I.e., all we require from a lifeform is its ability to be vulnerable and die.

Well, to understand evolution we need a little more than that. Organisms must be able to mutate, and do so in a controlled manner to preserve stability. Mutations do take place randomly, but not at the lowest reductionist layer; rather, relative to the organism's own prior state. This is why we have stability. An evolutionary mechanism that fails to explain the stability of development is not of much use.

There is a similar view in the inferential perspective that I apply to physics and the evolution of physical law (which I envision as dual to the evolution of physical species - i.e. standard model particles, for example).

So falsifiability alone is not "wrong", it's just inefficient, and I am sure it does not reflect how nature organises at low energy. I think we need to take into account the additional emergent structures that, in my view, encode the hypothesis generators.

/Fredrik
 
  • #75
Fra said:
The constraints I envision are not fundamental constraints, they are emergent constraints. I see them as observer-dependent; they emerge with the scale of self-organisation. You can also see the constraints as saying that investment in testing all hypotheses must be made according to the expected benefit. If this is not done, one can easily see that as complexity increases, the random walker will simply get lost and never find its way back home. So what happened? Well, the random walker is using a MAP that is not scaled properly. The map is too large!

Aah, I see. You want a method to discern which hypotheses are most worthy of being tested.

I doubt the possibility of constructing a truly objective method of picking the more likely candidates for winning hypotheses prior to testing them. Right now, it's a hodgepodge approach of experimenter discretion and funding processes. This approach has served science well for hundreds of years.

A lot of my success as an experimentalist has been in reading the menu right to left - testing hypotheses not on the basis of their worthiness by some objective criteria, but based more on my ability as an experimenter to test them with available resources.

For example, of all the fish in the sea (literally) in which one could test the hypothesis of magnetoreception, colleagues and I picked four species. Whether or not they were likely to demonstrate magnetoreception was a secondary consideration. The primary consideration was that we had easy access to large numbers of these species so that a statistically significant experiment could be completed in a reasonable time period with available resources. The hypothesis of magnetoreception was supported in 3 of the 4 species. Other, more interesting hypotheses relating to specific physical mechanisms of magnetoreception in each species are harder to test. These hypotheses are probably more worthy of being tested, but being experimentally considerably more difficult, no one has done them yet.

So even if there were objective criteria regarding the relative worthiness of a large group of hypotheses, real experimenters will always be exercising subjective judgments to balance relative worthiness against relative costs. My colleagues and I have learned to keep our eyes open for interesting hypotheses that fall into our laps or appear as low-hanging fruit. But the low-hanging fruit for one experimenter may not be low-hanging fruit for another.
 
  • #76
Fra said:
I myself do not lack data. But whether you can build on accessible data depends on your understanding and interpretations. HEP itself is the paradigm in which the inferential system is always a classical macroscopic laboratory, where we also tend to consider perturbations only. This abstraction fails badly if you picture an inside observer or cosmological observations. So no matter how successful QM and QFT are, you can't consistently apply that paradigm to the general case. This was my understanding of the OT.

/Fredrik
Sure, you need theory to know what (and also how!) to measure interesting things. It's always an interrelation between theory and experiment that is much more complicated than the oversimplified view that scientists have a hypothesis that is falsified by experiments. It's however also important to remember to build theories that make predictions that can in principle be falsified. If the experiments confirm your theory, it's of course also great ;-)).

It's also true that in HEP it's not always the ever higher energies that promise the most interesting progress but also ever higher precision!

What I don't understand is the above quoted paragraph. Of course, you need a scientific (not philosophical!) interpretation of the theories you want to apply to describe a given observational situation. In the case of QT that's for me the minimal interpretation, i.e., after having formulated the mathematical structure, it's Born's rule, and only Born's rule. In my opinion there is no other "meaning" of the quantum-theoretical state (represented by a statistical operator in the formalism) than the probabilistic one formulated in Born's rule. There's no scientific content in answering the question "what's behind it". You may build some metaphysics or even religious beliefs in the sense of a worldview, but that's not science but personal belief.
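
For reference, the Born-rule content invoked here can be stated compactly in the statistical-operator form: for a state $\hat{\rho}$ and a measurement described by projectors $\hat{\Pi}_a$ onto the eigenspaces of the measured observable,

$$ p(a) \;=\; \mathrm{Tr}\!\left(\hat{\rho}\,\hat{\Pi}_a\right), \qquad \sum_a \hat{\Pi}_a = \hat{1}, $$

i.e. the state assigns probabilities to measurement outcomes and nothing more.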

Not only in HEP but in all of physics, after all, we use macroscopic apparatus to observe much tinier (quantum) but also much larger (astro/cosmology) objects, because that's what we finally can observe. Nevertheless the macroscopic apparatus also obeys the rules of quantum theory. Classical behavior is an emergent phenomenon of multidimensional many-body systems, and it's the classical aim of condensed-matter physics to understand this behavior from the microscopic theory; this program has been very successful in the last 80 years or so. Nowadays the observational tools are so refined that one can also study quantum behavior in macroscopic systems. There's no sharp "micro/macro" cut depending on system size.

It's also not clear to me what you mean by "inside observer". The observer doesn't play much of a role in an experiment. Nowadays it's usually a physicist (or rather large collaborations of very many physicists) who analyzes data stored by the various devices, which have more or less automatically taken the data. That's true for HEP as well as astronomy and cosmology. E.g., the LIGO/VIRGO gravitational waves were found by analysing data in huge data storage, not by direct interaction of the 1000 or so physicists who perform these experiments. So there is no direct interference whatsoever between these physicists and the signal itself, nor with the detectors when these data are taken, and to speculate about the influence of the consciousness of any human (or Bell's famous amoeba) on the meaning of the data is at best a good science-fiction story, but not science!
 
  • #77
My hunch is that part of the disagreement is simply misunderstandings.

One of the problems here is that it's hard to describe this in words, and that problem is indeed mine, as I am the one trying to convey some strange multilayer thinking, and indeed it's easily misunderstood. (And at this point I have no published work on this - I wish I had! The ambition is that it will come, but I decided a long time ago that I would not even consider publishing something that is immature enough to be misunderstood; this is why, at minimum, I would like to have explicit results before anyone would care to read it. I do not have the problem of having to publish papers continuously, so I will publish if and only if it's up to my own standards.)

But there is still plenty to discuss, as there is a lot of research by others that fringes upon this from different angles. But it's not the inconsistencies between these ideas we should focus on, but the abstractions that are the common denominator.
Dr. Courtney said:
Aah, I see. You want a method to discern which hypotheses are most worthy of being tested.
That's part of the story, yes. But WHY do I want to do this? That is the question!

It is not because I want to change research politics; it is because I see a much deeper point here. It is the key to the explanatory power of internal interactions. But we can apply this at different scales of complexity; I see this as an illustration, but it can easily be mixed up.

Note that sometimes we talk about inferences at the human scientist level or the social interaction level, and sometimes, at least, I talk about inferences at the physical (subatomic) level; sometimes at an intermediate complex-system level, i.e. complex physical systems, but no humans.

I see now that this is confusing. But my ambition here is to highlight that we can find a scale-invariant soft constructing principle here that is also a source of intuition and insight. There is a common abstraction that is the SAME in all cases. And if you see this as inference, the inferences are the same regardless of the underlying system, and are subject to the same evolutionary and self-organising mechanisms.

Dr. Courtney said:
I doubt the possibility of constructing a truly objective method of picking more likely candidates for winning hypotheses prior to testing them.
I fully agree. My point was not to find an OBJECTIVE explicit method. Objectivity is emergent only. I even claim the opposite: that an objective method is not inferable and therefore has no place in the arguments.

Dr. Courtney said:
But the low hanging fruit to one experimenter may not be low hanging fruit to another.

Now we are talking about the human scientist level:

This is exactly my point! No real disagreements here! The important and rational point is that each researcher will typically act according to this emergent constraint. The constraint is simply a betting rule: if the cost of reaching the high-hanging fruit exceeds the probable benefit, then going for the low-hanging fruit is the rational choice. A toy version of this rule is sketched below.
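
A minimal sketch of such a betting rule; the candidate hypotheses and numbers are invented purely for illustration, not taken from the discussion:

```python
# Toy betting rule: test the hypothesis with the best expected net benefit.
# All candidates and numbers below are made up for the sake of the example.
candidates = [
    {"name": "low-hanging fruit", "benefit": 2.0, "p_success": 0.6, "cost": 0.5},
    {"name": "high-hanging fruit", "benefit": 10.0, "p_success": 0.1, "cost": 4.0},
]

def expected_net_benefit(h):
    # Expected payoff of testing hypothesis h, minus the cost of testing it.
    return h["p_success"] * h["benefit"] - h["cost"]

best = max(candidates, key=expected_net_benefit)
print(f"rational choice: {best['name']} "
      f"(expected net benefit {expected_net_benefit(best):+.2f})")
```

With these made-up numbers the low-hanging fruit wins (0.6 × 2.0 − 0.5 = +0.7 versus 0.1 × 10.0 − 4.0 = −3.0), matching the prose: the nominally "worthier" hypothesis loses once the cost of testing it is priced in.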

And this - from another point of view - EXPLAINS why a particular researcher acts irrationally from another perspective. In the rational-player model there exists no objective measure. Instead each player has its own measure, and the INTERACTION between them is what produces objective consensus.

Now I switch to talking about physical-level inferences:

See what I am getting at? The subjectively emergent constraints, encoded in a subsystem, have the potential (I say potential here just to be humble ;)) to EXPLAIN the exact nature of interactions, as seen from an external observer!

An example, to get back to HEP: the external observer is the classical laboratory with a bunch of scientists with no hair ;) The subsystems are subatomic entities "interacting". Thus, for the laboratory observer to "understand/explain/infer" the interactions (read: standard model), he needs to probe the system by perturbing it with prepared, say, "test observers" and see how they interact. From the interactions, mechanisms are inferred. But to connect back to the OT - I suggest that by taking the instrumentalist approach to its extreme, and attaching it to an arbitrary part of the universe and not JUST humans, we see an amazing possibility to understand unification at a deeper level. This also addresses many things, such as the emergence of inertia and mass. As in the inferential perspective "stability" must be explained by the inertia of underlying structures, this is also why the randomness is modulated by inertia, and why mutations that are NECESSARY for development do NOT destabilise the process.

To reply to one more expected objection: So do I think atoms compute probabilities and choose their actions consciously? NO. Computations are just random permutations in a system's own internal structure, but the key is to understand the structure in such a way that we can see that it is in fact a very rational and clever betting device. All this also suggests that the calculus needs to be scaled down to these computers. I have not completed this thinking yet, but I think the elements here must be constructed from distinguishable states, and that will become a discrete space. So the continuum models have no place here, and for this reason there will also be no divergences. I always felt that, in terms of information, the continuum embedding is confusing. Continuum physics will have to be explained as a large-complexity limit. So in a way time evolution is like a random computation process that self-organises, in a way such that there exists no immutable external view.

Do you see the vision, and the essence of my "interpretation"?

If not, I think I have said more than enough already. I also learn how differently we all think, which is a good thing, but this is yet another argument against premature publication. As a piece of reasoning gets complicated enough, it's very hard for anyone to follow; this is why only the results matter. Only once the result is clear will interest in the how and why come. This is also natural; I work the same way. This is why I am only moderately interested in string theory, for example.

/Fredrik
 
  • #78
In addition to the explanations in the previous post...
vanhees71 said:
What I don't understand is the above quoted paragraph. Of course, you need a scientific (not philosophical!) interpretation of the theories you want to apply to describe a given observational situation.
Agreed! :)

But we aren't doing the actual science in this thread; we are just discussing things, like ways of reasoning, interpretations and approaches. And as I wrote to Dr. Courtney, the fact that I had several complexity scales in mind at once causes confusion. My inability to explain this clearly is my fault. But the ambition was to phrase it so that it applies both to the scientific process and to the physical interaction process, as my conjecture is that they are the same - except that there are a couple of orders of magnitude of difference in complexity. The common abstraction is acting upon uncertain information. This leads to an information-processing and gambling-game picture that I conjecture applies both to science and (more importantly) to particle physics.
vanhees71 said:
There's no scientific content in answering the question "what's behind it". You may build some metaphysics or even religious beliefs in the sense of a worldview
You mean like other metaphysical geometrical interpretations, such as the geometrical interpretation of GR and other gauge interactions?

I do not agree, except in one sense: it is not established science today, so in that way you are right. Your argument sounds like that of the archetype of an engineer, who uses established scientific knowledge as tools. My perspective and focus are quite different. I am more interested in the tools themselves, and in the process of creating scientific consensus from an inferential perspective.

What is your opinion of the scientific content of the geometrical models of physics? Imagine that Einstein had conjectured that the force of gravity is to be explained by the curvature of spacetime, in the context of a "geometric interpretation", but at first couldn't figure out HOW to "couple" geometry to matter. This is not unlike what we discuss here. Indeed, it is not "established" science until this strange idea proves to agree with predictions. But by similar arguments, there is no scientific meaning in interpreting things as geometry. Nevertheless this has been a successful guide to understanding more interactions.

By analogy, I am just conjecturing that ALL forces can be explained by interactions between subjective processing agents (these are the inside observers), and these processing agents can be particles. What encodes the particular interaction is the WAY the agents process and act upon information.

The reason I talk about different complexity levels is that the same abstraction also applies to human interactions. This is the beauty of the "interpretation". Just as the beauty of geometrical interpretations is that totally different systems can be described by the SAME geometry, and THUS a system's interaction properties can be learned from studying OTHER systems with the same abstractions.

But again, we are not doing the real science in this thread. Yes, there are underlying mathematical models for this, and yes, predictions should come out of them. But they aren't likely to be posted in this thread.
vanhees71 said:
It's also not clear to me what you mean by "inside observer". The observer doesn't play much of a role in an experiment.
See my other post. There are "inside observers" at many scales.

An Earth-based lab/detector doing cosmological observations is an inside observer.
An electron interacting with the nucleus is an inside observer of the atom.
A quark interacting with other quarks is an inside observer.

A human-based laboratory observing all the apparatus in an accelerator is for all practical purposes NOT an inside observer. In a way one can think of the accelerated particles as template inside observers fired into the target, but the process is still observed from a dominant environment. This is why we can, with certainty, objectively measure any scattering process. This is also WHY you rightfully say that the observer does not play a role - because the description is incomplete. You cannot have an outside description, but observers can describe other observers from ANOTHER inside view.

See the asymmetry? You might not be a fan of Lee Smolin, but check out his "cosmological fallacy" and the failure of the "Newtonian paradigm", and the arguments that explain the "unreasonable effectiveness of mathematics". Note that, despite the name, even QFT fits into the Newtonian paradigm.

All this talk is just in order to get clues to the open questions in physics. If we are happy with understanding atomic physics, then this is moot. But I am motivated to understand the workings of the universe at ALL complexity levels. And some of my insights into the failure of reductionism come from trying to understand biological systems as well. Some explanatory models immediately get lost in chaotic dynamical systems, so either no more progress is possible, or a new explanatory method is needed. If you want to model the chemical processes in a single cell, you end up realizing that you need to model the whole yeast population, as the environment of the cells is MADE up of OTHER cells.

The analogy to physics is IMO striking.

Just to stay on topic - I feel I have typed too much! I sincerely think that this interpretation of mine is like an extreme form of scientific instrumentalism, on steroids maybe, and this is why I started to mention it. From my perspective I think MY interpretation is the most minimal one, as I insist on a consistent inferential perspective. And a classical observer actually immediately breaks this.

/Fredrik
 
