What Makes Ontology Easy for Kids but Challenging for Quantum Physicists?

  • #151
vanhees71 said:
In physics all you can decide is which observable(s) you want to observe and how to construct a measurement device to do so. Then you can model this setup within QT and test the probabilistic predictions against your experimental data on ensembles.
Are you saying that a physicist uses QT predictions to set up an experiment that proves the QT predictions are correct?
 
  • #152
Of course, to test QT you need to use QT to set up a corresponding experiment. If you want to test one theory against another, in our context most interestingly any local deterministic hidden-variable theory a la Bell against Q(F)T, you have to use both theories to set up your experiment. It's the great merit of Bell's idea that it enables scientists rather than philosophers to decide between these two possible descriptions of Nature by observational facts rather than the prejudices of some random philosophers. The result is clearly in favor of Q(F)T rather than the prejudices of EPR (where Einstein himself didn't like this unjustly famous paper very much).
 
  • #153
vanhees71 said:
you decide to gamble at the casino or not, but this has nothing to do with physics. In physics all you can decide is which observable(s) you want to observe and how to construct a measurement device to do so. Then you can model this setup within QT and test the probabilistic predictions against your experimental data on ensembles.
I appreciate your clean approach, but the problem I see is that the prerequisites/machinery required for constructing a classical measurement device and collecting solid ensembles of data fail for the cosmological inside perspective. I.e. the data processing resources required (in both memory and time) exceed the time scale on which the environment changes.

This is why QT is fine for describing subatomic physics from the lab perspective, but not for cosmology or perhaps some unification questions. As long as we talk about interpretations of QM or QFT as a description of subatomic events, all is fine, except for a delicate fine-tuning problem. The problem is that when science is no longer descriptive but becomes an actor in the world, it relates to self-organisation, and even the scientific process contains elements of gambling, such as choosing the best abduction from input. There is no right choice; you can only try, and the winning agent gets to persist.

/Fredrik
 
  • #154
CelHolo said:
As far as I can tell all progress has been away from classical-like theories like BM.
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success - Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).

Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.

What else is there? The SM is essentially phenomenology; everything that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.
CelHolo said:
I really think all indications are that progress will continue on these lines, as we see already in QG research with black hole complementarity, dualities and so on.
I think this line is a dead end.
 
  • #155
AndreiB said:
They use a Bayesian concept of probability. It's an agent's degree of belief that event X will happen.
There are two variants. Yours is the subjective one. I prefer the objective one. It is the degree of belief which is justified in a rational way given the information which is available.

There are important differences between the two. Namely, if you have no information about a die, subjective probability is free to postulate something completely arbitrary, while objective probability has only one choice, 1/6 for each number, given that there is no information which makes a difference between the different numbers.

In general the state of no information requires the maximum entropy distribution. So, based on this, one can justify both thermodynamics and (in Caticha's entropic dynamics) quantum theory.
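As a minimal illustration (the die is only a toy example here, not something taken from Caticha's papers): with no information beyond normalization, maximizing the Shannon entropy
$$H(p) = -\sum_{i=1}^{6} p_i \ln p_i \qquad \text{subject to} \qquad \sum_{i=1}^{6} p_i = 1$$
gives 1/6 for each face; any additional information enters as further constraints and shifts the maximum away from uniformity.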
AndreiB said:
I think the term "ontic" is not used here in the sense that the theory has some ontology. "Ontic" interpretations are those where the quantum state itself is part of the ontology.
Not necessarily. Caticha's entropic dynamics has a well-defined ontology (the configuration space), so I would call it ontic, but the wave function is not part of the ontology; it is epistemic. QBism or Copenhagen have, instead, no ontology.
 
  • #156
Sunil said:
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success - Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).
Bell's results and the experiments are independent of the interpretation. He may have come to them thinking in terms of BM, but the results are independent of it. As far as I can tell that is still the only useful thing about BM.
Sunil said:
Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.
Maybe there is a reason for that.
Sunil said:
What else is there? The SM is essentially phenomenology; everything that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.
 
  • #157
martinbn said:
Bell's results and the experiments are independent of the interpretation. He may have come to them thinking in terms of BM, but the results are independent of it. As far as I can tell that is still the only useful thing about BM.
It's the main progress in fundamental physics. Compared with the zero coming from other interpretations, not that bad.
martinbn said:
May be there is a reason for that.
No, it is simply ignorance. The whole psi-ontology community proves theorems about its impossibility; I don't think so badly of them as to believe they would continue to prove such theorems if they knew an explicit counterexample.
 
  • #159
Sunil said:
It's the main progress in fundamental physics. Compared with the zero coming from other interpretations, not that bad.
My point is that this is a result that is independent of interpretations. It's like saying that since D. Deutsch thinks in terms of a many worlds interpretation, all of his work on quantum computing is to the credit of the many worlds interpretation. Not that bad.
Sunil said:
No, it is simply ignorance. The whole psi-ontology community proves theorems about its impossibility; I don't think so badly of them as to believe they would continue to prove such theorems if they knew an explicit counterexample.
I am sure you have done this before, but can you give the reference again? I am curious now.
 
  • #161
martinbn said:
My point is that this is a result that is independent of interpretations.
Sure, but the same can also be said about all other important results that were obtained from a standard Copenhagen/orthodox/statistical-ensemble/non-ontic point of view. So it all boils down to the question of which interpretation makes thinking easier. And of course, this question does not have a universal answer. It depends on the specific problem, but also on the personality of the physicist. It may be true that most physicists find a non-ontic way of thinking easier than an ontic one, but that does not necessarily mean that the non-ontic way of thinking is "better" or "closer to truth".
 
  • #163
martinbn said:
My point is that this is a result that is independent of interpretations.
The main intention of the proof was to get rid of the non-locality argument against BM, by showing that all reasonable interpretations have to be non-local. That looks a little bit closer than your example:
martinbn said:
It's like saying that since D. Deutsch thinks in terms of a many worlds interpretation, all of his work on quantum computing is to the credit of the many worlds interpretation.
But I agree with your main point, the result itself is interpretation-independent.
martinbn said:
I am sure you have done this before, but can you give the reference again? I am curious now.
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory. J. Phys. A 44, 225303, arXiv:1005.2357.

Schmelzer's unpublished variant arXiv:1906.00956 may be of interest because it addresses the conflict with psi-ontology theorems directly. Moreover, he identifies the other degrees of freedom Y, which are left unspecified by Caticha, with the usual configuration space outside the system, so that this reduces the ontology even more, to simply the standard classical configuration space.
 
  • #164
martinbn said:
Or which part was first done with BM, before it was done with orthodox QM?
In my recent work https://arxiv.org/abs/2010.07575 I first solved the problem intuitively with BM and then translated the results into the orthodox form. But in the paper itself we presented the logic in the reverse order.
 
  • #165
Demystifier said:
Ontology is the easiest and the hardest concept in the field of quantum foundations.
...
The question for everybody: How to explain the meaning of the word "ontology" such that even a mature orthodox quantum physicist can understand it?
I don't think it's a case of some people understanding the term 'ontology' and others not. I think it's more a disagreement about what is meaningful and what is not; about what the purview of scientific inquiry is and what is not.

Something I believe everyone can agree on is that existence is self-evident. Ontology then is simply the nature of existence or the nature of that which exists.

Ontology is what was present at the big bang and the formation of stars and galaxies, before there were observers.
 
  • #166
Sunil said:
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success - Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).

Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.

What else is there? The SM is essentially phenomenology; everything that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.

I think this line is a dead end.
But Bell's work is completely independent of Bohmian mechanics. It's a mathematical theorem showing that the probabilities predicted by QT (in any interpretation, which doesn't change the physics content, i.e., the predictions of minimally interpreted QT) have a property that distinguishes them from the probabilities predicted by any local deterministic hidden-variable theory.
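For concreteness, the standard CHSH form of such an inequality (a common modern formulation, not necessarily the exact expression Bell first wrote down) bounds the correlations of any local deterministic hidden-variable theory by
$$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2 ,$$
while QT predicts values up to ##2\sqrt{2}## (the Tsirelson bound) for suitably chosen settings; this difference is exactly what the experiments test.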

Of course the SM is strictly based on phenomenology. Its discovery is a paradigmatic example of the interplay between theory and experiment. That's why it is so successful. Speculations of a philosophical kind have never brought much progress in our understanding of Nature.

Bell's work is another paradigmatic example of this: all the quibbles about purely philosophical issues like the EPR paper and Bohr's answer to it brought no progress until Bell found a way to formulate them as a clear-cut, scientifically decidable question, i.e., whether or not the Bell inequality is violated in the real world. As is well known, it is violated (and to an amazing confidence level!) and the predictions of QT are confirmed (at the same amazing confidence level). Indeed, that's the only progress all this philosophizing has brought for science. Admittedly it was a great one, leading to the development of all the current "quantum technology" and putting us in the midst of the "2nd quantum revolution" ;-)).
 
  • #167
CelHolo said:
As far as I can tell all progress has been away from classical-like theories like BM.
There has been progress? :oops:

/Fredrik
 
  • #168
vanhees71 said:
But Bell's work is completely independent of Bohmian mechanics. It's a mathematical theorem showing that the probabilities predicted by QT (in any interpretation, which doesn't change the physics content, i.e., the predictions of minimally interpreted QT) have a property that distinguishes them from the probabilities predicted by any local deterministic hidden-variable theory.
Nobody without an interest in the foundations of quantum theory, that is, in the philosophy you despise, would have proven such a theorem.
vanhees71 said:
Of course the SM is strictly based on phenomenology. Its discovery is a paradigmatic example of the interplay between theory and experiment. That's why it is so successful. Speculations of a philosophical kind have never brought much progress in our understanding of Nature.
It is, indeed, a paradigmatic example of phenomenological research. It works after those much more interested in philosophy have established the base - QED - and when there is technological progress which allows better experimental results to be reached.
 
  • #169
Sunil said:
QBism or Copenhagen have, instead, no ontology.
I disagree. In order for any theory to make predictions and provide explanations it needs to postulate "something". The QM postulates speak about a "system" that evolves in agreement with Schrödinger's equation. So, that system has to exist, otherwise the theory is useless. They also speak about measurements, so appropriate apparatuses need to exist as well. So, the theory has an ontology. I agree, however, that this ontology is not clearly spelled out.
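(For reference, the evolution postulate meant here is just the Schrödinger equation for the state of that system,
$$i\hbar\,\frac{\partial}{\partial t}|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle ,$$
which already presupposes that there is a system whose state it governs.)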

QBism also postulates rational agents that can have experiences, memories and so on. They also more or less postulate an external world which the agent can probe. But, just like in the case of orthodox QM, this ontology is not clearly spelled out. They refuse to say anything of substance about this external world even though they should.

I think this ambiguity in presenting the ontology helps them avoid falsification, so they are reluctant to clarify it.
 
  • #170
AndreiB said:
I disagree. In order for any theory to make predictions and provide explanations it needs to postulate "something".
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
 
  • #171
Sunil said:
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
Agreed. But they should define it in order for those interpretations to be logically coherent.
 
  • #172
Sunil said:
I think this line is a dead end.
Are you saying that AdS/CFT is a dead end? If yes, can you explain why?
 
  • #173
Sunil said:
Nobody without an interest in the foundations of quantum theory, that is, in the philosophy you despise, would have proven such a theorem.
Sure, good science can come from thinking about philosophical problems. It shows that if there are philosophical quibbles about science, the only way to solve them is to first translate the gibberish into a clear-cut scientific question decidable by experiment in a unique way. This has been done for the EPR quibbles (or rather for Einstein's much better formulated 1948 quibble about "inseparability", clarifying what he really wanted to say in the EPR paper). For me these quibbles are now solved for good. From a scientific point of view there's nothing more to be said, because the question is clearly decided in favor of Q(F)T rather than local deterministic hidden-variable theories.
 
  • #174
@Demystifier: AdS/CFT may be nice mathematics, but what else? My position is similar to the one against string theory in general: a lot of very intelligent people have worked on it for many years, and what are the results?

AndreiB said:
Agreed. But they should define it in order for those interpretations to be logically coherent.
Copenhagen is quite vague about this. There is some classical part, where usual common sense works, but what is real for the quantum system is not clear at all.

vanhees71 said:
Sure, good science can come from thinking about philosophical problems. It shows that if there are philosophical quibbles about science, the only way to solve them is to first translate the gibberish into a clear-cut scientific question decidable by experiment in a unique way. This has been done for the EPR quibbles (or rather for Einstein's much better formulated 1948 quibble about "inseparability", clarifying what he really wanted to say in the EPR paper). For me these quibbles are now solved for good. From a scientific point of view there's nothing more to be said, because the question is clearly decided in favor of Q(F)T rather than local deterministic hidden-variable theories.
That's funny, given that nobody cared about QFT or deterministic hidden-variable theories (there were none to be considered). The very point, namely that nonlocality is not an argument against BM, has not been accepted even today.
 
  • #175
Sunil said:
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
For me, "agent" is just a label for the abstraction of an "inside observer" that seems like a minimal and a mandatory and modest central starting point in an inference centered approach. The "agent" thus both encodoes and puts constraints in both memories and inferences.

About the question of what "substance" the agent is made of: it seems obvious that it must be made of normal matter. But the question one attempts to answer is rather HOW "normal matter" is constructed and how it interacts with other matter, i.e. to explain and classify interactions. Here the agent notion is an abstraction only, similar to abstractions such as geometry.

In order to solve the problem of the starting point, or ontology, the only solution I have found is to release oneself from the preconception that there has to be a fundamental ontology from which all else is explained. There is a problem with that view. Instead, perhaps we can imagine emergent and evolving relations, where one can at best identify "effective ontologies". This thinking IMO unifies ontology and epistemology; neither of them is fundamental, they are rather entangled and evolving. Thus the "ontology" is similar to an initial condition. What is important is how one ontology evolves into the next.

I agree this is fuzzy, but I do not think it is going to get simpler. I do not see any reason to expect there is a fundamental ontology at all. But there ARE effective ontologies, and we need them.

/Fredrik
 
  • #176
Fra said:
For me, "agent" is just a label for the abstraction of an "inside observer" that seems like a minimal and a mandatory and modest central starting point in an inference centered approach. The "agent" thus both encodoes and puts constraints in both memories and inferences.
Usually this is unproblematic. The observer is simply big and complex enough.
Fra said:
But the question one attempts to answer is rather HOW "normal matter" is constructed and how it interacts with other matter, i.e. to explain and classify interactions. Here the agent notion is an abstraction only, similar to abstractions such as geometry.
And, moreover, it is not even necessary except as an emergent object in statistical theories.
Fra said:
In order to solve the problem of the starting point, or ontology, the only solution I have found is to release oneself from the preconception that there has to be a fundamental ontology from which all else is explained.
I don't get this point. If the theory is realistic, it starts with an ontology, so the starting point is the ontology. It works nicely without observers. If it is able, say, to predict how planets behave, this is already enough for having empirical tests by observing the planets. No need for having observers described by the theory, or having a developed psychology or so.
Fra said:
There is a problem with that view. Instead, perhaps we can imagine emergent and evolving relations, where one can at best identify "effective ontologies". This thinking IMO unifies ontology and epistemology; neither of them is fundamental, they are rather entangled and evolving.
This seems to create problems out of nothing. The ontology is fundamental, epistemology is a secondary, non-fundamental human problem.

Fra said:
Thus the "ontology" is similar to an initial condition. What is important is how only ontolgoy evolves onto the next?
Also not a problem. Some scientist develops a new theory, with a new ontology, and derives a classical limit which allows him to recover approximately the successful predictions of the old theory.
Fra said:
I agree this is fuzzy, but I do not think it is going to get simpler. I do not see any reason to expect there is a fundamental ontology at all.
The actual standard position is much simpler. So you need a quite serious justification to reject it.
 
  • #177
I am guessing our approaches are so different that we might not reach an agreement, but just to comment:
Sunil said:
Usually this is unproblematic. The observer is simply big and complex enough.
If by "usually" you refer to subatomic interactions, from the perspective of a classical lab, then I agree, except for subtle questions of fine tuning that arise during unification.

But generally, I think this "usually" is not good enough for many open questions, and there I am thinking about unification of GR+QM and cosmological models.
Sunil said:
And, moreover, it is not even necessary except as an emergent object in statistical theories.
I think the necessity only relates to the desire for increased explanatory value, fewer free parameters, and getting rid of fine tuning (which is an ugly trait).

Sunil said:
No need for having observers described by the theory, or having a developed psychology or so.

Sunil said:
epistemology is a secondary, non-fundamental human problem.
If you start associating these terms with their meaning for humans, I think we are missing the idea. In a way, everything we talk about is a "human problem", but that is missing the point. This is just like people who object to CI and think that human observers are required. (Of course human observers are required in a very superficial sense, to write down equations etc., but I think we understand that there is another layer; let's not confuse ourselves.)

Sunil said:
The actual standard position is much simpler. So you need a quite serious justification to reject it.
The standard paradigm seems to me improper for an intrinsic-inference approach. That is serious enough for me, at least. (As for why I think the optimal approach is the one of intrinsic inference: that is a separate subquestion we need not bring up here, and it relates to the philosophy of science and evolutionary learning.)

/Fredrik
 
  • #178
What do the human observers do except observing the spacetime reality (including one's brain process) event by event?
 
  • #179
AlexCaledin said:
What do the human observers do except observing the spacetime reality (including one's brain process) event by event?
When several interconnected neurons fire without apparent outside stimuli, you call that a dream (they do fire - usually when you are sleeping and your senses are less active or shut down). The brain interprets the random firing in a seemingly random way. This is why dreams are most often disjoint and make little sense. When the same neurons fire in connection with apparent outside stimuli, you call that 'reality' (normally, when you are awake). Sometimes they fire without an apparent outside stimulus and this brings up imaginary things during waking hours, or 'imagination'... Intelligent people use this brain feature to invent new ideas during contemplation and deliberation. Stupid people use this brain feature to concoct theories about vaccine microchips, chemtrail poisoning, reptilians and whatnot. With respect to brain neurology, you are not going to get much farther than this.
 
  • #180
AlexCaledin said:
What do the human observers do except observing the spacetime reality (including one's brain process) event by event?
They react and take actions based on their best understanding, for their own benefit.
This helps FORM the reality for all other humans. So even a human is an actor.
Science brings us technology that helps us exploit nature more efficiently! We just can't help it ;-)

So the interacting-agent analogy can be applied to humans as well, but not at the level of physics. The presumed version of "agent" in QM that I have in mind puts feedback into the environment via physical interactions. But this is a more extreme version, and as far as I can tell from QBist advocates, a lot of QBists would probably not share this very radical view.

About rationality, that is also "relative". Obviously humans may seem irrational in many ways, but what may seem natural to the inside agent may turn out to be irrational judged from a different perspective. This is not a contradiction and does not make the idea invalid. I do not subscribe to some rationality constraint; on the contrary, I think evolution will solve this. All we need is variation and competition.

People also have similar objections to the economic theory of rational consumers. But for the individual, disregarding emotions is not natural. Of course emotions influence decisions. This is also a form of "rationality" in a natural sense AFAIK. In human decision-making even emotions have survival value. Emotions can guide you when time for contemplation is not at hand, like when a lion is coming at you. The response here is not driven by analysis, it's driven by fear. And this is in fact "rational" from the evolutionary perspective.

/Fredrik
 
