Quantization isn't fundamental

In summary, the paper "Are Particles Self-Organized Systems?" by V. Manasson discusses the idea that elementary particles can be described as self-organized dynamical systems, and that their properties such as charge and action quantization, SU(2) symmetry, and the coupling constants for the strong, weak, and electromagnetic interactions can be derived from first principles. The author also suggests that quantum theory may be a quasi-linear approximation to a deeper theory describing the nonlinear world of elementary particles. While the specific model presented in the paper may have some flaws, the approach of reformulating the axioms of quantum theory based on identifying its mathematical properties is thought-provoking and warrants further exploration.
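As a rough illustration of the sort of "first principles" numbers involved, the couplings are tied to the Feigenbaum constant. The ladder below is one reading of the paper rather than a quotation of it, so treat the non-electromagnetic rows as indicative only; only the 1/(2πδ²) ≈ 1/137 entry is the headline result.

[CODE=python]
import math

# Back-of-the-envelope check of the kind of numbers the paper claims.
# ASSUMPTION: the couplings follow the ladder alpha_n ~ 1/(2*pi*delta**n),
# with delta the Feigenbaum constant; only the n = 2 (electromagnetic) entry
# is the paper's headline result, the other labels are indicative.
delta = 4.669201609  # Feigenbaum's delta

for n, label in [(0, "strong-like"), (1, "weak-like"), (2, "electromagnetic")]:
    alpha = 1.0 / (2 * math.pi * delta ** n)
    print(f"n={n} ({label:15s}) alpha ~ {alpha:.5f}   1/alpha ~ {1/alpha:.1f}")

# n = 2 gives 1/alpha ~ 137.0, i.e. the fine-structure constant. Extending the
# same ladder one step "up" is what produces the "~delta times stronger than
# the color force" gravity coupling objected to later in this thread.
[/CODE]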
  • #36
stevendaryl said:
Bell's theorem doesn't make any assumptions about whether the dynamics is self-organizing, or not.
Bell's theorem assumes the absence of superdeterminism. I wonder, could perhaps self-organization create some sort of superdeterminism? In fact, I think that 't Hooft's proposal can be understood that way.
 
  • #37
This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.

Just observing that families of particles seem to "bifurcate" when we look at their various properties seems too tenuous a reason to apply dissipative reasoning.
 
  • #38
Auto-Didact said:
QM itself can essentially be viewed as a non-local theory, this is what Bell's theorem shows.

Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.
 
  • #39
DarMM said:
Fine tuning has long been used for both initial condition tuning and parameter tuning, I don't think parameter tuning has any special claim on the phrase. Besides it's standard usage in Quantum Foundations to refer to this as "Fine-Tuning" and I prefer to use terms as they are used in the relevant fields.
I don't doubt that, but I think you are missing the point that the other usage of fine tuning is old, centuries old. Newton himself even used the same fine tuning argument to argue that the three body problem was insoluble due to infinite complexity and that therefore the mechanistic universe must be the work of God. The same arguments were and are still being used in biology since Darwin to this very day.

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
DarMM said:
Well a simple toy model that shows a huge amount of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model, there are much more developed ones, but you can see the basic fact of how easy it is to replicate a huge amount of QM, except for entanglement. Which is why entanglement is the key feature one has to explain.
I understand that this toy model is or may just be some random example, but I seriously think a few key points are in order. I will start by making clear that my following comments are regarding mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between these kind of theoretical models and a theoretical model like Manasson's. This can be seen at multiple levels:
- First, the easiest way to spot this difference is to compare the underlying mathematics of the old and new models: the mathematics of this new model (causal discovery analysis, a variant of root cause analysis) is very close to the underlying mathematics of QM, while the mathematics underlying Manasson's model is almost diametrically opposite to the mathematics underlying QM.
- The second point is a new model's focus - driven by its underlying mathematics - on either accuracy or precision: similar underlying mathematics between models tends to lead quickly to good precision without necessarily being accurate, while a novel model based on completely different mathematics - one still capable of reproducing the results of an older model - initially has to focus on accuracy before focusing on precision.
- The third - and perhaps most important - point is the conceptual shift required to go from the old model to the new; if, apart from the mathematics, the conceptual departure from old to new isn't radical, then the new model isn't likely to be able to go beyond the old. This is actually a consequence of the first and second points: a model that differs only slightly yet has high precision is easily constructed in full, which implies low accuracy and therefore easy falsification. On the other hand, it is almost impossible for hugely different models to lead to similar consequences, meaning both models are accurate, with the older typically being more precise than the newer, at least until the newer matures and either replaces the old or gets falsified.

To illustrate these points even further we can again use the historical example of going from Newtonian gravity to Einsteinian gravity; all three points apply there quite obviously; I won't go into that example any further seeing there are tonnes of threads and books on this topic, e.g. MTW's Gravitation.

What I do need to say is that the above mentioned differences are important for any new mathematical model of some empirical phenomenon based in scientific reasoning, not just QM; I say this because there is another way to create a new mathematical model of an empirical phenomenon, namely by making an analogy based on similar mathematics. A (partially) successful new model using an analogy based on similar mathematics usually tends to be only incrementally different or evolutionary, while a successful new model based on scientific reasoning tends to be revolutionary.

Evolution of a model merely requires successful steps of cleverness, while revolution requires nothing short of genius and probably a large dose of luck, i.e. being in the right place at the right time. This is the problem with all psi-epistemic models; they are practically all incrementally different or a small evolution in terms of being mathematically cleaner than the old model - which is of course why they are a dime a dozen. It takes hardly any mathematical insight or scientific creativity at all to make one. For new QM models, this is because such models tend to be based in probability theory, information theory, classical graph theory and/or linear algebra. These topics in mathematics are, in comparison with say geometry or analysis, relatively "sterile" (not quantitatively in applications but qualitatively in mathematical structure).

All of these critique points w.r.t. theorisation of empirically based scientific models do not merely apply to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so many such models and practically none of the other kind; making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.

When I said earlier that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models because they tend to be nowhere near even interesting enough to consider seriously as a candidate that will possibly supersede QM. Sure, such a model may even almost directly have way more applications; that however is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.

Make no mistake; the foundational goal of reformulating QM based on another model is not to find new applications but to go beyond QM; based on all psi-ontic attempts so far this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training; one can almost literally blindly open any textbook on statistics, decision theory, operations research and/or data science and find some existing method which one could easily strip down to its mathematical core and try to construct an incrementally different model of QM.

So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
 
  • #40
nikkkom said:
This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.
It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.
nikkkom said:
Just observing that families of particles seem to "bifurcate" when we look at their various properties seems too tenuous a reason to apply dissipative reasoning.
Bifurcating particle taxonomy isn't the reason to apply dissipative reasoning; rather, the existence of virtual particles, grounded in the Heisenberg uncertainty principle, is.

The very concept of virtual particles implies an open i.e. dissipative system, and therefore perhaps the necessity of a non-equilibrium thermodynamics approach a la [URL='https://www.physicsforums.com/insights/author/john-baez/']John Baez.[/URL]
Lord Jestocost said:
Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.
Your conclusion is incorrect. If local hidden variables can not reproduce QM predictions, non-local hidden variables might still be able to, i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying again that QM - or a completion of QM - is itself in some sense inherently non-local. This was indeed Bell's very own point of view.

None of this is new; it is well known in the literature that entanglement is, or can be viewed as, a fully non-local phenomenon. Moreover, as you probably already know, there is actually a very well-known explicitly non-local hidden variable theory, namely Bohmian mechanics (BM), which fully reproduces the predictions of standard QM; in terms of QM interpretation, this makes BM a psi-ontic model which actually goes beyond QM.
 
  • #41
Auto-Didact said:
I don't doubt that, but I think you are missing the point that the other usage of fine tuning is old, centuries old...

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
The other usage is centuries old as well, going back at least to Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.

Auto-Didact said:
I understand that this toy model is or may just be some random example, but I seriously think a few key points are in order. I will start by making clear that my following comments are regarding mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between these kind of theoretical models and a theoretical model like Manasson's. This can be seen at multiple levels:
You're treating this like a serious proposal, remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial and it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc can be done quite easily and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.
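To make that concrete, here is a deliberately stripped-down sketch in the spirit of that toy model (this is my own illustration, not the model from the paper, and the observable names are just loose qubit analogues): a classical system with four ontic states plus a "knowledge balance" epistemic limit already gives discrete, repeatable outcomes together with measurement disturbance between incompatible observables.

[CODE=python]
import random

# Stripped-down illustration in the spirit of the toy model linked earlier
# (my own sketch, not the model from the paper): a classical system with four
# ontic states {1,2,3,4} plus a "knowledge balance" rule - an observer may
# know at most which *pair* the ontic state is in, never the state itself.

# Three maximally informative "observables", each a partition into two pairs
# (loose analogues of sigma_z, sigma_x, sigma_y for a qubit).
OBSERVABLES = {
    "Z": [frozenset({1, 2}), frozenset({3, 4})],
    "X": [frozenset({1, 3}), frozenset({2, 4})],
    "Y": [frozenset({1, 4}), frozenset({2, 3})],
}

def prepare(epistemic_state):
    """Sample an ontic state compatible with what the preparer knows."""
    return random.choice(sorted(epistemic_state))

def measure(ontic_state, observable):
    """Reveal only which cell the ontic state is in, then re-randomise the
    ontic state within that cell (this is what enforces the epistemic limit)."""
    for cell in OBSERVABLES[observable]:
        if ontic_state in cell:
            return cell, random.choice(sorted(cell))

random.seed(0)

# Repeatability: measuring Z twice in a row always agrees (discrete outcomes).
agree = disturbed = 0
for _ in range(1000):
    lam = prepare(frozenset({1, 2}))            # a "Z-up"-like preparation
    c1, lam = measure(lam, "Z")
    c2, lam = measure(lam, "Z")
    agree += (c1 == c2)

# Disturbance/complementarity: an intervening X measurement randomises a later
# Z outcome, even though the ontic dynamics is perfectly classical.
for _ in range(1000):
    lam = prepare(frozenset({1, 2}))
    c1, lam = measure(lam, "Z")
    _, lam = measure(lam, "X")
    c2, lam = measure(lam, "Z")
    disturbed += (c1 == c2)

print("Z then Z agreement:", agree / 1000)            # -> 1.0
print("Z, then X, then Z agreement:", disturbed / 1000)  # -> about 0.5
[/CODE]

What a sketch like this cannot do, no matter how you dress it up, is violate a Bell inequality, which is the whole point.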

Auto-Didact said:
All of these critique points w.r.t. theorisation of empirically based scientific models do not merely apply to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so much of such models and practically none of the other; making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.
There are fewer psi-epistemic models though, they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.

Auto-Didact said:
When I said earlier that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models because they tend to be nowhere near even interesting enough to consider seriously as a candidate that will possibly supersede QM. Sure, such a model may even almost directly have way more applications; that however is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.

Auto-Didact said:
Make no mistake; the foundational goal of reformulating QM based on another model is not to find new applications but to go beyond QM; based on all psi-ontic attempts so far this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training
Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.

Auto-Didact said:
one can almost literally blindly open any textbook on statistics, decision theory, operations research and/or data science and find some existing method which one could easily strip down to its mathematical core and try to construct an incrementally different model of QM.
I don't think so, again not in light of the PBR theorem.

Auto-Didact said:
So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
This is what I am saying:
  1. Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy
  3. However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
One of the points in my previous posts tells you that I can't give you what you're asking for here because it has been proven not to exist, all realist models require fine tunings. That's actually one of my reasons for being skeptical regarding these sort of models, we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics, however we know now that these features of Bohmian Mechanics are general to all such models.

The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.

Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
 
  • #42
Auto-Didact said:
> This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.

It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.

It did this for color force, here:

[attached image: w.png]


Why should the same not apply to the "next next level" of gravitons?
 

  • #43
nikkkom said:
It did this for color force, here:

[attached image: w.png]

Why should the same not apply to the "next next level" of gravitons?
The question is 'why should it'? You seem to be reading this particular bit without controlling for your cognitive expectation bias, i.e. because quantization of gravity is a standard hypothesis in many models, you are assuming that it is also a hypothesis of this model.

It is pretty clear that this model is compatible with either hypothesis w.r.t. gravitation. That is to say, the model is completely independent of whether or not gravity should be quantized in the same manner as the rest of the forces in physics, i.e. following the standard form of quantization for particle physics.

This is bolstered by the fact that this is a phenomenological model, i.e. it is constructed only upon empirically observed phenomena. The form of quantization this model is attempting to explain is precisely the form known from experimental particle physics; no experiment has ever suggested that gravity is also quantized in this manner.

In contrast to common perception, the mathematical physics literature and the quantum gravity phenomenology literature actually give, respectively, very good mathematical and empirical arguments to believe that this hypothesis is false to begin with; this wouldn't necessarily mean that gravitation is not quantized at all, but that if it is, it is probably not quantized in exactly the same manner as the other forces, making any conclusion that it probably is at worst completely misguided and at best highly premature, because it is non-empirical.
 
  • #44
Auto-Didact said:
Your conclusion is incorrect. If local hidden variables can not reproduce QM predictions, non-local hidden variables might still be able to, i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying again that QM - or a completion of QM - is itself in some sense inherently non-local.

Bell's theorem might imply that a “non-local realistic theory” might predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
 
  • #45
Lord Jestocost said:
Bell's theorem might imply that a “non-local realistic theory” might predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
Non-local hidden variable theories are a subset of non-local realistic theories, i.e. this discussion is moot.

The non-locality of QM - i.e. the non-local nature of entanglement - has been in the literature since Schrodinger himself.
Aspect concluded in 2000 that there is experimental support for the non-locality of entanglement, saying
Alain Aspect said:
It may be concluded that quantum mechanics has some nonlocality in it, and that this nonlocal character is vindicated by experiments [45]. It is very important, however, to note that such a nonlocality has a very subtle nature, and in particular that it cannot be used for faster-than-light telegraphy. It is indeed simple to show [46] that, in a scheme where one tries to use EPR correlations to send a message, it is necessary to send complementary information (about the orientation of a polarizer) via a normal channel, which of course does not violate causality. This is similar to the teleportation schemes [47] where a quantum state can be teleported via a nonlocal process provided that one also transmits classical information via a classical channel. In fact, there is certainly a lot to understand about the exact nature of nonlocality, by a careful analysis of such schemes [48].

When it is realized that this quantum nonlocality does not allow one to send any useful information, one might be tempted to conclude that in fact there is no real problem and that all these discussions and experimental efforts are pointless. Before rushing to this conclusion, I would suggest an ideal experiment done in the following way is considered (Fig. 9.17): On each side of the experiment of Fig. 9.1, using variable analysers, there is a monitoring system that registers the detection events in channels + or - with their exact dates. We also suppose that the orientation of each polarizer is changed at random times, also monitored by the system of the corresponding side. It is only when the experiment is completed that the two sets of data, separately collected on each side, are brought together in order to extract the correlations. Then, looking into the data that were collected previously and that correspond to paired events that were space-like separated when they happened, one can see that indeed the correlation did change at the very moment when the relative orientation of the polarizers changed.

So when one takes the point of view of a delocalized observer, which is certainly not inconsistent when looking into the past, it must be acknowledged that there is nonlocal behaviour in the EPR correlations. Entanglement is definitely a feature going beyond any space time description a la Einstein: a pair of entangled photons must be considered to be a single global object that we cannot consider to be made of individual objects separated in spacetime with well-defined properties.
Referenced sources are:
[45] J.S. Bell, Atomic cascade photons and quantum-mechanical nonlocality, Comm. Atom. Mol. Phys. 9, 121 (1981)

[46] A. Aspect, Expériences basées sur les inégalités de Bell, J. Phys. Coll. C 2, 940 (1981)

[47] C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W.K. Wootters, Phys. Rev. Lett. 70, 1895 (1993)
D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Experimental quantum teleportation, Nature 390, 575 (1997)
D. Boschi, S. Branca, F. De Martini, L. Hardy, S. Popescu, Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels, submitted to Phys. Rev. Lett. (1997)
A. Furusawa, J.L. Sorensen, S.L. Braunstein, C.A. Fuchs, H.J. Kimble, E.S. Polzik, Unconditional quantum teleportation, Science 282, 706 (1998)

[48] S. Popescu, Bell’s inequalities versus teleportation: what is non-locality? Phys. Rev. Lett. 72, 797 (1994)
 
  • #46
Every theory can be reproduced by a non-local model, but that doesn't mean every theory is non-local. Say you have a computer which measures the temperature once a second and outputs the difference from the previous measurement. You can build a non-local model for this phenomenon by storing the previous measurement at a remote location, which must be accessed on each iteration.

Does that make this a non-local phenomenon? Clearly not, since you can also model it by storing the previous measurement locally. To show that QM is non-local, you need to show that it can't be reproduced with any local model, even one with multiple outcomes. Bell's theorem doesn't do that; it requires additional assumptions.
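A quick sketch of the temperature example (all names and numbers are just for illustration): the "non-local" model keeps the previous reading at a remote store, the "local" one keeps it on the device, and the two are observationally identical.

[CODE=python]
import random

def temperature_stream(n, seed=0):
    """Stand-in for real temperature data: a small random walk."""
    rng = random.Random(seed)
    t = 20.0
    for _ in range(n):
        t += rng.uniform(-0.5, 0.5)
        yield t

class RemoteStore:                     # the "non-local" hidden variable
    def __init__(self):
        self.value = None

def nonlocal_model(readings, store):
    out = []
    for t in readings:
        prev = store.value             # fetched from "far away" each second
        out.append(None if prev is None else t - prev)
        store.value = t
    return out

def local_model(readings):
    out, prev = [], None
    for t in readings:                 # same bookkeeping, kept locally
        out.append(None if prev is None else t - prev)
        prev = t
    return out

readings = list(temperature_stream(10))
assert nonlocal_model(readings, RemoteStore()) == local_model(readings)
print("identical outputs -> the phenomenon itself is not non-local")
[/CODE]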

There is a very confusing thing some physicists do which is to use the phrase "non-locality" to mean something called "Bell non-locality", which isn't the same thing at all.
 
  • #47
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
 
  • #48
Lord Jestocost said:
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
As I mentioned up thread it's not really between locality or realism, but:
  1. Single Outcomes
  2. Lack of super-determinism
  3. Lack of retrocausality
  4. Presence of common causes
  5. Decorrelating Explanations (combination of 4. and 5. normally called Reichenbach's Common Cause principle)
  6. Relativistic causation (no interactions beyond light cone)
You have to drop one, but locality (i.e. Relativistic Causation) and realism (Decorrelating Explanations) are only two of the possibilities.
 
  • #49
DarMM said:
The other usage is centuries old as well, going back at least to Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (1800s until early 1900s). But let's not turn this into a measuring contest any further than it already is lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. This paper that you linked however defines fine-tuning on page 9 again exactly as parameter fine-tuning, i.e. the same definition that I am using...
DarMM said:
You're treating this like a serious proposal, remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial and it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc can be done quite easily and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.
Yes, you are correct, I'm approaching the matter somewhat seriously, it is a topic I am truly passionate about and one I really want to see an answer found for. This is for multiple reasons, most importantly:

I) following the psi-ontic literature for the last few years, I have come across a few mathematical schemes which seem to be 'sectioned off' parts of full theories. These mathematical schemes (among others, twistor theory and spin network theory) themselves aren't actually full physical theories - exactly like AdS/CFT isn't a full theory - but simply possibly useful mathematical models of particular aspects of nature based in experimental phenomenology, i.e. these schemes are typically models based in phenomenology through the use of very particular not-necessarily-traditional mathematics for physicists.

II) these schemes all have in common that they are - taken at face value - incomplete frameworks of full physical theories. Being based mostly in phenomenology, they therefore tend to be consistent with the range of experiments performed so far at least and yet - because of their formulation using some particular nonstandard mathematics - they seem to be capable of making predictions which agree with what is already known but might disagree with what is still unknown.

III) to complete these theories - i.e. what needs to be added to these mathematical schemes in order to transform them into full physical theories - what tends to be required is the addition of a dynamical model which can ultimately explain some phenomenon using dynamics. QM in the psi-ontic view is precisely such a mathematical scheme which requires completion; this is incidentally what Einstein, Dirac et al. meant by saying QM - despite its empirical success - cannot be anything more than an incomplete theory and therefore ultimately provisional instead of fundamental.

IV) there actually aren't that many psi-ontic schemes which have been combined with dynamic models transforming them into completed physical theories. Searching for the correct dynamical model - which isn't obviously incorrect (NB: much easier said than done) - given some particular scheme therefore should be a productive Bayesian strategy for identifying new promising dynamical theories and hopefully ultimately finding a more complete novel physical theory.

I cannot stress the importance of the above points - especially point III and IV - enough; incidentally Feynman vehemently argued for practicing theory (or at least that he himself practiced theory) in this way. This is essentially the core business of physicists looking for psi-ontic foundations of QM.
DarMM said:
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.
I recently made this very argument in another thread, so I'll just repost it here: there is a larger theme in the practice of theoretical science, where theoretical calculations done using highly preliminary models of some hypothesis, prior to any experiment being done or even possible, lead to very strong claims against that particular hypothesis.

These strong claims against the hypothesis then often later turn out to be incorrect due to them resting on - mathematically speaking - seemingly trivial assumptions, which actually are conceptually - i.e. if understood correctly in physical terms - clearly unjustifiable. The problem is then that a hypothesis can incorrectly be discarded prematurely due to taking the predictions of toy models of said hypothesis at face value; i.e. a false positive falsification if you will.

This seems to frequently occur when a toy model of some hypothesis is a particular kind of idealization which is actually a completely inaccurate representation of the actual hypothesis, purely due to the nature of the particular idealization itself.
DarMM said:
There are fewer psi-epistemic models though, they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.
W.r.t. the large amount of psi-epistemic models, scroll down and see point 1).
DarMM said:
Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.
It is only difficult if you want to include entanglement, i.e. non-locality. Almost all psi-epistemic models don't do this, making them trivially easy to construct. I agree that psi-ontic models, definitely after they have passed the preliminary stage, need to include entanglement.

In either case, a general remark on these no-go theorems is in order: Remember that these "proofs" should always be approached with caution - recall how von Neumann's 'proof' literally held back progress in this very field for decades until Bohm and Bell showed that his proof was based on (wildly) unrealistic assumptions.

The fact of the matter is that the assumptions behind the proofs of said theorems may actually be unjustified when given the correct conceptual model, invalidating their applicability as in the case of von Neumann. (NB: I have nothing against von Neumann, I might possibly even be his biggest fan on this very forum!)
DarMM said:
I don't think so, again not in light of the PBR theorem.
Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM literally contradicts the predictions of QM? This implies that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
DarMM said:
This is what I am saying:
  1. Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy
  3. However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
1) The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
2) Fully agreed if we are talking about psi-epistemic models. Disagree or do not necessarily agree for psi-ontic models, especially not in the case of Manasson's model which lacks a non-local scheme.
3) Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
Even worse; even if they did apply to realistic models (i.e. psi-ontic models) they would only apply to a certain subset of all possible realist models, not all possible realist models. To then assume based on this that all realist models are therefore unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
DarMM said:
One of the points in my previous posts tells you that I can't give you what you're asking for here because it has been proven not to exist, all realist models require fine tunings. That's actually one of my reasons for being skeptical regarding these sort of models, we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics, however we know now that these features of Bohmian Mechanics are general to all such models.
I understand your reservations and that it may seem strange that I appear to be arguing against what seems most likely. The thing is that - in contrast to how most physicists usually judge the likelihood that a theory is correct - I am arguing and judging using a very different interpretative methodology from the one popular in the practice of physics, one in which events assumed a priori to have low probability can actually become more likely, conditional on adherence to certain criteria. In other words, I am consciously using Bayesian reasoning - instead of frequentist reasoning - to evaluate how likely it is that particular theories are (more) correct, because I have realized that these probabilities are actually degrees of belief, not statistical frequencies.

I suspect that approaching the likelihood of the correctness of a theory w.r.t. open problems with very little empirics using frequentist reasoning is highly misleading and possibly itself a problematic phenomenon - literally fueling the bandwagon effect among theoretical physicists. This characterization seems to apply to most big problems in the foundations of physics; among others, the problem of combining QM with GR, the measurement problem and the foundational problems of QM.

While foundational problems seem to benefit strongly from adopting a Bayesian strategy for theory construction, open problems in non-foundational physics, on the other hand, do tend to be easily solvable using frequentist reasoning. I suspect that this is precisely where the high confidence in frequentist reasoning w.r.t. theory evaluation among most physicists stems from: it is the only method of practical probabilistic inference they have learned in school.
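As a toy numerical illustration of the Bayesian point (every number here is invented purely for the sake of the arithmetic): a class of models judged a priori unlikely can become the better bet once you condition on the few hard criteria that actually discriminate between the classes.

[CODE=python]
# Invented priors and likelihoods, only to show the mechanics of the update.
prior = {"psi-epistemic": 0.9, "psi-ontic": 0.1}
# Assumed probability that a model of each class "survives the relevant
# no-go theorems and reproduces entanglement without obvious pathology".
likelihood = {"psi-epistemic": 0.05, "psi-ontic": 0.6}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)   # the initially disfavoured class ends up more probable
[/CODE]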

That said, going over your references as well as my own it seems to me that you have seriously misunderstood what you have read in the literature, but perhaps it is I who is the one who is mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long complicated texts; it is as if subtlety is somehow shunned or discarded at every turn in favor of explicitness. I suspect that this might be due to the fact that most physicists today do not have any training in philosophy or argumentative reasoning at all (in stark contrast to the biggest names such as Newton, Einstein and the founders of QM).

In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM on the face of it goes against accepted wisdom in contemporary physics, but this in no way invalidates it; it is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) to effects of fine-tuning, based on arguments of the form c).

This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics. This should become clearer through the following example: Superdeterminism, superluminality and retrocausation would only necessarily be effects of fine-tuning given that causal discovery analysis is sufficient to explain the violation of Bell inequalities; Wood and Spekkens clearly state that this is false, i.e. that causal discovery analysis is insufficient to understand QM! (NB: see abstract and conclusion of this paper). Most important to understand is that they aren't actually effects of fine-tuning in principle!

Furthermore, Wood and Spekkens are through the same paper (page 27) clearly trying to establish a (toy model) definition of causality independent of temporal ordering - just like what spin network theory or causal dynamical triangulations already offer; this is known as background independence, something which I'm sure you are aware Smolin has argued for for years.

And as I argued before, Hossenfelder convincingly argues that finetuning isn't a real problem, especially w.r.t. foundational issues. The only way one can interpret the Wood-Spekkens paper as arguing against psi-ontic models is to argue against parameter finetuning and take the accepted wisdom of contemporary physics at face value - which can be interpreted as using Occam's razor. I will argue every time again that using Einstein's razor is the superior strategy.
DarMM said:
The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.
I'm pretty well aware of these ideas themselves being problematic taken at face value, which is exactly why I selectively exclude such ideas during preliminary theoretical modelling/evaluating existing models using Bayesian reasoning. I should say again though that retrocausality is only problematic if we are referring to matter or information, not correlation; else entanglement itself wouldn't be allowed either.
DarMM said:
Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
All theories derived from the Wheeler-DeWitt equation are acausal in this sense, as are models based on spin networks or twistors. I suspect some - or perhaps many - models which seem retrocausal may actually be reformulated as acausal, or worse, were actually acausal to begin with and were just misinterpreted as retrocausal due to some cognitive bias (a premature deferral to accepted wisdom in contemporary physics).

Btw I'm really glad you're taking the time to answer me so thoroughly, this discussion has truly been a pleasure. My apologies if I come off as rude/offensive, I have heard that I tend to argue in a somewhat brash fashion the more passionate I get; to quote Bohr: "Every sentence I utter must be understood not as an affirmation, but as a question."
 
  • #50
Auto-Didact said:
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (1800s until early 1900s). But let's not turn this into a measuring contest any further than it already is lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. This paper that you linked however defines fine-tuning on page 9 again exactly as parameter fine-tuning, i.e. the same definition that I am using...
Genuinely I really don't get this line of discussion at all. I am not saying initial condition fine tuning is an older concept (I know when Newton or Boltzmann lived) or that in Quantum Foundations they exclusively use fine tuning to mean initial condition fine tuning.

I am saying that fine-tuning has long been used to mean both in theoretical physics, and that Quantum Foundations, like many other areas, uses fine-tuning to mean both.

In that paper I linked they explicitly mean both, since "causal parameters" includes both initial conditions and other parameters, if you look at how they define it.

I really don't understand this at all, I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if using it on a subset of its current referents extends back further?

Auto-Didact said:
Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM literally contradicts the predictions of QM? This implies that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
No, it says that any Psi-Epistemic model obeying the ontological framework axioms and the principle of Preparation Independence for two systems cannot model QM.
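For concreteness, here is my paraphrase of the standard statement (not a quote from the PBR paper). Preparation independence requires that independently prepared systems have a factorising distribution over the ontic states,
$$\mu_{\psi\otimes\phi}(\lambda_1,\lambda_2)=\mu_\psi(\lambda_1)\,\mu_\phi(\lambda_2),$$
and the conclusion is that any ontological model reproducing the quantum predictions under that assumption must assign non-overlapping distributions to distinct pure states,
$$\psi\neq\phi\;\Rightarrow\;\int\min\left(\mu_\psi(\lambda),\mu_\phi(\lambda)\right)d\lambda=0,$$
which is exactly the Psi-Ontic property. So the theorem rules out overlapping (Psi-Epistemic) representations under those assumptions; it does not say realist models as such are impossible.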

Auto-Didact said:
The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
That's explicitly not true, coming up with Psi-Ontic models that model the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact Psi-Ontic models end up naturally replicating all of QM, you don't even have the blockade with modelling entanglement that you have with Psi-Epistemic models.

Auto-Didact said:
Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
That's not what the theorem demonstrates, it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.

Auto-Didact said:
Even worse; even if they did apply to realistic models (i.e. psi-ontic models) they would only apply to a certain subset of all possible realist models, not all possible realist models. To then assume based on this that all realist models are therefore unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
Bohmian Mechanics needs to be fine tuned (Quantum Equilibrium hypothesis), it is known that out of equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see if that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning.

Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
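To illustrate what "fine-tuning" means here, consider a deliberately crude toy model (entirely my own illustration, not taken from the papers): the ontic dynamics is blatantly non-local, yet the operational statistics hide it only for one special choice of the hidden-variable distribution, loosely analogous to how quantum equilibrium hides the non-locality of Bohmian trajectories.

[CODE=python]
# Crude toy illustration (not from the Wood-Spekkens paper): Bob's outcome
# depends on Alice's distant setting at the ontic level, but no signalling is
# visible operationally only for one fine-tuned hidden-variable distribution.

SETTINGS = [0, 1]       # Alice's setting a
LAMBDAS = [0, 1]        # shared hidden variable

def bob_outcome(a, lam):
    # Ontic-level dependence on the distant setting a (non-local by design).
    return (a + lam) % 2

def p_b1_given_a(a, p_lambda):
    """Operational probability that Bob sees outcome 1 given Alice's setting a."""
    return sum(p for lam, p in zip(LAMBDAS, p_lambda) if bob_outcome(a, lam) == 1)

for p_lambda in [(0.5, 0.5), (0.6, 0.4)]:
    marginals = [p_b1_given_a(a, p_lambda) for a in SETTINGS]
    print(f"p(lambda) = {p_lambda}  ->  p(b=1|a) = {marginals}  "
          f"signalling = {abs(marginals[0] - marginals[1]) > 1e-12}")

# Only the uniform distribution washes out the ontic setting-dependence; any
# perturbation makes the non-locality operationally visible. That equilibrium
# condition is the kind of fine-tuning the theorems are about.
[/CODE]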

Auto-Didact said:
That said, going over your references as well as my own it seems to me that you have seriously misunderstood what you have read in the literature, but perhaps it is I who is the one who is mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long complicated texts; it is as if subtlety is somehow shunned or discarded at every turn in favor of explicitness. I suspect that this might be due to the fact that most physicists today do not have any training in philosophy or argumentative reasoning at all
I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

See point nine of this slide:
http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

It's very easy to start talking about me and my comprehension, have you read the papers in depth yourself?
 
  • #51
The discussions here were interesting, as they made me realize even more how differently we all think about these foundational issues.
DarMM said:
Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
In the extended meaning I used before, even the standard model as it stands encodes a realism of symmetries. And these symmetries are used as deductive constraints when we construct theories. This is the powerful method the theoretical framework of QFT rests on. But my perspective is that this power is deceitful, as the choice of the constraints can be seen as a fine-tuning in theory space. So we do not only have the fine-tuning of initial conditions, we also have the fine-tuning of laws. This is a big problem I see, and dynamical fine-tunings could then not follow a timeless law, as that is the metalaw dilemma Smolin talks about.

Instead some kind of evolution, one that does not obey dynamical LAW, seems needed. And this way of phrasing it naturally unifies initial states and the state of law. As I see it, none of them should be identified with ontic states. So I think these realist ontologies already lead us into trouble, even if we do not involve HV realist models. So even those who reject Bohmian mechanics but embrace the theoretical paradigm of the standard model are still, IMO, in trouble.

As has been mentioned already, these fine-tunings are already solved by nature, if only physicists would learn from biology. The state space in biology is not timelessly fixed; it is evolving, but not according to physical law. The one critique one can have about this at first is: so what, how can we get more predictive from this insight? That is the question I ask. And the reason Smolin mentions his CNS is just to set an example showing that one prediction is that the insight means we can use evolutionary traits such as survival, reproduction and self-organisation as soft sub-constraints to replace the HARD deductive constraints of timeless symmetries. And try to reconstruct the measurement theory as per this. Here the deductive machinery of an observer is necessarily an evolved inference system which is more abductive, NOT deductive. But compressed sensing also means that even the inference system itself is truncated, and when you truncate a soft inference it looks more like an exact deductive system, because you have discarded the insignificant doubts from the reflections.

The discussions on here made me realize exactly how much headache entanglement and the nature of non-commutative observables cause. If we cannot find a conventional "realist model", we need to find another plausible common-sense way of understanding this. And I think that is possible.

/Fredrik
 
  • #52
DarMM said:
Genuinely I really don't get this line of discussion at all. I am not saying initial condition fine tuning is an older concept (I know when Newton or Boltzmann lived) or that in Quantum Foundations they exclusively use fine tuning to mean initial condition fine tuning.

I am saying that fine-tuning has long been used to mean both in theoretical physics, and that Quantum Foundations, like many other areas, uses fine-tuning to mean both.

In that paper I linked they explicitly mean both, since "causal parameters" includes both initial conditions and other parameters, if you look at how they define it.

I really don't understand this at all, I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if using it on a subset of its current referents extends back further?
All I am saying is that having one phrase which can mean two distinct things is unnecessarily confusing, hence me calling it unfortunate. Based on a careful reading of that paper, it seems the newer secondary usage in the literature might even be correlated with, and therefore reducible to, the older primary usage.

This is of course assuming that a) the usage in this paper is a secondary usage and b) it is typical and therefore representative of the secondary usage in the literature. If readers conflate the effects (e.g. superluminality) with the causes (parameter fine-tuning), this would naturally lead to a correlation between these terms and an evolution of this secondary usage among scientists in this subfield.

The same thing I just described tends to occur in many other academic fields and subfields. I suspect the same may be happening here, but of course I could just be mistaken.
DarMM said:
That's explicitly not true, coming up with Psi-Ontic models that model the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact Psi-Ontic models end up naturally replicating all of QM, you don't even have the blockade with modelling entanglement that you have with Psi-Epistemic models.

That's not what the theorem demonstrates, it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.
It is either you or I who is thoroughly confused - or worse, perhaps it is even both of us. This is nothing to be ashamed of in these complicated matters. These matters are immensely complicated and have literally stumped the best minds in science for over a century including Einstein himself. In no way would I even rank myself close to such individuals. Complicated mathematics such as QFT or GR calculations are trivially simple in comparison with what we are discussing here.
DarMM said:
Bohmian Mechanics needs to be fine tuned (Quantum Equilibrium hypothesis), it is known that out of equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see if that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning.

Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
As I said before, there is absolutely nothing wrong with having or requiring parameter fine-tuning in itself. This is reflected in the practice of bifurcation analysis of dynamical systems, wherein parameter fine-tuning is the essential strategy for identifying the values of some parameter which lead to bifurcations in parameter space, i.e. to second order phase transitions and related critical phenomena. Usually in physics this is done by some kind of stability criterion exactly analogous to Valentini's Quantum Equilibrium Hypothesis; Manasson does this through an extremum principle in his paper.
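For concreteness, here is a minimal sketch of that strategy on the textbook example, the logistic map (this is the generic dissipative-systems picture, not Manasson's actual construction): sweeping the control parameter and locating the period-doubling thresholds is exactly parameter "fine-tuning" used as an analytical tool.

[CODE=python]
# Minimal sketch of "parameter fine-tuning as bifurcation analysis" on the
# logistic map x -> r*x*(1-x).

def attractor(r, n_transient=5000, n_sample=64):
    x = 0.5
    for _ in range(n_transient):          # let transients die out
        x = r * x * (1 - x)
    xs = []
    for _ in range(n_sample):             # sample the attractor
        x = r * x * (1 - x)
        xs.append(round(x, 6))            # coarse-grain so near-equal points merge
    return sorted(set(xs))

for r in (2.8, 3.2, 3.5, 3.555, 3.566):
    print(f"r = {r:<5}  attractor size ~ {len(attractor(r))}")

# Typical output: sizes 1, 2, 4, 8, 16. Each doubling happens at a sharply
# tuned value of r (about 3.0, 3.4495, 3.5441, 3.5644, ...), and the gaps
# between successive thresholds shrink by the Feigenbaum ratio ~4.669 - the
# universal constant Manasson's paper leans on.
[/CODE]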

W.r.t. these 'physically illegal' ideas - including many worlds - the possibility that novel schemes/models will turn out to display them can actually be removed a priori by explicitly choosing particular mathematics which cannot model such phenomena and then constructing the scheme/model on that mathematics. A theorist who realizes this can obviously take advantage of it when constructing or searching for a new model.

The same thing can also be done in reverse, i.e. if one wants to construct a scheme or model which intrinsically has some particular conceptual property eg. non-computability, one can choose to construct such a model using non-computable mathematics, such as non-periodically tiling the plane using shapes (Penrose tiling). Any resulting scheme/model based on such mathematics will then be - if successfully constructed - inherently non-computable, i.e. fine-tuned with non-computability as a resulting effect.

It is important to understand that concepts such as non-computability, superdeterminism, superluminality and retrocausality aren't themselves logically incoherent. They are instead 'illegal' w.r.t. our current conceptual understanding of physical theory based on experimental phenomenology; there is however absolutely no guarantee that our current conceptual understanding of fundamental physical theories will not be modified or replaced by some superior theories in the future, meaning it could turn out either way.

It goes without saying that this is exactly what physicists working in the foundations are trying to figure out. The related issue of whether 'physically illegal' ideas (such as superdeterminism, retrocausality and superluminality) could result from some kind of parameter fine-tuning is therefore frankly speaking completely irrelevant. Just because identifying fine-tuning is a useful strategy in order to exclude ideas in the practice of high energy theoretical particle physics doesn't mean it is useful outside of that context; as Hossenfelder argued, it isn't.
DarMM said:
I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

See point nine of this slide:
http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

It's very easy to start talking about me and my comprehension, have you read the papers in depth yourself?
As I have said at the end of my other post I mean you no offense whatsoever. I'm just trying to analyze what may be the cause of these disagreements which are holding us back from coming to a resolution. If I'm actually wrong, I'd be extremely happy if you or anyone else could show me why using good arguments; optimistically it may even lead to a resolution of these devilish misunderstandings which have plagued this field for almost a century now, but I digress.

Yes, I have read the papers in depth (which is why I tend to take so long to respond). It is not that there is a mistake in the argument or that you have made a mistake in reproducing the argument; I am instead saying that to generalize (using induction) the conclusion of the argument from the particular case wherein the proof is given - based on these particular assumptions and premises - to the general case isn't itself a logically valid step. This is why these no-go theorems aren't actually intratheoretical theorems of QM or even physical theory, but merely atheoretical logical theorems about QM.

To actually make a theorem which speaks about the general case - what you and others seem to be trying to do - would require many more premises and assumptions, i.e. all the conceptual properties necessary for the mathematical construction of a theory of which QM would be a limiting case, given that such a thing exists; if you could construct such a theorem, that theorem would essentially be an undiscovered law of physics.

Essentially, this is exactly what I am trying to do: reasoning backwards from conceptual properties which have survived the no-go theorems and then using nonstandard mathematics to construct a novel theory based on the remaining concepts. There is no guarantee such a strategy will work, but generally speaking it is a highly promising reasoning strategy, often used to identify the correct mathematical description (usually in the form of differential equations) when dealing with black box systems.
 
  • #53
Auto-Didact said:
All I am saying is that having one phrase which can mean two distinct things is unnecessarily confusing, hence me calling it unfortunate. Based on a careful reading of that paper, it seems the newer secondary usage in the literature might even seem to be correlated with, and therefore reducible to, the older primary usage.
As an illustration that it means initial-condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a constraint on initial conditions. This is included in the type of fine-tuning discussed in the paper.
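
To state the hypothesis explicitly, since it comes up repeatedly in this thread (standard Bohmian-mechanics material, quoted here only for convenience): quantum equilibrium is the condition that the particle configurations are Born-rule distributed,
$$\rho(q,t) \;=\; |\psi(q,t)|^2 ,$$
and equivariance under the guidance equation guarantees that if this holds at the initial time it holds at all later times - so it is indeed a constraint on the initial conditions.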

Auto-Didact said:
It is either you or I who is thoroughly confused - or worse, perhaps it is even both of us.
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
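
For concreteness, the core requirement of the ontological models framework (in its standard Harrigan-Spekkens form, written here for pure-state preparations and projective measurements only, as a sketch) is that the ontic-level statistics reproduce the Born rule:
$$\operatorname{Pr}(k\,|\,M,P) \;=\; \int_{\Lambda} \xi_M(k\,|\,\lambda)\,\mu_P(\lambda)\,d\lambda \;=\; \langle\psi_P|E_k|\psi_P\rangle ,$$
where ##\mu_P## is the distribution over ontic states ##\lambda## produced by the preparation ##P## and ##\xi_M(k|\lambda)## are the response functions of the measurement ##M##.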

Auto-Didact said:
Valentini's Quantum Equilibrium Hypothesis
This doesn't affect your argument, but just to let you know: it isn't Valentini's hypothesis, it goes back to Bohm, and without it Bohmian Mechanics doesn't replicate QM.

Auto-Didact said:
It is important to understand that concepts such as non-computability, superdeterminism, superluminality and retrocausality aren't themselves logically incoherent. They are instead 'illegal' w.r.t. our current conceptual understanding of physical theory based on experimental phenomenology; there is however absolutely no guarantee that our current conceptual understanding of fundamental physical theories will not be modified or replaced by some superior theories in the future, meaning it could turn out either way.
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.

Auto-Didact said:
It goes without saying that this is exactly what physicists working in the foundations are trying to figure out. The related issue of whether 'physically illegal' ideas (such as superdeterminism, retrocausality and superluminality) could result from some kind of parameter fine-tuning is therefore frankly speaking completely irrelevant. Just because identifying fine-tuning is a useful strategy in order to exclude ideas in the practice of high energy theoretical particle physics doesn't mean it is useful outside of that context; as Hossenfelder argued, it isn't.
Well, it's not so much that they result from fine-tuning; the point is proving that they require fine-tuning. Also, this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.

Auto-Didact said:
As I have said at the end of my other post I mean you no offense whatsoever
My apologies, you clearly are conducting this in good faith, my fault there. :smile:

Auto-Didact said:
the conclusion of the argument from the particular case wherein the proof is given - based on these particular assumptions and premises - to the general case isn't itself a logically valid step
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?

Auto-Didact said:
To actually make a theorem which speaks about the general case - what you and others seem to be trying to do - would require many more premises and assumptions
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework. However if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the onotolgical models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
 
  • Like
Likes Auto-Didact
  • #54
DarMM said:
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework.
From my inference perspective, the ontological framework with an ontic sample space for "the black box" seems like a very strange ansatz to start with, one that I do not see as a general ontological model of psi, unless one is secretly just trying to get behind the observational filters nature gives us in order to restore realism.

After all, I assume an "ontic model of psi" does not really mean the same thing as a realist model; the former phrase suggests a general understanding of what psi is.

My first objection is that it does not seem to reflect the relational nature of things between observer and system. Who says that the ontological model of psi is about an ontic space associated with the system (i.e. the black box)? It might as well reflect the observer's ontological state of information about the black box, irrespective of what is "right or wrong". Often this is ignored or called psi-epistemic, because it is easy to jump to the conclusion that this somehow involves a human observer. It could also refer to the observer's actual physical state (following from its interaction history; then it does not matter whether you label it measurement or preparation, it falls into the same category). This coincides with an observer-Bayesian interpretation of probabilities. We need no ensembles then, just the retained information and how it has been processed. However, the notion of observer needs to be generalized beyond classical measurement devices to make this idea viable. For example, two observers can even be entangled with each other, truly making the information "hidden". There are ways to hide information without sticking to the old realist models, I think. Information inside a black hole is also possibly hidden, yet it can be entangled with things outside the black hole. Susskind has talked a lot about this under the headlines "ER = EPR" or even "GR = QM", where he argues that entanglement and the makeup of spacetime are related.

Herein lies the distinction between the psi-epistemic view within what I think you refer to as the standard ontological models framework, and what I think of as the interpretation in which the only justified "ontic states" are exactly the observer's own physical state, which encodes expectations about its environment. As far as I can see, this "ontological model" does not fall into the standard ontological models framework, because the whole ansatz of an ontic sample space is alien to its constructing principles.

As I see it, a sound framework should make use only of things that are organised and recoded from possible observations, and I want to know how the ontic sample space got there in the first place, if not from the secret dreams of old times. It seems to me the ontic space is not needed; it only adds confusion, doesn't it?

So what I think is that the viable ontic model for psi that we need (not for removing MWI nightmares, but in order to make progress in unification and quantum gravity) may not lie within that standard framework. In this sense, I agree that the scope of the no-go theorems is limited. That's not to say they aren't important, of course! They are a tool for dismissing candidate theories.

/Fredrik
 
  • Like
Likes Auto-Didact
  • #55
Well the quantum state being a result of the system's state and the observer/measuring device's state is actually handled in the ontological models framework, via what are called the response functions.

Plus if you actually think about it, the observer's state being a factor in determining the result doesn't help much in cases such as entanglement.
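
One way to make this explicit (a minimal sketch, not necessarily the exact form used in any particular paper): the dependence on the device can be folded into the response functions by marginalizing over the apparatus' own ontic state ##\lambda_M##,
$$\xi_M(k\,|\,\lambda) \;=\; \int \operatorname{Pr}(k\,|\,\lambda,\lambda_M)\,\nu_M(\lambda_M)\,d\lambda_M ,$$
with ##\nu_M## the distribution over device states, so observer/apparatus dependence by itself does not take a model outside the framework.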
 
  • #56
DarMM said:
Well the quantum state being a result of the system's state and the observer/measuring device's state is actually handled in the ontological models framework, via what are called the response functions.

How are the response functions, and the structure of the ontic space, supposed to be inferred (abduced) by the observer? It seems to me they aren't inferrable. So which observer is supposed to be doing this inference?

As we know, assuming a random microstructure and then, say, applying an ergodic hypothesis is not innocent, since by the choice of conditional partitioning you can bias your probabilistic conclusions. This is why I object to introducing probability spaces, such as a sample space, that are not constructable from the observer's perspective.

I think QM needs an inference ontology, not a hidden-space ontology.
DarMM said:
Plus if you actually think about it, the observer's state being a factor in determining the result doesn't help much in cases such as entanglement.

That depends on what you make out of it I think.

You are right that we get nowhere if we just stop and say that it is all just expectations of the observer. Nothing interesting happens until we allow these observers to interact, or communicate. But this communication is a competitive game that is also about survival. It is like the difference between QM in curved space and QG: only when we try to account for the real backreaction of the environment on the observer's expectations do we get the real thing.

First, I will admit the obvious: I do not have a ripe model yet, so perhaps I should just be silent. But the line of reasoning I have in mind is this:

The observer is interacting with its environment, and in the general case the environment is the black box.
But what is the environment? By conjecture it is abstractly populated by fellow observers.

And the conjecture here is that they are all understood as information-processing agents that follow the same rules of inference.

If we see the action of an observer as a guided random walk in its own state space, and the backreaction of the environment as a guided random walk in ITS state space, what we end up with are two coupled, interacting information-processing agents. An evolution will take place that evolves the theory implicitly encoded in both sides, and the random walk gets improved guidance as the theory evolves. If not, the agent will destabilise and give up its complexions to the environment (i.e. dissipate or radiate away).
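
A loose toy illustration of this picture (purely illustrative, nothing more than a cartoon of the idea): two agents, each performing a random walk over a small discrete state space guided by its own evolving "theory", and each revising that theory from the move the other agent is observed to make, so that the guidance on both sides co-evolves.

Python:
import random

STATES = range(5)

class Agent:
    def __init__(self):
        # Start with a flat "theory": no expectations about the environment yet.
        self.theory = [1.0 / len(STATES)] * len(STATES)

    def step(self):
        # Guided random walk: sample a state according to the current theory.
        return random.choices(list(STATES), weights=self.theory, k=1)[0]

    def update(self, observed_state, rate=0.05):
        # Crude "backreaction": nudge the theory toward what was just observed.
        # The update keeps the distribution normalized.
        for s in STATES:
            target = 1.0 if s == observed_state else 0.0
            self.theory[s] += rate * (target - self.theory[s])

a, b = Agent(), Agent()
for _ in range(2000):
    move_a, move_b = a.step(), b.step()
    a.update(move_b)  # each agent revises its expectations from the other's move
    b.update(move_a)

print("Agent A's theory:", [round(p, 2) for p in a.theory])
print("Agent B's theory:", [round(p, 2) for p in b.theory])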

So in entanglement, I envision that the superposition is not seen as a property of the system (the entangled particle), but as the state of the environment. Note that we aren't just talking about Alice and Bob, but about the whole experimental setup, including slits or polarizers or whatever else is in there. I.e. the whole environment encodes, and thus BACKREACTS on, the particle just as if it IS in superposition. And this is not challenged unless the entanglement is broken by a measurement. If we instead assumed this gave the same dynamics as a superposition due solely to Alice and Bob's ignorance, we would get a different result, because that is not how it works. It is not about Alice and Bob's ignorance; it is about the entire environment's support of the superposition. This is not the same thing as a hidden variable.

One can understand this conceptually via a game-theoretic analogy. As long as ALL other players are convinced of something, it does not matter if it is a lie, because the backreaction of the environment "supports the lie". In the extreme, there is no way for a player to tell a stabilized lie from reality. Ultimately this means that, in the inference perspective, boolean states are not objective: true and false are as deceptive as old-time realism.

These ideas are what I am taking seriously, and I think that these constraints will guide us to predict WHICH information-processing structures are likely to appear in this game, if we start from zero complexity and build from there. I.e. this reasoning starts at the highest possible energy at the big bang, and we then ask ourselves which mathematical inference systems are most likely to survive if we implement these ideas, and whether we can recover the various known phenomenology as we reduce the temperature (and thus increase the complexity) of the observers.

/Fredrik
 
  • #57
Fra said:
How are the response functions, and the structure of the ontic space, supposed to be inferred (abduced) by the observer? It seems to me they aren't inferrable. So which observer is supposed to be doing this inference?
It doesn't really matter; I mean, it's not as if the form of the ontic space affects Bell's theorem, does it? You have to be more crazy (in the sense of Bohr's "not crazy enough") than this to escape the ontological models framework.

Fra said:
If we see the action of an observer as a guided random walk in its own state space, and the backreaction of the environment as a guided random walk in ITS state space, what we end up with are two coupled, interacting information-processing agents. An evolution will take place that evolves the theory implicitly encoded in both sides, and the random walk gets improved guidance as the theory evolves. If not, the agent will destabilise and give up its complexions to the environment (i.e. dissipate or radiate away).

So in entanglement, I envision that the superposition is not seen as a property of the system (the entangled particle), but as the state of the environment.
All of this makes sense, nothing wrong with it, but it falls within the ontological models framework, so it will have to be nonlocal, retrocausal, superdeterministic or involve Many-Worlds and in addition be fine-tuned. In fact from the rest of your post what you are talking about seems to me to be superdeterminism driven by environmental dynamics.
 
  • #58
DarMM said:
It doesn't really matter; I mean, it's not as if the form of the ontic space affects Bell's theorem, does it?

We don't need details, but the main point is that the mere existence of the ontic space, together with the conditional probability measures connecting ontic states to epistemic states and preparations, and the response functions, encodes non-trivial information about the matter. And this is what is used in the theorem.

It is the fact that the ontic space exists, with the mentioned conditional probability measures, that encodes the information used in the theorem. If this information does not exist, the premise of the theorem is lost.

What I suggested is that I do not see a clear justification for the ansatz. The ansatz is obvious if your mindset is tuned to classical thinking. But if you release yourself from this and instead think in terms of inference, I am not sure how you can justify the ansatz.
DarMM said:
All of this makes sense, nothing wrong with it, but it falls within the ontological models framework, so it will have to be nonlocal, retrocausal, superdeterministic or involve Many-Worlds and in addition be fine-tuned. In fact from the rest of your post what you are talking about seems to me to be superdeterminism driven by environmental dynamics.

Surely the explanatory burden is all on me to explain my reasoning, sorry!

But I don't see how you get this impression. Btw, the "rules of inference" I refer to are NOT deductions; I actually think of them as evolved random processes. Their non-random nature is self-organised, not left to ad hoc fine-tunings. This should be as far from superdeterminism as you can get. As for locality, what I suggest is explicitly local in information space; non-locality is possible only as evolved correlations. I will try to write more later, or we can drop it here. The main point was not to explain all of this in detail anyway; I just do not see that this idea fits into the ontological models framework. I would insist that theories competing with QM are by no means exhausted by that framework. To prove it explicitly I suppose nothing less than completing it will do. So let's just leave my objection for the record ;)

/Fredrik
 
  • #59
Fra said:
We don't need details, but the main point is that the mere existence of the ontic space, together with the conditional probability measures connecting ontic states to epistemic states and preparations, and the response functions, encodes non-trivial information about the matter. And this is what is used in the theorem.
A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.

Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed, you could see the no-go results as suggesting a move to a non-Realist interpretation/model; the framework isn't meant to also argue against those.

Fra said:
What I suggested is that I do not see a clear justification for the ansatz
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

It would be like somebody setting out to see what constraints apply to discrete models and then objecting to their use of ##\mathbb{Z}^{d}##

Fra said:
But I don't see how you get this impression. Btw, the "rules of inference" I refer to are NOT deductions; I actually think of them as evolved random processes.
I didn't understand then, I'd need something more concrete in order to say anything sensible, perhaps some mathematics.
 
  • #60
DarMM said:
so it will have to be nonlocal

All indications are that nonlocality is the primary ingredient of QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
 
  • Like
Likes Auto-Didact
  • #61
DarMM said:
As an illustration that it means initial-condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a constraint on initial conditions. This is included in the type of fine-tuning discussed in the paper.
Ahh, now I see, you were referring to initial-condition fine-tuning all along! We are in far more agreement than it seemed from the earlier discussion. The controversial nature of initial-condition fine-tuning again depends on the formulation of the theory; the question is - just as with parameter fine-tuning - whether the initial conditions are determined by a dynamical process or are just due to randomness, which raises issues of (un)naturalness again; this is actually a genuine open question at the moment.

Having said that, the initial conditions in question, i.e. the initial conditions of our universe, are precisely an area where QM is expected to break down and where some deeper theory like quantum gravity seems necessary in order to make more definitive statements. The number of degrees of freedom allowed by standard QM - standard QM being time-symmetric - is far, far larger than what we seem to see in actuality. In particular, from CMB measurements - the CMB being a blackbody radiation curve - we can conclude that there was a state of maximum entropy and that it was therefore random; more important to note, however, is that there seem to have been no active gravitational degrees of freedom!

We can infer this from the entropy content of the CMB. We can therefore conclude that in our own universe the initial conditions were in fact extremely fine-tuned compared to what standard QM (due to time-symmetry) would have us believe could be ascribed to maximum entropy, i.e. to randomness, this huge difference being due to the absence of active gravitational degrees of freedom, i.e. a vanishing Weyl curvature. The question then is: what was the cause of there being no active gravitational degrees of freedom during the Big Bang?
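
To put rough numbers on this (Penrose's well-known order-of-magnitude estimates, quoted from memory and meant only as an illustration): the entropy carried by the CMB in the observable universe is of order
$$S_{\mathrm{CMB}} \;\sim\; 10^{88}\,k_B ,$$
while the maximum entropy available to the same matter content - obtained by collapsing it all into a single black hole and applying the Bekenstein-Hawking formula ##S_{BH} = k_B A/4\ell_P^2## - is of order
$$S_{\mathrm{max}} \;\sim\; 10^{123}\,k_B .$$
Since phase-space volume scales as ##e^{S/k_B}##, the initial state occupied a fraction of roughly ##1/10^{10^{123}}## of the phase space naively available to it.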
DarMM said:
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but captures only a small subset of them.
DarMM said:
This doesn't affect your argument, but just to let you know: it isn't Valentini's hypothesis, it goes back to Bohm, and without it Bohmian Mechanics doesn't replicate QM.
Thanks for the notice!
DarMM said:
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.
Okay, fair enough.
DarMM said:
Well, it's not so much that they result from fine-tuning; the point is proving that they require fine-tuning. Also, this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.
I know that this isn't hep-th; I'm just presuming that the anti-'fine-tuning' stance probably originated there and spilled over via physicists who began working in hep-th (or were influenced by it during training) and then ended up working in quantum foundations.
DarMM said:
My apologies, you clearly are conducting this in good faith, my fault there. :smile:
:)
DarMM said:
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?
...
If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework. However if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the onotolgical models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
To avoid misunderstanding, restated: the premises and assumptions which go into proving this theorem (and most such no-go theorems) are not general enough to prove a theorem which is always true in physics regardless of context; an example of a theorem which is always true in physics regardless of context is the work-energy theorem. "The general case" does not refer precisely to all possible physical theories (since this would also include blatantly false theories), but rather to all physical theories that can be consistent with experiment.

But as I have said above, Spekkens' definition of psi-ontology is an incorrect technical simplification. I can see where his definition is coming from, but it seems to me to be a clear case of operationalizing a difficult concept into a technical definition - one which captures not the full concept but only a small subset of its instantiations - and then prematurely concluding that it does capture it. All of this is done just in order to make concrete statements; the problem, i.e. a premature operationalization, arises when the operationalization is assumed to be comprehensive and therefore definitive, instead of tentative, i.e. a hypothesis.

These kinds of premature operationalizations of difficult concepts are rife in all of the sciences; recall the conceptual viewpoint of what was held to be necessarily and absolutely true in geometry prior to Gauss and Lobachevski. Von Neumann's proof against hidden variable theories is another such example of premature operationalization, which turned out to be false in practice as shown by Bell. Here is another example, by Colbeck and Renner, which is empirically blatantly false, because there actually are theories which are extensions of QM with different predictions, e.g. with standard QM as a limiting case in the limit ##m \ll m_{\mathrm{Planck}}##; such theories could be vindicated by experiment, and the issue is therefore an open question.
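
(One concrete example of the kind of extension presumably meant here - this is an assumption on my part that gravitationally induced collapse is what the ##m \ll m_{\mathrm{Planck}}## limit refers to: in Diosi-Penrose-type schemes a superposition of two mass distributions decays on a timescale
$$\tau \;\sim\; \frac{\hbar}{E_G} ,$$
where ##E_G## is the gravitational self-energy of the difference between the two mass distributions. For ##m \ll m_{\mathrm{Planck}}## this timescale is astronomically long, so ordinary unitary QM is recovered, while for mesoscopic masses the deviation becomes testable in principle.)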

I do understand why physicists would (prematurely) operationalize a concept into a technical definition - hell, I do it myself all the time; this is, after all, how progress in science is made. However, here it seems that physics has much to learn from the other sciences, namely that such operationalizations are almost always insufficient or inadequate to characterize some phenomenon or concept in full generality; this is why most sciences couch such statements in doubt and say (almost like clockwork) that more research is needed to settle the matter.

In physics, however, we often see instead the offer of a kind of (false) certainty. For example, we saw this with Newton w.r.t. absolute space and time, we saw it with von Neumann w.r.t. hidden variables, and we see it with Colbeck and Renner above. I suspect that this is due to the nature of operationalizations in physics, i.e. the use of (advanced) mathematics. Here again physicists could learn from philosophy, namely that mathematics - exactly like logic (which philosophers of course absolutely adore) - can be, due to its extremely broad applicability and assumed trustworthiness, a blatant source of deception; this occurs through idealization, through simplification and, worst of all, through hiding subjectivities behind the mathematics within the very axioms. All of this needs to be controlled for as a source of cognitive bias on the theorist's part.

I should also state that these matters do not apply generally to the regular mathematics of physics - i.e. analysis, differential equations, geometry and so on - because the normal practice of physics, i.e. making predictions and doing experiments, doesn't involve making formal mathematical arguments using proofs and axiomatic reasoning; almost all physicists working in the field should be able to attest to this. This is why most physicists and applied mathematicians tend to be relatively bad at axiomatic reasoning, while formal mathematicians, logicians and philosophers excel at this type of reasoning while being, at the same time, relatively bad at regular 'physical' reasoning.
 
  • Like
Likes Fra and Buzz Bloom
  • #62
Auto-Didact said:
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but captures only a small subset of them.
To structure my response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.

Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?

Auto-Didact said:
I'm just presuming that the anti-'fine-tuning' stance probably originated there and spilled over via physicists who began working in hep-th (or were influenced by it during training) and then ended up working in quantum foundations.
Unlikely, there aren't many. Plus it isn't an anti-fine-tuning stance; it's just saying the fine-tuning is present. Many simply accept it.
 
Last edited:
  • #63
DarMM said:
A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.
...
Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed, you could see the no-go results as suggesting a move to a non-Realist interpretation/model; the framework isn't meant to also argue against those.
...
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

I see; then our disagreements here are mainly a matter of the definition of "ontology for QM". My reaction was against the impression I got somewhere earlier in the thread that Bell's theorem was supposed to be a sweeping argument against the explanatory value of understanding particles as self-organised systems in a chaotic setting. I think that is wrong and misguided, and risks dumbing down ideas which may turn out to be interesting. I was assuming we were talking about an ontological understanding of QM in general, not the narrowed-down version of realist models. IMO ontology is not quite the same as classical realism?

/Fredrik
 
  • Like
Likes DarMM
  • #64
DarMM said:
To structure my response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence. This is directly opposed to psi-epistemic which simply means treating the wavefunction as an epistemological object, i.e. as a matter of knowledge.

Spekkens may have popularized the usage of these terms in foundations based on his specific operationalization, but he certainly did not invent them (perhaps only the shorthand 'psi-ontic/epistemic' as opposed to 'psi is ontological/epistemological').

These terms have been used in the foundations literature since Bohr, Einstein, Heisenberg et al., and they have of course been standard terminology in philosophy (metaphysics) for millennia.
DarMM said:
Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?
Yes, basically. I apologize for my somewhat digressive form of writing; I'm speaking not just to you, but to everyone who may be reading (including future readers!).
 
  • Like
Likes DarMM
  • #65
Auto-Didact said:
Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not come close to exhausting all possible psi-ontological models, but captures only a small subset of them.
I wouldn't want to be so harsh as to claim Spekkens "misunderstood" anything, but I get your point, and incidentally the simplification is also a strength. After all, it's hard to compute with concepts until there is a mathematical model for them.

This also reminds me of one of Smolin's notes on Wigner's query about the unreasonable effectiveness of mathematics.

"The view I will propose answers Wigner’s query about the ”unreasonable effectiveness of mathematics in physics” by showing that the role of mathematics within physics is reasonable, because it is limited."
-- L.Smolin, https://arxiv.org/pdf/1506.03733.pdf

This is in fact related to how I see deductive logic as emergent from general inference, such as induction and abduction, by compressed sensing. To be precise, you sometimes need to take the risk of being wrong, and not account for all the various subtle concerns that are under the FAPP radar.

/Fredrik
 
  • Like
Likes Auto-Didact
  • #66
Auto-Didact said:
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence.
And what aspect of this does the ontological framework miss out on/misunderstand?
 
  • #67
Fra said:
I was assuming we were talking about an ontological understanding of QM in general, not the narrowed-down version of realist models
The no-go theorems refer to the latter. Self-organising chaotic models not relating to an underlying ontic space would not be covered.

Fra said:
IMO ontology is not quite the same as classical realism?
It's certainly not, but it is important to show that classical realism is heavily constrained by QM as many will reach for it, hence the ontological models framework.
 
  • #68
DarMM said:
And what aspect of this does the ontological framework miss out on/misunderstand?
The assumption that ontology is fully equivalent, and therefore reducible, to a state space treatment (or any other simplified/highly idealized mathematical treatment, for that matter), whether for the ontology of the wavefunction of QM or for the ontology of some (theoretical) object in general.

To say that having an ontology of psi is equivalent to a state space treatment is to say that no other mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.

This is a hypothesis which is easily falsified, namely by constructing another mathematical formulation based on a completely different conceptual basis which can also capture the ontology of psi.

Perhaps this would end up being completely equivalent to the state space formulation, but that would have to be demonstrated. Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.

To give another example by analogy: Newtonian mechanics clearly isn't the only possible formulation of mechanics, despite what hundreds/thousands of physicists and philosophers working in the foundations of physics argued for centuries, and regardless of the fact that reformulations such as the Hamiltonian/Lagrangian ones are fully equivalent to it while sounding conceptually completely different.
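
(For concreteness - standard textbook material, included only to flesh out the analogy: with ##L = \tfrac{1}{2}m\dot q^{\,2} - V(q)## the Euler-Lagrange equation
$$\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0$$
reduces to Newton's ##m\ddot q = -V'(q)##, even though a variational principle sounds conceptually nothing like forces acting on bodies.)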
 
Last edited:
  • #69
Auto-Didact said:
To say that having an ontology of psi is equivalent to a state space treatment is to say that no other mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.
##\psi## is a solution to the Schrödinger equation and it has a state space, Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?

Auto-Didact said:
Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.
This might help, can you give an example?
 
  • #70
DarMM said:
##\psi## is a solution to the Schrödinger equation and it has a state space, Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?
Of course, I am not saying that it doesn't have a state space formulation, but rather that such a formulation need not capture all the intricacies of a possible more complete version of QM, or of a theory beyond QM, wherein ##\psi## is taken to be ontological. To avoid misunderstanding: by a 'state space formulation of the ontology of ##\psi##' I am referring very particularly to this:
DarMM said:
Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
DarMM said:
This might help, can you give an example?
Some (if not all) wavefunction collapse schemes, whether or not they are supplemented with a dynamical model characterizing the collapse mechanism. The proper combination of such a scheme and a model can produce a theory beyond QM wherein ##\psi## is ontological.
 
