Quantization isn't fundamental

As an illustration that it means initial-condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a condition on the initial conditions. This is included in the type of fine-tuning discussed in the paper.
Ahh, now I see, you were referring to initial-condition fine-tuning all along! We are in far more agreement than it seemed from the earlier discussion. The controversial nature of initial-condition fine-tuning again depends on the formulation of the theory; the question - just as with parameter fine-tuning - is whether the initial conditions are determined by a dynamical process or are simply due to randomness, which again raises issues of (un)naturalness; this is actually a genuine open question at the moment.
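For readers unfamiliar with the term: the quantum equilibrium hypothesis referenced above is the assumption that the Bohmian configuration is Born-distributed at some initial time,
$$\rho(q,t_0) = |\psi(q,t_0)|^2 ,$$
which the guidance equation then preserves (equivariance), so that ##\rho = |\psi|^2## holds at all later times; any other choice of initial ##\rho## is precisely the kind of special initial condition being discussed.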

Having said that, the initial conditions in question, i.e. the initial conditions of our universe, are precisely an area where QM is expected to break down and where some deeper theory like quantum gravity seems to be necessary in order to make more definitive statements. The space of degrees of freedom allowed by standard QM - standard QM being time-symmetric - is far, far larger than what we actually seem to see. In particular, from CMB measurements - the spectrum being a blackbody radiation curve - we can conclude that there was a state of maximum entropy and that it was therefore random; but more important to note is that there seem to have been no active gravitational degrees of freedom!

We can infer this from the entropy content of the CMB. We can therefore conclude that in our own universe the initial conditions were in fact extremely fine-tuned compared to what standard QM (due to its time-symmetry) would have allowed us to ascribe to maximum entropy, i.e. to randomness; this huge difference is due to there being no active gravitational degrees of freedom, i.e. a vanishing Weyl curvature. The question then is: what was the cause of there being no active gravitational degrees of freedom during the Big Bang?
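To put rough numbers on this (Penrose's well-known order-of-magnitude estimate, quoted here from memory, so take the exponents as indicative): the entropy in the CMB is of order ##10^{88}\,k_B## per present Hubble volume, while the maximum entropy available to the same matter content - putting it all into a single black hole and using the Bekenstein-Hawking formula - is of order ##10^{123}\,k_B##. The fraction of phase space compatible with our actual initial state is then roughly
$$\frac{e^{10^{88}}}{e^{10^{123}}} \approx 10^{-10^{123}} ,$$
which is the sense in which a vanishing initial Weyl curvature amounts to an extraordinary fine-tuning.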
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
This doesn't affect your argument, but just to let you know: it isn't Valentini's hypothesis, it goes back to Bohm; without it Bohmian Mechanics doesn't replicate QM.
Thanks for the notice!
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.
Okay, fair enough.
Well it's not so much showing that they result from fine-tuning as proving that they require fine-tuning. Also, this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.
I know that this isn't hep-th, I'm just presuming that the anti-'fine-tuning stance' probably originated there and then spilled over from physicists who perhaps began working in hep-th (or were influenced there during training) and then ended up working in quantum foundations.
My apologies, you clearly are conducting this in good faith, my fault there. :smile:
:)
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?
...
If by the "general case" you mean "all possible physical theories", neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological models framework or an extension thereof. So something can evade the no-go results by moving outside that framework. However, if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the ontological models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
To avoid misunderstanding, restated: the premises and assumptions which go into proving this theorem (and most such no-go theorems) are not general enough to yield a theorem which is always true in physics regardless of context; an example of a theorem which is always true in physics regardless of context is the work-energy theorem. "The general case" does not refer precisely to all possible physical theories (since this would also include blatantly false theories), but rather to all physical theories that can be consistent with experiment.

But as I have said above, Spekkens' definition of psi-ontology is an incorrect technical simplification. I can see where his definition is coming from but it seems to me to clearly be a problem of operationalizing a difficult concept into a technical definition, which doesn't fully capture the concept but only a small subset of instantiations of said concept, and then prematurely concluding that it does. All of this is done just in order to make concrete statements; this problem, i.e. a premature operationalization, arises when it is assumed that the operationalization is comprehensive and therefore definitive - instead of tentative i.e. a hypothesis.

These kinds of premature operationalizations of difficult concepts are rife in all of the sciences; recall the conceptual viewpoint of what was held to be necessarily, absolutely true in geometry prior to Gauss and Lobachevsky. Von Neumann's proof against hidden variable theories is another such example of premature operationalization which turned out to be false in practice, as shown by Bell. Here is another example, by Colbeck and Renner, which is empirically blatantly false, because there actually are theories which are extensions of QM with different predictions, e.g. with standard QM as a limiting case in the limit ##m \ll m_{\mathrm{Planck}}##; such theories can be vindicated by experiment and the issue is therefore an open question.

I do understand why physicists would (prematurely) operationalize a concept into a technical definition; hell, I do it myself all the time. This is, after all, how progress in science is made. However, here it seems that physics has much to learn from the other sciences, namely that such operationalizations are almost always insufficient or inadequate to characterize some phenomenon or concept in full generality; this is why most sciences couch such statements in doubt and say (almost like clockwork) that more research is needed to settle the matter.

With physics, however, we often see instead the offering of a kind of (false) certainty. For example, we saw this with Newton w.r.t. absolute space and time, we saw it with von Neumann w.r.t. hidden variables, and we see it with Colbeck and Renner above. I suspect that this is due to the nature of operationalizations in physics, i.e. their use of (advanced) mathematics. Here again physicists could learn from philosophy, namely that mathematics - exactly like logic (which philosophers of course absolutely adore) - can be, due to its extremely broad applicability and assumed trustworthiness, a blatant source of deception; this occurs through idealization, simplification and, worst of all, by hiding subjectivities behind the mathematics within the very axioms. All of this needs to be controlled for as a factor of cognitive bias on the part of the theorist.

I should also state that these matters do not apply generally to the regular mathematics of physics - i.e. analysis, differential equations, geometry and so on - because the normal practice of physics, i.e. making predictions and doing experiments, does not involve making formal mathematical arguments using proof and axiomatic reasoning; almost all physicists working in the field should be able to attest to this. This is why most physicists and applied mathematicians tend to be relatively bad at axiomatic reasoning, while formal mathematicians, logicians and philosophers excel at this type of reasoning while simultaneously being relatively bad at regular 'physical' reasoning.
 

DarMM

I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
To structure the response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.

Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?

I'm just presuming that the anti-'fine-tuning stance' probably originated there and then spilled over from physicists who perhaps began working in hep-th (or were influenced there during training) and then ended up working in quantum foundations.
Unlikely, there aren't many of them. Plus it isn't an anti-fine-tuning stance, it's just saying the fine-tuning is present. Many simply accept the fine-tuning.
 

Fra

A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.
...
Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed you could see the no-go results as suggesting moving to a non-Realist interpretation/model, it's not meant to also argue against them.
...
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.
I see, then our disagreements here are mainly a matter of the definition of "ontology for QM". My reaction was against the impression I got somewhere earlier in the thread that Bell's theorem was supposed to be a sweeping argument against the explanatory value of understanding particles as self-organised systems in a chaotic setting. I think that is wrong and misguided, and risks dumbing down ideas which may turn out to be interesting. I was assuming we were talking about an ontological understanding of QM in general, not the narrowed-down version of realist models. IMO ontology is not quite the same as classical realism?

/Fredrik
 
To structure the response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence. This is directly opposed to psi-epistemic which simply means treating the wavefunction as an epistemological object, i.e. as a matter of knowledge.

Spekkens may have popularized the usage of these terms in foundations based on his specific operationalization, but he certainly did not invent these terms (perhaps only the shorthand 'psi-ontic/epistemic' opposed to 'psi is ontological/epistemological').

These terms have been used in the foundations literature since Bohr, Einstein, Heisenberg et al., and they have of course already been standard terminology in philosophy (metaphysics) for millennia.
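For reference, the technical distinction as it is usually stated in the foundations literature (following Harrigan and Spekkens) is this: an ontological model assigns to each quantum state ##\psi## a probability distribution ##\mu_\psi(\lambda)## over ontic states ##\lambda##; the model is called ##\psi##-ontic if distinct quantum states always yield non-overlapping distributions, so that the ontic state fixes ##\psi##, and ##\psi##-epistemic otherwise:
$$\psi_1 \neq \psi_2 \;\Longrightarrow\; \mu_{\psi_1}(\lambda)\,\mu_{\psi_2}(\lambda) = 0 \;\text{ for (almost) all } \lambda \qquad (\psi\text{-ontic}).$$
Whether this operationalization captures everything one could mean by "##\psi## is ontological" is exactly what is being disputed below.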
Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?
Yes, basically. I apologize for my somewhat digressive form of writing; I'm speaking not just to you, but to everyone who may be reading (including future readers!).
 

Fra

Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
I wouldn't want to be so harsh as to claim Spekkens "misunderstood" anything, but I get your point, and incidentally the simplification is also a source of power. After all, it's hard to make computations on concepts until there is a mathematical model for them.

This also reminds me of one of Smolin's notes on Wigner's query about the unreasonable effectiveness of mathematics.

"The view I will propose answers Wigner’s query about the ”unreasonable effectiveness of mathematics in physics” by showing that the role of mathematics within physics is reasonable, because it is limited."
-- L.Smolin, https://arxiv.org/pdf/1506.03733.pdf

This is in fact related to how I see deductive logic as emergent from general inference such as induction and abduction, via compressed sensing. To be precise, you sometimes need to take the risk of being wrong, and not account for all the various subtle concerns that are under the FAPP radar.

/Fredrik
 

DarMM

Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence.
And what aspect of this does the ontological framework miss out on/misunderstand?
 

DarMM

I was assuming we were talking about an ontological understanding of QM in general, not the narrowed-down version of realist models
The no-go theorems refer to the latter. Self-organising chaotic models not relating to an underlying ontic space would not be covered.

IMO ontology is not quite the same as classical realism?
It's certainly not, but it is important to show that classical realism is heavily constrained by QM as many will reach for it, hence the ontological models framework.
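(For readers following along, a minimal sketch of the ontological models framework being referred to: a preparation ##P## is represented by a distribution ##\mu_P(\lambda)## over an ontic state space ##\Lambda##, a measurement ##M## by response functions ##\xi(k|\lambda,M)##, and the model must reproduce the quantum statistics,
$$P(k|P,M) = \int_\Lambda \xi(k|\lambda,M)\,\mu_P(\lambda)\,d\lambda = \mathrm{Tr}\left[E_k\,\rho_P\right] ,$$
with ##E_k## the POVM element for outcome ##k## and ##\rho_P## the prepared density matrix. The no-go theorems under discussion constrain which such models can reproduce QM, and at what cost in fine-tuning.)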
 
And what aspect of this does the ontological framework miss out on/misunderstand?
The assumption that ontology is fully equivalent, and therefore reducible, to a state space treatment (or any other simplified/highly idealized mathematical treatment, for that matter), whether that be the ontology of the wavefunction of QM or the ontology of some (theoretical) object in general.

To say that having an ontology of psi is equivalent to a state space treatment, is to say that no other possible mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.

This is a hypothesis which is easily falsified, namely by constructing another mathematical formulation based on a completely different conceptual basis which can also capture the ontology of psi.

Perhaps this would end up being completely equivalent to the state space formulation, but that would have to be demonstrated. Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.

To give another example by analogy: Newtonian mechanics clearly isn't the only possible formulation of mechanics despite what hundreds/thousands of physicists and philosophers working in the foundations of physics argued for centuries and regardless of the fact that reformulations such as the Hamiltonian/Lagrangian ones were fully equivalent to it while sounding conceptually completely different.
 

DarMM

To say that having an ontology of psi is equivalent to a state space treatment, is to say that no other possible mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.
##\psi## is a solution to the Schrödinger equation and it has a state space, the Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?

Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.
This might help, can you give an example?
 
##\psi## is a solution to the Schrödinger equation and it has a state space, the Hilbert space; what would it mean for a theory in which ##\psi## is a real object not to have a state space formulation?
Of course, I am not saying that it doesn't have a state space formulation, but rather that such a formulation need not capture all the intricacies of a possible more complete version of QM, or a theory beyond QM, wherein ##\psi## is taken to be ontological. To avoid misunderstanding: by a 'state space formulation of the ontology of ##\psi##' I am referring very particularly to this:
Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
This might help, can you give an example?
Some (if not all) wavefunction collapse schemes, whether or not they are supplemented with a dynamical model characterizing the collapse mechanism. The proper combination of such a scheme and a model can produce a theory beyond QM wherein ##\psi## is ontological.
 

DarMM

This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor being the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Things like GRW are covered within the general ontological models framework, unless you make other assumptions that exclude them (which some theorems do, but not all).

The ##\psi##-ontic model would have to break out of the framework to escape many results, by breaking some of the axioms, so called "exotic" ontic models. However even these (e.g. Many-Worlds) still have ##\Lambda = \mathcal{H} \times \mathcal{A}##. The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
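As a concrete illustration of that decomposition (a standard example from the literature, not specific to this thread): in Bohmian mechanics for ##N## (spinless) particles the ontic state is the pair of wavefunction and actual configuration,
$$\Lambda = \mathcal{H} \times \mathbb{R}^{3N}, \qquad \lambda = (\psi, Q),$$
with the preparation distribution concentrated on the prepared ##\psi## and distributing ##Q## as ##|\psi(Q)|^2## (quantum equilibrium again). GRW-type collapse models likewise have ##\Lambda = \mathcal{H}## (possibly supplemented by flash or mass-density variables), so both families sit inside the ##\psi##-ontic case.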
 

Fra

As we keep thinking differently, and this is a repeating theme in various disguises, I think it is worth noting that IMO there is a difference between uncertainty in the general case and what we call ignorance.

Both can be treated within a probability framework, but their origins and their logical properties, when we start to talk about conjunctions etc., are very different.

Uncertainty originating from non-commutative information
- This is the typical HUP uncertainty relation between conjugate variables. This uncertainty is not to be interpreted as "ignorance"; it is rather a structural constraint, and there is no "cure" for it by adding "missing information".
- One can, OTOH, ask WHY nature seems to "complicate matters" by encoding conjunctions of non-commutative information. My own explanatory model is that it is simply an evolutionarily selected form of compressed sensing, i.e. this "more complicated logic" is more efficient. [This is to be proven in context though.]

In a way, I reject the "ontic space model" on generic grounds, simply because it is, for the above reason, doomed to fail. I would even think it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems. That is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.

Uncertainty originating from incomplete information
Even though this is closer to "classical statistical" uncertainty, there is another twist here that makes things very interesting. It's one thing to think about "ignorance" in the sense of "could have known" but don't, because I was not informed or I "lost the information", etc.

But there can also be (this is a conjecture of mine) physical constraints in the observer's structure that fundamentally limit the AMOUNT of information it can encode. This is actually another "non-classical" uncertainty, in the sense that when considering models where the action DEPENDS on summing over probabilities, this actually changes the game! The "path integral", or whatever version we use, gets a self-imposed regularization that is associated with the observer's, say, mass or information capacity (the details here are an open question). This latter "uncertainty" is also the reason for the significance of compressed sensing.

So I would say there are at least THREE types of uncertainty here, and ALL three are IMO at play in a general model.

This kind of model is what I am personally working on, and it is obviously fundamental as it not only reconstructs spacetime, it reconstructs the mathematical inference logic for physics. It aims to explain the emergence of quantum logic, and also to understand how it incorporates gravity. But it does NOT aim to do so in terms of a simple ontic model that uses only one of the THREE types of uncertainty. This is why I keep calling it general inference, because it is a generalisation which goes beyond both Kolmogorov probability and quantum logic.

/Fredrik
 
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor being the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.

Being, as you say, in essence free from a natural conceptual formulation in terms of mathematics as some space would make this a very atypical foundational object outside of pure mathematics proper - i.e. an artificially constructed (a posteriori) mathematicized object completely based in axiomatics. This means the mathematical identification or construction of the object was purely a matter of being defined into existence by axiomatic reasoning instead of naturally discovered - and therefore almost certainly outside of physics proper.

Such 'artificial mathematical objects' are rife outside the exact sciences, e.g. defined operationalizations of phenomena which bear only a strained resemblance to the phenomena they are meant to reflect in the real world. Usually such objects are based on an extrapolation of some (statistical) data, i.e. a (premature) operationalization of a concept into a technical mathematical definition.

It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
 

DarMM

Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.

It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.

I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
There's no a priori reason to exclude them and I think this is where the point is being missed.

I think both you and @Fra are taking the ontological models framework as some kind of claim of a "final argument" or something. For example, thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that; it's simply a very general framework so one can analyse a broad class of theories, and nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena, and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
It would be like somebody setting out to see what constraints apply to discrete models of some theory and then objecting to their use of ##\mathbb{Z}^{d}##
And to come back to a point you made earlier:
In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM, on the face of it goes against accepted wisdom in contemporary physics, this in no way invalidates his argument, his argument is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) as effects of fine-tuning based arguments in the form of c).
(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.

This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics.
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
 

DarMM

In a way, I reject the "ontic space model" on generic grounds, simply because it is, for the above reason, doomed to fail. I would even think it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems. That is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.
Well, the ontological models framework shows certain such models have pathologies in a rigorous way. One can easily say something is "expected" according to one's personal intuition; it is quite another thing to actually prove it.
 

DarMM

All indications are that nonlocality is the first reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
I don't understand the link with virtual particles.
 
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - far too much for one to be able to make definitive axiomatic statements (theorems) without deluding oneself.
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.
The problem with reification of some mathematics as an actual physical concept, especially w.r.t. an overt generalization such as the measurable set - a mathematical construction, which just like ZFC and other standard foundations of mathematics, was purely constructed a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and so lost during this abstraction process; I am arguing that the ontology vanishes as well leaving one with nothing but epistemology.
There's no a priori reason to exclude them and I think this is where the point is being missed.
I understand that and I am glad that you realize that, however I'm not so sure other physicists who read and cite foundations literature do realize this as well. In my experience they tend to take statements - especially theorems - at face value as either non-empirical evidence or definitive mathematical proof; this goes for physicists at all levels, from (under)grad students up to professors.
I think both you and @Fra are taking the ontological models framework as some kind of claim of a "final argument" or something. For example, thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that; it's simply a very general framework so one can analyse a broad class of theories, and nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena, and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
It would be like somebody setting out to see what constraints apply to discrete models of some theory and then objecting to their use of ##\mathbb{Z}^{d}##
The goal of foundations is to provide exactly such definitive statements; the problem is that axiomatic statements such as the no-go theorems, and in fact axiomatic reasoning itself, have historically never belonged to the toolbox of the foundations of physics, but instead to the toolbox of mathematical physics. It is paramount to understand that axiomatics, being essentially a form of deductive logic, cannot go beyond what is defined. As Poincaré said:
Poincaré said:
We have confined ourselves to bringing together one or other of two purely conventional definitions, and we have verified their identity; nothing new has been learned. Verification differs from proof precisely because it is analytical, and because it leads to nothing. It leads to nothing because the conclusion is nothing but the premisses translated into another language. A real proof, on the other hand, is fruitful, because the conclusion is in a sense more general than the premisses.
Historically, the goal of foundations of physics has always been to challenge accepted concepts which are deemed fundamental, by looking for mathematical reformulations which enable a natural synthesis (NB: not natural in the sense of naturalness but in the classical sense, i.e. 'spontaneous' or the opposite of artificial) between conflicting concepts often directly paired with novel experimental predictions.

Once some theory becomes too entrenched or embedded, dogmatically taken as necessarily (absolutely) true above other theories, things start to go awry. As Poincaré brilliantly pointed out a century ago - and echoed by Feynman decades later - axiomatic reasoning, being purely deductive, cannot offer a resolution to foundational issues in physics, because physical theory is incomplete: only hypotheses checkable by experiment can go beyond what is already known.

Having no-go results of uncertain validity is therefore actually of very questionable utility in the field of foundations, especially given the danger of premature closure and thus the promotion of cognitive biases among theoreticians. The fact of the matter is that foundations is a small branch in the practice of physics; it is in everyone's interest to avoid it becoming little more than an echo chamber, which sadly is definitely a possibility, as we have seen in the practice of physics over the last century.
And to come back to a point you made earlier:

(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.
Apart from a), which I have elaborated on further, including in this very post, I agree you aren't doing b), c) and d). The problem is that those less familiar with the foundations of physics will almost certainly do b), c) and d) - especially if (self-proclaimed) experts openly do a), as they in fact regularly seem to have done ever since foundations started adopting axiomatics with von Neumann.
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.

W.r.t. QM foundations, I believe that the immediate focus should be theoretical i.e. a qualitative reformulation until conflicting concepts become consistent, leading to a resolution in which this new conceptual formulation can be restated using existing (but possibly non-standard) concrete mathematics, leading to experimental predictions; it is only after experimental verification that the mathematical physicists should try to find rigorous mathematical definitions.

Incidentally, Lucien Hardy essentially argues for this strategy for solving the problems in QM foundations as well, as seen in this thread, see my post there as well.
 

Fra

Well, the ontological models framework shows certain such models have pathologies in a rigorous way. One can easily say something is "expected" according to one's personal intuition; it is quite another thing to actually prove it.
Agreed. But I guess my point was that even despite proofs, people do not seem to stop looking for loopholes, and from this perspective I argue that there is an easier way to convince yourself against using the type of uncertainty implicit in "ignorance", as I defined it above, as a universal explanation.

/Fredrik
 
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well-behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with dis-continuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
 
