Quantization isn't fundamental

  • #61
DarMM said:
As an illustration that it means initial-condition fine-tuning, the quantum equilibrium hypothesis in Bohmian Mechanics consists of initial conditions. This is included in the type of fine-tuning discussed in the paper.
Ahh, now I see, you were referring to initial-condition fine-tuning all along! We are in far more agreement than it seemed from the earlier discussion. The controversial nature of initial-condition fine-tuning depends again on the formulation of the theory; the question is - just like with parameter fine-tuning - whether the initial conditions are determined by a dynamical process or are just due to randomness, which again raises issues of (un)naturalness; this is actually a genuine open question at the moment.

Having said that, the initial conditions in question, i.e. the initial conditions of our universe, are precisely an area where QM is expected to break down and where some deeper theory like quantum gravity seems to be necessary in order to make more definitive statements. The number of degrees of freedom predicted by standard QM - standard QM being time-symmetric - is far, far larger than what we seem to see in actuality. In particular, from CMB measurements - the CMB being a blackbody radiation curve - we can conclude that there was a state of maximum entropy and that it was therefore random; but more important to note is that there seem to have been no active gravitational degrees of freedom!

We can infer this from the entropy content of the CMB. Therefore we can conclude that in our own universe the initial conditions were in fact extremely fine-tuned compared to what standard QM (due to time-symmetry) would have us believe could be ascribed to maximum entropy, i.e. to randomness; this huge difference is due to there being no active gravitational degrees of freedom, i.e. a vanishing Weyl curvature. The question then is: what was the cause of there being no gravitational degrees of freedom active during the Big Bang?
DarMM said:
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
DarMM said:
This doesn't affect your argument, but just to let you know it isn't Valentini's hypothesis; it goes back to Bohm, and without it Bohmian Mechanics doesn't replicate QM.
Thanks for the notice!
DarMM said:
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.
Okay, fair enough.
DarMM said:
Well it's not so much that they result from fine-tuning, but proving that they require fine-tuning. Also this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.
I know that this isn't hep-th, I'm just presuming that the anti-'fine-tuning stance' probably originated there and then spilled over from physicists who perhaps began working in hep-th (or were influenced there during training) and then ended up working in quantum foundations.
DarMM said:
My apologies, you clearly are conducting this in good faith, my fault there. :smile:
:)
DarMM said:
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?
...
If by the "general case" you mean "all possible physical theories", neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological models framework or an extension thereof. So something can evade the no-go results by moving outside that framework. However, if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the ontological models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
To avoid misunderstanding, let me restate: the premises and assumptions which go into proving this theorem (and most such no-go theorems) are not general enough to prove a theorem which is always true in physics regardless of context; an example of a theorem which is always true in physics regardless of context is the work-energy theorem. "The general case" does not refer to all possible physical theories (since this would also include blatantly false theories), but rather to all physical theories that can be consistent with experiment.

But as I have said above, Spekkens' definition of psi-ontology is an incorrect technical simplification. I can see where his definition is coming from, but it seems to me to clearly be a case of operationalizing a difficult concept into a technical definition which doesn't fully capture the concept but only a small subset of its instantiations, and then prematurely concluding that it does. All of this is done just in order to make concrete statements; the problem, i.e. a premature operationalization, arises when it is assumed that the operationalization is comprehensive and therefore definitive, instead of tentative, i.e. a hypothesis.

These kinds of premature operationalizations of difficult concepts are rife in all of the sciences; recall the conceptual viewpoint of what was necessarily absolutely true in geometry prior to Gauss and Lobachevski. Von Neumann's proof against hidden variable theories is another such example of premature operationalization, which turned out to be false in practice as shown by Bell. Here is another example by Colbeck and Renner which is empirically blatantly false, because there actually are theories which are extensions of QM with different predictions, e.g. with standard QM as a limiting case in the limit ##m \ll m_{\mathrm{Planck}}##; such theories can be vindicated by experiment and the issue is therefore an open question.

I do understand why physicists would (prematurely) operationalize a concept into a technical definition, hell, I do it myself all the time; this is, after all, how progress in science is made. However, here it seems that physics has much to learn from the other sciences, namely that such operationalizations are almost always insufficient or inadequate to characterize some phenomenon or concept in full generality; this is why most sciences couch such statements in doubt and say (almost like clockwork) that more research is needed to settle the matter.

With physics however, we often see instead an offering of a kind of (false) certainty. For example, we saw this with Newton w.r.t. absolute space and time, we saw it with von Neumann w.r.t. hidden variables and we see it with Colbeck and Renner above. I suspect that this is due to the nature of operationalizations in physics, i.e. using (advanced) mathematics. Here again physicists could learn from philosophy, namely that mathematics - exactly like logic (which philosophers of course absolutely adore) - can be, due to its extremely high general applicability and assumed trustworthiness, a blatant source of deception; this occurs through idealization, simplification and, worst of all, by hiding subjectivities behind the mathematics within the very axioms. All of this needs to be controlled for as a factor of cognitive bias on the part of the theorist.

I should also state that these matters do not apply generally to the regular mathematics of physics - i.e. analysis, differential equations, geometry and so on - because the normal practice of physics, i.e. making predictions and doing experiments, doesn't concern the making of formal mathematical arguments utilizing proof and axiomatic reasoning; almost all physicists working in the field should be able to attest to this. This is why most physicists and applied mathematicians tend to be relatively bad at axiomatic reasoning, while formal mathematicians, logicians and philosophers excel at this type of reasoning while being simultaneously relatively bad at regular 'physical' reasoning.
 
  • Like
Likes Fra and Buzz Bloom
  • #62
Auto-Didact said:
I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
To structure the response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.

Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?

Auto-Didact said:
I'm just presuming that the anti-'fine-tuning stance' probably originated there and then spilled over from physicists who perhaps began working in hep-th (or were influenced there during training) and then ended up working in quantum foundations.
Unlikely, there aren't many. Plus it isn't anti-fine-tuning, it's just saying it is present. Many simply accept the fine-tuning.
 
  • #63
DarMM said:
A minor technical point, I would say "this is what is used in the framework", i.e. the Ontic models framework in general and all proofs that take place in it, not just Bell's theorem.
...
Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed you could see the no-go results as suggesting moving to a non-Realist interpretation/model, it's not meant to also argue against them.
...
I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

I see, then our disagreements here are mainly a matter of the definition of "ontology for QM". My reaction was against the impression I got somewhere earlier in the thread that Bell's theorem was supposed to be a sweeping argument against the explanatory value of understanding particles as self-organised systems in a chaotic setting. I think that is wrong and misguided, and risks dumbing down ideas which may turn out to be interesting. I was assuming we were talking about ontological understanding of QM in general, not the narrowed-down version of realist models. IMO ontology is not quite the same as classical realism?

/Fredrik
 
  • Like
Likes DarMM
  • #64
DarMM said:
To structure the response to your post in general, could I just ask what you mean by saying Spekkens has misunderstood what ##\psi##-ontic means? That definition is the one used in Quantum Foundations, so I want to understand how it is wrong. It would really surprise me, as Spekkens invented the term.
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence. This is directly opposed to psi-epistemic which simply means treating the wavefunction as an epistemological object, i.e. as a matter of knowledge.

Spekkens may have popularized the usage of these terms in foundations based on his specific operationalization, but he certainly did not invent them (perhaps only the shorthand 'psi-ontic/epistemic' as opposed to 'psi is ontological/epistemological').

These terms have been used in the foundations literature since Bohr, Einstein, Heisenberg et al., and they have of course already been standard terminology in philosophy (metaphysics) for millennia.
DarMM said:
Currently I'm trying to understand the rest of your post, you are saying the framework has limits/doesn't apply in general, am I right? Isn't that what I said, i.e. the framework has specific rules and you can ignore the no-go theorems if you reject the rules?
Yes, basically. I apologize for my somewhat digressive form of writing; I'm speaking not just to you, but to everyone who may be reading (including future readers!).
 
  • Like
Likes DarMM
  • #65
Auto-Didact said:
Spekkens has conceptually misunderstood what psi-ontological means and therefore constructed a simplified technical model of it; his state space formulation does not nearly exhaust all possible psi-ontological models but only a small subset of them.
I wouldn't want to be so harsh as to claim Spekkens "misunderstood" anything, but I get your point, and incidentally the simplification is also a source of power. After all, it's hard to make computations on concepts until there is a mathematical model for them.

This also reminds me of one of Smolin's notes on Wigner's query about the unreasonable effectiveness of mathematics.

"The view I will propose answers Wigner’s query about the ”unreasonable effectiveness of mathematics in physics” by showing that the role of mathematics within physics is reasonable, because it is limited."
-- L.Smolin, https://arxiv.org/pdf/1506.03733.pdf

This is in fact related to how I see deductive logic as emergent from general inference such as induction and abduction, via compressed sensing. To be precise, you sometimes need to take the risk of being wrong, and not account for all the various subtle concerns that are under the FAPP radar.

/Fredrik
 
  • Like
Likes Auto-Didact
  • #66
Auto-Didact said:
Psi-ontic simply means treating the wavefunction as an ontological object, i.e. as a matter of existence.
And what aspect of this does the ontological framework miss out on/misunderstand?
 
  • #67
Fra said:
I was assuming we were talking about ontological understanding of QM in general, not the narrowed-down version of realist models
The no-go theorems refer to the latter. Self-organising chaotic models not relating to an underlying ontic space would not be covered.

Fra said:
IMO ontology is not quite the same as classical realism?
It's certainly not, but it is important to show that classical realism is heavily constrained by QM as many will reach for it, hence the ontological models framework.
 
  • #68
DarMM said:
And what aspect of this does the ontological framework miss out on/misunderstand?
The assumption that ontology is fully equivalent, and therefore reducible, to a state space treatment (or any other simplified/highly idealized mathematical treatment for that matter), whether that be the ontology of the wavefunction of QM or the ontology of some (theoretical) object in general.

To say that having an ontology of psi is equivalent to a state space treatment, is to say that no other possible mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.

This is a hypothesis which is easily falsified, namely by constructing another mathematical formulation based on a completely different conceptual basis which can also capture the ontology of psi.

Perhaps this would end up being completely equivalent to the state space formulation, but that would have to be demonstrated. Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.

To give another example by analogy: Newtonian mechanics clearly isn't the only possible formulation of mechanics despite what hundreds/thousands of physicists and philosophers working in the foundations of physics argued for centuries and regardless of the fact that reformulations such as the Hamiltonian/Lagrangian ones were fully equivalent to it while sounding conceptually completely different.
 
  • #69
Auto-Didact said:
To say that having an ontology of psi is equivalent to a state space treatment, is to say that no other possible mathematical formulation of an ontology of psi is possible which isn't equivalent to the state space one.
##\psi## is a solution to the Schrodinger equation and it has a state space, Hilbert space, what would it mean for a theory in which ##\psi## is a real object for it not to have a state space formulation?

Auto-Didact said:
Moreover, there already seem to be models which treat psi as ontological and which aren't equivalent to the state space formulation, implying the hypothesis has already been falsified.
This might help, can you give an example?
 
  • #70
DarMM said:
##\psi## is a solution to the Schrodinger equation and it has a state space, Hilbert space, what would it mean for a theory in which ##\psi## is a real object for it not to have a state space formulation?
Of course, I am not saying that it doesn't have a state space formulation, but rather that such a formulation need not capture all the intricacies of a possible more complete version of QM, or a theory beyond QM, wherein ##\psi## is taken to be ontological. To avoid misunderstanding: by a 'state space formulation of the ontology of ##\psi##' I am referring very particularly to this:
DarMM said:
Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
DarMM said:
This might help, can you give an example?
Some (if not all) wavefunction collapse schemes, whether or not they are supplemented with a dynamical model characterizing the collapse mechanism. The proper combination of such a scheme and a model can produce a theory beyond QM wherein ##\psi## is ontological.
 
  • #72
Auto-Didact said:
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
A measurable set, to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one element the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Things like GRW are covered within the general ontological models framework, unless you make other assumptions that exclude them (which some theorems do, but not all).

The ##\psi##-ontic model would have to break out of the framework to escape many results, by breaking some of the axioms - the so-called "exotic" ontic models. However even these (e.g. Many-Worlds) still have ##\Lambda = \mathcal{H} \times \mathcal{A}##. The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
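To make the structure above concrete, here is a minimal toy sketch (Python; the finite spaces, the specific distributions and all names are my own illustrative assumptions, not anything taken from Spekkens' papers) of what the framework tracks: an ontic state space ##\Lambda##, a distribution ##\mu(\lambda|P)## for each preparation, and the ##\psi##-ontic case as ##\Lambda = \mathcal{H} \times \mathcal{A}##, in which distributions for distinct quantum states cannot overlap, versus a ##\psi##-epistemic toy in which they can:

```python
# Minimal toy sketch of the ontological models framework (finite/discretized
# version; all spaces and numbers here are illustrative assumptions only).
#
# An ontological model supplies an ontic state space Lambda, a distribution
# mu(lambda | P) for each preparation P, and response functions xi(k | lambda, M)
# for each measurement M, reproducing Pr(k | P, M) = sum_lambda xi * mu.

# --- psi-ontic case: Lambda = H x A ---------------------------------------
# Every ontic state is a pair (psi, a); the quantum state is a function of the
# ontic state, so distributions for distinct psi's cannot overlap.
psis = ["psi_0", "psi_1"]              # stand-ins for distinct quantum states
aux = ["a_0", "a_1", "a_2"]            # some extra hidden-variable space A
Lambda_ontic = [(p, a) for p in psis for a in aux]

def mu_ontic(preparation):
    """Toy choice: uniform over {preparation} x A."""
    return {lam: (1.0 / len(aux) if lam[0] == preparation else 0.0)
            for lam in Lambda_ontic}

overlap_ontic = sum(min(mu_ontic("psi_0")[lam], mu_ontic("psi_1")[lam])
                    for lam in Lambda_ontic)
print("psi-ontic overlap:", overlap_ontic)        # 0.0: the ontic state fixes psi

# --- psi-epistemic case -----------------------------------------------------
# Lambda does not factor as H x A: one and the same ontic state (l_1 below)
# is compatible with two different preparations, so the distributions overlap.
Lambda_epi = ["l_0", "l_1", "l_2"]
mu_epi = {"psi_0": {"l_0": 0.5, "l_1": 0.5, "l_2": 0.0},
          "psi_1": {"l_0": 0.0, "l_1": 0.5, "l_2": 0.5}}
overlap_epi = sum(min(mu_epi["psi_0"][lam], mu_epi["psi_1"][lam])
                  for lam in Lambda_epi)
print("psi-epistemic overlap:", overlap_epi)      # 0.5: the ontic state does not fix psi
```

Note that nothing in this sketch needs ##\Lambda## to be more than a set one can sum (or, in the continuum, integrate) over, which is the sense in which only measurability is required.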
 
  • #73
As we keep thinking differently, and this is a repeating theme in various disguises, I think it is worth noting that IMO there is a difference between uncertainty in the general case and what we call ignorance.

Both can be treated within a probability framework, but their origin and logical properties when we start to talk about conjunctions etc, are very different.

Uncertainty originating from non-commutative information:
- This is the typical HUP uncertainty relation between conjugate variables. This uncertainty is not to be interpreted as "ignorance"; it is rather a structural constraint, and there is no "cure" for it by adding "missing information".
- One can, OTOH, ask WHY nature seems to "complicate matters" by encoding conjunctions of non-commutative information. My own explanatory model is that it is simply an evolutionarily selected form of compressed sensing, i.e. this "more complicated logic" is more efficient. [This is to be proven in context though.]

In a way, I reject the "ontic space model" on generic grounds, simply because it is for the above reason doomed to fail. I would even think it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems. That is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.

Uncertainty originating from incomplete information:
Even though this is closer to "classical statistical" uncertainty, there is another twist here that makes things very interesting. It's one thing to think about "ignorance" in the sense of "could have known" but don't, because I was not informed, or I "lost the information", etc.

But there can also be (this is a conjecture of mine) physical constraints in the observer's structure that fundamentally limit the AMOUNT of information it can encode. This is actually another "non-classical" uncertainty in the sense that, when considering models where the action DEPENDS on summing over probabilities, this actually changes the game! Because the "path integral", or whatever version we use, gets a self-imposed regularization that is associated with the observer's, say, mass or information capacity (details here are an open question). This latter "uncertainty" is also the reason for the significance of compressed sensing.

So I would say there are at least THREE types of uncertainty here, and ALL three are IMO at play in a general model.

This kind of model is what I am personally working on, and this is obviously fundamental as it not only reconstructs spacetime, it reconstructs the mathematical inference logic for physics. It aims to explain the emergence of quantum logic, and also to understand how it incorporates gravity. But it does NOT aim to do so in terms of a simple ontic model that uses only one of the THREE types of uncertainty. This is why I keep calling it general inference, because it is a generalisation which goes beyond both Kolmogorov probability and quantum logic.

/Fredrik
 
  • #74
DarMM said:
A measurable set, to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one element the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.

Being, as you say, in essence free from a natural conceptual formulation in terms of mathematics as some space would make this a very atypical foundational object outside of pure mathematics proper - i.e. an artificially constructed (a posteriori) mathematicized object completely based in axiomatics. This means the mathematical identification or construction of the object was purely a matter of being defined into existence by axiomatic reasoning instead of naturally discovered - and therefore almost certainly outside of physics proper.

Such 'artificial mathematical objects' are rife outside the exact sciences, e.g. defined operationalizations of phenomena which only strenuously bear any resemblance to the phenomena they are meant to reflect in the real world. Usually such objects are based on an extrapolation of some (statistical) data, i.e. a (premature) operationalization of a concept into a technical mathematical definition.

It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
DarMM said:
The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
 
  • #75
Auto-Didact said:
Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.

Auto-Didact said:
It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.

Auto-Didact said:
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
There's no a priori reason to exclude them and I think this is where the point is being missed.

I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
DarMM said:
It would be like somebody setting out to see what constraints apply to discrete models of some theory and then objecting to their use of ##\mathbb{Z}^{d}##

And to come back to a point you made earlier:
Auto-Didact said:
In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM, on the face of it goes against accepted wisdom in contemporary physics, this in no way invalidates his argument, his argument is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) as effects of fine-tuning based arguments in the form of c).
(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.

Auto-Didact said:
This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics.
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
 
  • #76
Fra said:
In a way, I reject the "ontic space model" on generic grounds, simply because it is for the above reason doomed to fail. I would even think it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems. That is, give or take nonlocality or superdeterminism etc., I find it pathological to start with.
Well, the ontological models framework shows certain such models have pathologies in a rigorous way; it is one thing to say something is "expected" according to your personal intuition, quite another to actually prove it.
 
  • Like
Likes Fra
  • #77
ftr said:
All indications are that nonlocality is the first reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
I don't understand the link with virtual particles.
 
  • #78
DarMM said:
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - far too much for one to make definitive axiomatic statements (theorems) without deluding oneself.
DarMM said:
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.
The problem with reification of some mathematics as an actual physical concept, especially w.r.t. an overt generalization such as the measurable set - a mathematical construction, which just like ZFC and other standard foundations of mathematics, was purely constructed a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and so lost during this abstraction process; I am arguing that the ontology vanishes as well leaving one with nothing but epistemology.
DarMM said:
There's no a priori reason to exclude them and I think this is where the point is being missed.
I understand that and I am glad that you realize that, however I'm not so sure other physicists who read and cite foundations literature do realize this as well. In my experience they tend to take statements - especially theorems - at face value as either non-empirical evidence or definitive mathematical proof; this goes for physicists at all levels, from (under)grad students up to professors.
DarMM said:
I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
The goal of foundations is to provide exactly such definitive statements; the problem is that axiomatic statements such as the no-go theorems, and in fact axiomatic reasoning itself, have historically never belonged to the toolbox of foundations of physics, but instead to the toolbox of mathematical physics. It is paramount to understand that axiomatics, being essentially a form of deductive logic, cannot go beyond what is defined. As Poincaré said:
Poincaré said:
We have confined ourselves to bringing together one or other of two purely conventional definitions, and we have verified their identity; nothing new has been learned. Verification differs from proof precisely because it is analytical, and because it leads to nothing. It leads to nothing because the conclusion is nothing but the premisses translated into another language. A real proof, on the other hand, is fruitful, because the conclusion is in a sense more general than the premisses.
Historically, the goal of foundations of physics has always been to challenge accepted concepts which are deemed fundamental, by looking for mathematical reformulations which enable a natural synthesis (NB: not natural in the sense of naturalness but in the classical sense, i.e. 'spontaneous' or the opposite of artificial) between conflicting concepts often directly paired with novel experimental predictions.

Once some theory becomes too entrenched or embedded, dogmatically taken as necessarily (absolutely) true above other theories, things start to go awry. As Poincaré brilliantly pointed out a century ago - and echoed by Feynman decades later - axiomatic reasoning, being purely deductive, cannot offer a resolution to foundational issues in physics, because physical theory is incomplete: only hypotheses checkable by experiment can go beyond what is already known.

Having no-go results of uncertain validity is therefore actually of very questionable utility in the field of foundations, especially given the danger of premature closure and therefore the promotion of cognitive biases among theoreticians. The fact of the matter is that foundations is a small branch in the practice of physics; everyone benefits from preventing it becoming little more than an echo chamber, which sadly is definitely a possibility, as we have seen in the practice of physics over the last century.
DarMM said:
And to come back to a point you made earlier:

(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.
Apart from a), which I have elaborated upon further, including in this very post, I agree you aren't doing b), c) and d). The problem is that those less familiar with foundations of physics will almost certainly do b), c) and d) - especially if (self-proclaimed) experts openly do a), as they in fact regularly seem to have done since foundations started adopting axiomatics with von Neumann.
DarMM said:
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.

W.r.t. QM foundations, I believe that the immediate focus should be theoretical i.e. a qualitative reformulation until conflicting concepts become consistent, leading to a resolution in which this new conceptual formulation can be restated using existing (but possibly non-standard) concrete mathematics, leading to experimental predictions; it is only after experimental verification that the mathematical physicists should try to find rigorous mathematical definitions.

Incidentally, Lucien Hardy essentially argues for this strategy for solving the problems in QM foundations as well, as seen in this thread, see my post there as well.
 
  • #79
DarMM said:
Well, the ontological models framework shows certain such models have pathologies in a rigorous way; it is one thing to say something is "expected" according to your personal intuition, quite another to actually prove it.
Agreed. But I guess my point was that, even despite proofs, it does not seem to prevent people from looking for loopholes, and in this perspective I argue that there is an easier way to argue with yourself against using the type of uncertainty implicit in "ignorance", as I defined it above, as a universal explanation.

/Fredrik
 
  • #80
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with dis-continuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
 
  • #81
Jimster41 said:
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with dis-continuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
Optimistically, multifractal analysis might already be a sufficient tool, but that is just grasping in the dark.

If I remember correctly though, Nottale has a theory called scale relativity (or something like that) which sounds somewhat similar to the "all structure is emergent" idea.

Moreover, I would naively presume that something simple but non-standard like fractional calculus or multiplicative calculus might be a useful alternative form of calculus, enlightening w.r.t. naturally capturing or identifying the correct physical quantities or equations involved in such a framework. Otherwise, perhaps more advanced algebraic-geometric or holomorphic notions would be necessary.
 
  • Like
Likes Jimster41
  • #82
The question of what mathematics will be required is indeed an interesting one. It is a paramount question in my perspective as well, as one of the key ingredients in the quest for a physical inference theory - that is, to generalise probability - is to characterise a MEASURE that is intrinsically constructable by a physical observer.

An inference, as in reasoning with uncertainty, needs a measure to quantify the confidence in certain things, as it conceptually boils down to how to COUNT evidence in a rational way. One problem in most axiomatic constructions of probability theory is that one introduces uncountable numbers without justification. Does an arbitrary observer have access to infinite bit counters? The real justification is limits, but if you consider physical processes to be like computations, these limits are never actually reached, and pathologies in the theories arise when you assume that limits are manifested in observer states. What happens is that you lose track of limit procedures. I think careful compliance with intrinsic measures will make convergences manifest. Divergences in theories are a symptom of abuse of mathematics, mixing up "mathematical possibilities" with actual possibilities in the inference and placing of bets. Even though you "can" fix it, it shouldn't have to arise in the first place.

So what I am saying is that I think smooth mathematics might approximate reality, not the other way around. Reconstructing quantum theory IMO unavoidably goes hand in hand with reconstructing the measure mathematics for counting and "summing", i.e. what ends up as calculus in the continuum limit; but things are more complicated here, because the actual LIMIT may not be physical at all! My hunch is that it is definitely not.

/Fredrik
 
  • Like
Likes Jimster41
  • #83
Auto-Didact said:
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - far too much for one to make definitive axiomatic statements (theorems) without deluding oneself.
Well, it eliminates theories that can't work; since many people thought to suggest and build models that align with the theorem, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones, and the class eliminated is quite broad, I think it's useful; it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.

Auto-Didact said:
The problem with reification of some mathematics as an actual physical concept, especially w.r.t. an overt generalization such as the measurable set - a mathematical construction, which just like ZFC and other standard foundations of mathematics, was purely constructed a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and so lost during this abstraction process; I am arguing that the ontology vanishes as well leaving one with nothing but epistemology.
Doesn't this just apply to any kind of mathematical research though. I still don't see why something "supporting integration" is epistemological, it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states, just that the state space can be integrated over, but it doesn't make it an epistemic object, otherwise the manifold in GR is an epistemic object.

Auto-Didact said:
The goal of foundations is to provide exactly such definitive statements...Historically, the goal of foundations of physics has always been to challenge accepted concepts
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.

Auto-Didact said:
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.

I understand if you simply see this sort of axiomatic investigation as not the optimal strategy or unlikely to help with progress. However, at times you seem to be suggesting their conclusions are also incorrect, or even some of the definitions; this I don't really understand.
 
  • #84
Auto-Didact said:
The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems.

So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
 
  • Like
Likes akvadrako
  • #85
Paul Colby said:
So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
At this stage, not immediately having PDE's or other equations isn't an issue whatsoever: one of the most successful scientific theories ever, evolution through natural selection, was not formulated using any mathematics at all, yet the predictions were very clear once conceptually grasped, but I digress.

To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

Manasson's deeper perspective predicts, among many other things, as actual physical phenomena both:
1) a dynamical mechanism underlying renormalization capable of explaining all possible bare and dressed values of particles
2) the quantized nature of objects in QM as a direct result of the underlying dynamics of particles themselves, instead of the quantized nature being a theoretically unexplained postulate.

Essentially, according to Manasson, there is a shift of particle physics foundations from QT to dynamical systems theory, with the mathematics and qualitative nature of QT resulting directly from the properties of a very special kind of dynamical system.
 
  • Like
Likes Jimster41
  • #86
Auto-Didact said:
To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined and a transformation, ##F(\psi_k)##, which is assumed to have frequency doubling property which people familiar with SOS likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word particles and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme would these simply be disjoint facts, one parameter per prediction?
 
  • #87
Paul Colby said:
This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined and a transformation, ##F(\psi_k)##, which is assumed to have frequency doubling property which people familiar with SOS likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word particles and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme would these simply be disjoint facts, one parameter per prediction?
Actually, section II starts off by considering the evolution into stable existence of an electron, i.e. a particle, in the following manner:
1) A negatively charged fluctuation of the vacuum occurs due to some perturbation.
2) The presence of the fluctuation causes a polarization of the vacuum.
3) This leads to positive and negative feedback loops in the interaction between vacuum and fluctuation, which together form the system.
4) Depending on the energy of the original perturbation, there are only two possible outcomes for this system: settling into thermodynamic equilibrium or bifurcation into a dynamically stable state.
5) Hypothesis: the electron is such a dynamically stable state.

In the above description there is only one characteristic relevant parameter for this system, namely charge (##q##). This can be reasoned as follows:

6) The described dynamics occur in a manifestly open system.
7) The stable states of this system are fractals, i.e. strange attractors, in the state space.
8) Therefore the full dynamics of the system is described by a nonlinear vector field ##\vec \psi## in an infinite dimensional state space.
9) Practically, this can be reduced to a low dimensional state space using a statistical mechanics or a hydrodynamics treatment.
10) This results in the state space of the system being described by just a few extensive variables, most importantly ##q##.

A simple dimensional analysis argument gives us a relationship between the action (##J##) and ##q##, i.e. ##J=\sqrt{\frac{\mu_0}{\epsilon_0}}\,q^2## (checked numerically in the sketch below). Carrying on:

11) Then take a Poincaré section through the attractor in the state space to generate the Poincaré map.
12) Parametrize this map using ##q## or ##J## and we have ourselves the needed recurrence map ##\psi_{J} = F(\psi_{J-1})##.
13) Given that the dynamics of this system is described by a strange attractor in state space this automatically ensures that the above map is a Feigenbaum map, displaying period doubling.
14) A period doubling is a phase transition of the attractor leading to a double loop attractor (a la Rössler).
15) The topology of this double loop attractor is the Möbius strip, with vectors inside this strip being spinors, i.e. this is also a first principles derivation of spinor theory.

A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research, and none of the steps taken above seems particularly controversial, either mathematically or physically.
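Since steps 11-14 lean entirely on period doubling, here is a small numerical illustration (Python). The logistic map below is only the textbook example of a period-doubling map, not Manasson's actual recurrence ##\psi_{J} = F(\psi_{J-1})##; the constants are CODATA values and the bifurcation parameters are standard literature values for the logistic map, so everything beyond the relation ##J=\sqrt{\mu_0/\epsilon_0}\,q^2## quoted above is my own illustrative choice:

```python
from math import sqrt

# --- numerical check of the dimensional-analysis relation J = sqrt(mu0/eps0) * q^2 ---
# (CODATA constants; identifying J with an action is the paper's reading)
mu0, eps0 = 1.25663706212e-6, 8.8541878128e-12
e, h, alpha = 1.602176634e-19, 6.62607015e-34, 7.2973525693e-3
J = sqrt(mu0 / eps0) * e**2
print(f"J = {J:.3e} J*s,  2*alpha*h = {2 * alpha * h:.3e} J*s")   # the two agree

# --- period doubling in the canonical Feigenbaum map x -> r*x*(1 - x) ----------------
def attractor(r, n_transient=10_000, n_keep=64):
    """Distinct long-run values (rounded) of the logistic map at parameter r."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

for r in (2.8, 3.2, 3.5, 3.55):
    cycle = attractor(r)
    print(f"r = {r}: period-{len(cycle)} cycle {cycle}")

# The parameters r_n at which the period doubles accumulate geometrically; the ratio
# of successive spacings tends to the Feigenbaum constant delta ~ 4.669.
# (r_n below are standard literature values for the logistic map.)
r_bif = [3.0, 3.449490, 3.544090, 3.564407]
ratios = [(r_bif[i + 1] - r_bif[i]) / (r_bif[i + 2] - r_bif[i + 1]) for i in range(2)]
print("successive Feigenbaum delta estimates:", [round(d, 3) for d in ratios])
```

The spacing ratios converge only slowly toward ##\delta \approx 4.669## at this low order, but the qualitative point is the one steps 13-14 appeal to: any map in this universality class doubles its period in the same geometric fashion.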
 
  • #88
Auto-Didact said:
A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research, and none of the steps taken above seems particularly controversial, either mathematically or physically.

So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
 
  • #89
DarMM said:
Well, it eliminates theories that can't work; since many people thought to suggest and build models that align with the theorem, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones, and the class eliminated is quite broad, I think it's useful; it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.
Can't work given certain assumptions, including the full validity of axioms of QM beyond what has been experimentally demonstrated; if QM is shown to be a limiting theory, many utilizations of the theorems to test hypotheses will be rendered invalid.
DarMM said:
Doesn't this just apply to any kind of mathematical research though. I still don't see why something "supporting integration" is epistemological, it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states, just that the state space can be integrated over, but it doesn't make it an epistemic object, otherwise the manifold in GR is an epistemic object.
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is as empty a statement as 'numbers are used in physics'.

If you would consider that the manifold in GR is just a measurable set, not necessarily pseudo-Riemannian nor differentiable, you would actually lose all the physics of GR including diffeomorphism invariance: it would transform the manifold into exactly an epistemological object! Both statistics and information geometry have such manifolds which are purely epistemic objects. The point is that you would not be doing physics anymore but secretly slipped into doing mathematics.
DarMM said:
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.
It eliminates lines of reasoning, yes; however, it may introduce lines of reasoning falsely, as described above. Every QM foundations paper using or suggesting that no-go theorems can effectively be used as statistical tests to make conclusive statements about different physical hypotheses needs to correct for the non-ideal nature of the test, i.e. report its accuracy; this is an empirical matter, not a logical or mathematical one.
DarMM said:
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.
I'm not saying ##\mathcal{H}## shouldn't be involved, I am saying in terms of physics it isn't the most important mathematical quantity we should be thinking about.
DarMM said:
I understand if you simply see this sort of axiomatic investigation as not the optimal strategy or unlikely to help with progress. However, at times you seem to be suggesting their conclusions are also incorrect, or even some of the definitions; this I don't really understand.
Yes, there is a blatant use of the theorems as selection criteria for empirical hypotheses, i.e. as a statistical selection tool for novel hypotheses. The use of axiomatics in this manner has no scientific basis and is unheard of in the practice of physics, or worse, known to be an abuse of rationality in empirical matters.

The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid in an enormous range independently of specific theories; the axioms of QM (and QM itself, despite all that it has done) have simply not met this criterion yet.
 
  • #90
Auto-Didact said:
The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid

An evolutionary model needs to allow for both variation and stability, in balance. If there is too much flexibility we lose stability and convergence in evolution. A natural way to do this is that hypothesis generation naturally rates the possibilities worth testing. In this perspective one can imagine that constraining hypothesis space is rational. Rationality here, however, does not imply that it is the right choice. After all, even in nature evolved, successful species sometimes simply die out, and it does not mean that they were irrational. They placed their bets optimally and they died; that's how the game goes.

What I am trying to say here is that the situation is paradoxical. This is both a key and a curse. The problem arises when human scientists only see it from one side.

And IMO a possible resolution to the paradoxical situation is to see that the rationality of the constraints on hypothesis space is observer dependent. If you absorb this, there is a possible exploit to make here. For a human scientist to constrain his own thinking is one thing, and for an electron to constrain its own map of its environment is another. In the former case it has to do with being aware of our own logic and its limitations, and in the latter case it is an opportunity for humans to, for example, understand the action of subatomic systems.

/Fredrik
 
