Quantization isn't fundamental

In summary, the paper "Are Particles Self-Organized Systems?" by V. Manasson discusses the idea that elementary particles can be described as self-organized dynamical systems, and that their properties - such as charge and action quantization, SU(2) symmetry, and the coupling constants of the strong, weak, and electromagnetic interactions - can be derived from first principles. The author also suggests that quantum theory may be a quasi-linear approximation to a deeper theory describing the nonlinear world of elementary particles. While the specific model presented in the paper may have flaws, the approach of reformulating the axioms of quantum theory by identifying its mathematical properties is thought-provoking and warrants further exploration.
  • #72
Auto-Didact said:
This ##\Lambda## seems to be a very particular kind of fiber bundle or symplectic manifold. You are calling it a state space, but can you elaborate on what kind of mathematical space this is exactly?
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor being the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Things like GRW are covered within the general ontological models framework, unless you make other assumptions that exclude them (which some theorems do, but not all).

The ##\psi##-ontic model would have to break out of the framework to escape many results, by breaking some of the axioms - so-called "exotic" ontic models. However, even these (e.g. Many-Worlds) still have ##\Lambda = \mathcal{H} \times \mathcal{A}##. The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
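
For concreteness, here is a minimal toy sketch of the framework as I read it (my own illustration, not taken from any particular paper; all function names are made up): a preparation distribution ##\mu(\lambda|\psi)## over ##\Lambda## and a response function ##\xi(k|\lambda, M)##, with the ##\psi##-ontic case degenerating to ##\lambda## carrying ##\psi## itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy psi-ontic model of a qubit: the ontic state lambda IS the ket psi,
# i.e. the trivial case Lambda = H x A with A a one-point set.
def prepare(psi):
    """Sample the preparation distribution mu(lambda|psi):
    here a delta function concentrated on psi itself."""
    return psi

def respond(lam, basis):
    """Response function xi(k|lambda, M): outcome probabilities are
    fixed by the ontic state, here via the Born rule."""
    probs = np.abs(basis.conj() @ lam) ** 2
    return rng.choice(len(probs), p=probs / probs.sum())

# Check that the model reproduces quantum statistics for
# |psi> = cos(t)|0> + sin(t)|1> measured in the computational basis.
t = 0.7
psi = np.array([np.cos(t), np.sin(t)], dtype=complex)
Z_basis = np.eye(2, dtype=complex)  # rows are <0| and <1|

outcomes = [respond(prepare(psi), Z_basis) for _ in range(100_000)]
print("empirical P(0):", outcomes.count(0) / len(outcomes))
print("Born      P(0):", np.cos(t) ** 2)  # ~0.585
```

The no-go theorems then constrain what any such ##(\mu, \xi)## pair can look like once ##\mu## is allowed to be broader than a delta function.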
 
  • #73
As we keep thinking differently, and this is a recurring theme in various disguises, I think it is worth noting that IMO there is a difference between uncertainty in the general case and what we call ignorance.

Both can be treated within a probability framework, but their origins and logical properties, once we start to talk about conjunctions etc., are very different.

Uncertainty originating from non-commutative information.
- This is the typical HUP uncertainty relation between conjugate variables. This uncertainty is not to be interpreted as "ignorance"; it is rather a structural constraint, and there is no "cure" for it by adding "missing information".
- One can, OTOH, ask WHY nature seems to "complicate matters" by encoding conjunctions of non-commutative information. My own explanatory model is that it is simply evolutionarily selected compressed sensing, i.e. this "more complicated logic" is more efficient. [This remains to be proven in context though.]

In a way, I reject the "ontic space model" on generic grounds, simply because for the above reason it is doomed to fail. I would even say it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems - give or take nonlocality, superdeterminism etc. I find it pathological to start with.

Uncertainty originating from incomplete information.
Even though this is closer to "classical statistical" uncertainty, there is another twist here that makes things very interesting. It is one thing to think about "ignorance" in the sense of "could have known" but don't, because I was not informed, or I "lost the information", etc.

But there can also be (this is a conjecture of mine) physical constraints in the observer's structure that fundamentally limit the AMOUNT of information it can encode. This is actually another "non-classical" uncertainty, in the sense that in models where the action DEPENDS on summing over probabilities, this actually changes the game! The "path integral", or whatever version we use, acquires a self-imposed regularization associated with the observer's, say, mass or information capacity (the details here are an open question). This latter "uncertainty" is also the reason for the significance of compressed sensing.

So I would say there are at least THREE types of uncertainty here, and ALL three are IMO at play in a general model.

This kind of model is what I am personally working on, and it is obviously fundamental, as it not only reconstructs spacetime, it reconstructs the mathematical inference logic for physics. It aims to explain the emergence of quantum logic, and also to understand how it incorporates gravity. But it does NOT aim to do so in terms of a simple ontic model that uses only one of the THREE types of uncertainty. This is why I keep calling it general inference: it is a generalisation that goes beyond both Kolmogorov probability and quantum logic.

/Fredrik
 
  • #74
DarMM said:
A measurable set; to be honest it needn't even be a space (i.e. have a topology). In the ontic case it has the additional constraint that it can be decomposed as a Cartesian product with one factor being the Hilbert space. It doesn't have to be a fiber bundle or a symplectic manifold.
Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.

Being, as you say, essentially free from a natural conceptual formulation as some mathematical space would make this a very atypical foundational object outside of pure mathematics proper - i.e. an artificially constructed (a posteriori) mathematicized object based entirely in axiomatics. This would mean the mathematical identification or construction of the object was purely a matter of being defined into existence by axiomatic reasoning rather than being naturally discovered - and therefore almost certainly lies outside physics proper.

Such 'artificial mathematical objects' are rife outside the exact sciences, e.g. defined operationalizations of phenomena which bear only a strained resemblance to the real-world phenomena they are meant to reflect. Usually such objects are based on an extrapolation of some (statistical) data, i.e. a (premature) operationalization of a concept into a technical mathematical definition.

It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
DarMM said:
The only way you would escape this is if there were additional variables beyond the wavefunction that had a non-measurable state space.
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
 
  • #75
Auto-Didact said:
Doesn't sound conceptually right as a foundation at all. This is because all state spaces in the canon of physics - definitely all I have ever seen in physics and applied mathematics w.r.t. other sciences - are always most naturally formulated as symplectic manifolds, fibre bundles or some other analogous highly structured mathematical object.
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.

Auto-Didact said:
It seems to me that the exact same thing is going on here, since it is as you say a measurable set i.e. an axiomatic probability-esque object. Practically all mathematical objects with such properties are (or tend to be) purely epistemological in nature, directly implying that ##\psi## is actually being treated epistemologically instead of ontologically, the epistemic nature carefully hidden behind axiomatic gymnastics.
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.

Auto-Didact said:
I see absolutely no reason to exclude such spaces a priori, especially given Manasson's model's mathematical basis in non-linear dynamical systems. One only need recall that practically all objects from non-linear dynamical systems were just a few decades ago universally regarded by mathematicians as nothing more than 'pathological' mathematical notions which were meant to be excluded by hand.
There's no a priori reason to exclude them and I think this is where the point is being missed.

I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
DarMM said:
It would be like somebody setting out to see what constraints apply to discrete models of some theory and then objecting to their use of ##\mathbb{Z}^{d}##

And to come back to a point you made earlier:
Auto-Didact said:
In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM on the face of it goes against accepted wisdom in contemporary physics, but this in no way invalidates it; his argument is logically valid), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) to fine-tuning, based on arguments of the form in c).
(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.

Auto-Didact said:
This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics.
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
 
  • #76
Fra said:
In a way, I reject the "ontic space model" on generic grounds, simply because for the above reason it is doomed to fail. I would even say it is irrational to try to model general uncertainty by means of ignorance in this sense. This can be expected from the logic, I think, even without bothering with theorems - give or take nonlocality, superdeterminism etc. I find it pathological to start with.
Well the ontological models framework show certain such models have pathologies in a rigorous way, one can easily say something is "expected" according to your personal intuition, quite another to actually prove it.
 
  • #77
ftr said:
All indications are that nonlocality is the first reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
I don't understand the link with virtual particles.
 
  • #78
DarMM said:
It includes those as special cases, so anything proven for the general ##\Lambda## holds for them.
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - far too much for one to make definitive axiomatic statements (theorems) without deluding oneself.
DarMM said:
I don't follow this to be honest. It only being required to be a measurable set is simply to make it very general. It also includes symplectic manifolds, fiber bundles, etc as special cases. I don't really see how that makes it secretly epistemic, otherwise anything supporting integration is secretly epistemic.
The problem with the reification of some mathematics as an actual physical concept - especially w.r.t. an overt generalization such as the measurable set, a mathematical construction which, just like ZFC and other standard foundations of mathematics, was constructed purely a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and lost during this abstraction process; I am arguing that the ontology vanishes as well, leaving one with nothing but epistemology.
DarMM said:
There's no a priori reason to exclude them and I think this is where the point is being missed.
I understand that and I am glad that you realize that, however I'm not so sure other physicists who read and cite foundations literature do realize this as well. In my experience they tend to take statements - especially theorems - at face value as either non-empirical evidence or definitive mathematical proof; this goes for physicists at all levels, from (under)grad students up to professors.
DarMM said:
I think both you and @Fra are taking the ontological foundations framework as some kind of claim of a "final argument" or something. For example thinking they are "rejecting the possibility of non-measurable spaces". Nobody is doing that, it's simply a very general framework so one can analyse a broad class of theories, nobody is arguing that the truth "must" lie within the framework. It's just that we have no-go results for anything that does, which is useful to have. It has allowed us to see what list of things you have to reject in order to get a decent theory that explains quantum phenomena and what set of theories are either hopeless or will end up with unnatural fine-tunings.

As I said to @Fra :
The goal of foundations is to provide exactly such definitive statements; the problem is that axiomatic statements such as the no-go theorems, and in fact axiomatic reasoning itself, have historically never belonged to the toolbox of foundations of physics, but instead to the toolbox of mathematical physics. It is paramount to understand that axiomatics, being essentially a form of deductive logic, cannot go beyond what is defined. As Poincaré said:
Poincaré said:
We have confined ourselves to bringing together one or other of two purely conventional definitions, and we have verified their identity; nothing new has been learned. Verification differs from proof precisely because it is analytical, and because it leads to nothing. It leads to nothing because the conclusion is nothing but the premisses translated into another language. A real proof, on the other hand, is fruitful, because the conclusion is in a sense more general than the premisses.
Historically, the goal of foundations of physics has always been to challenge accepted concepts which are deemed fundamental, by looking for mathematical reformulations which enable a natural synthesis (NB: not natural in the sense of naturalness but in the classical sense, i.e. 'spontaneous' or the opposite of artificial) between conflicting concepts often directly paired with novel experimental predictions.

Once some theory becomes too entrenched or embedded, dogmatically taken as necessarily (absolutely) true above other theories, things start to go awry. As Poincaré brilliantly pointed out a century ago - and echoed by Feynman decades later - axiomatic reasoning, being purely deductive, cannot offer a resolution to foundational issues in physics, because physical theory is incomplete: only hypotheses checkable by experiment can go beyond what is already known.

Having no-go results of uncertain validity is therefore actually of very questionable utility in the field of foundations, especially given the danger of premature closure and therefore the promotion of cognitive biases among theoreticians. The fact of the matter is that foundations is a small branch in the practice of physics; everyone benefits from preventing it from becoming little more than an echo chamber, which sadly is definitely a possibility, as we have seen in the practice of physics over the last century.
DarMM said:
And to come back to a point you made earlier:

(a) I don't think so, I know what the terms mean, they're fairly simple
(b) Where am I doing this?
(c) I am explicitly not doing this, as I know the ontological framework axioms and their exact limits, so I know when they don't hold.
(d) This is backwards, some of the theorems show they have fine-tunings, nothing says they are the result of fine-tunings.
Apart from a), which I have elaborated upon further, including in this very post, I agree you aren't doing b), c) and d). The problem is that those less familiar with foundations of physics will almost certainly do b), c) and d) - especially if (self-proclaimed) experts openly do a), as they in fact regularly seem to have done since foundations adopted axiomatics, starting with von Neumann.
DarMM said:
Again where have I done this? I know the ontological framework axioms, so I know where they apply and don't.

Can either of you two state what the ontological framework axioms are in your understanding, considering you have been arguing against it?
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.

W.r.t. QM foundations, I believe that the immediate focus should be theoretical i.e. a qualitative reformulation until conflicting concepts become consistent, leading to a resolution in which this new conceptual formulation can be restated using existing (but possibly non-standard) concrete mathematics, leading to experimental predictions; it is only after experimental verification that the mathematical physicists should try to find rigorous mathematical definitions.

Incidentally, Lucien Hardy essentially argues for this strategy for solving the problems in QM foundations as well, as seen in this thread, see my post there as well.
 
  • #79
DarMM said:
Well the ontological models framework show certain such models have pathologies in a rigorous way, one can easily say something is "expected" according to your personal intuition, quite another to actually prove it.
Agreed. But I guess my point was that even proofs do not seem to prevent people from looking for loopholes, and from this perspective I argue that there is an easier way to convince yourself against using the type of uncertainty implicit in "ignorance", as I defined it above, as a universal explanation.

/Fredrik
 
  • #80
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with discontinuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
 
  • #81
Jimster41 said:
I always thought self-similarity as seen in fractal sets was a way to accommodate Bell's non-local correlations - in an "all structure is emergent structure" approach.
Any two measurements must differ at minimum by some index on an operation (maybe an evolutionary one) that walked them far enough apart to make them stereoscopically observable. Maybe Bell and the rest of the SM (i.e. all conservation laws including super-selection) are just pointing out degrees of similarity in emergent (dissipative) structure.

The problem with fractal sets is that they are not well behaved spaces as manifolds go, to put it mildly.
It's hard to imagine any notion of calculus and algebra on sets that can have all manner of wild mechanisms of smooth-roughness, continuity (with discontinuity included) and connections that are truly non-local.

Maybe someday AI will be able to help with this, discovering by brute force what derivative and integral operators do in a big zoo of multi-fractal generated spaces.
Optimistically, multifractal analysis might already be a sufficient tool, but that is just grasping in the dark.

If I remember correctly though, Nottale has a theory called scale relativity (or something like that) which sounds somewhat similar to the "all structure is emergent" idea.

Moreover, I would naively presume that something simple but alternative, like fractional calculus or multiplicative calculus, might be an enlightening alternative form of calculus w.r.t. naturally capturing or identifying the correct physical quantities or equations involved in such a framework. Otherwise, more advanced algebraic-geometric or holomorphic notions would probably be necessary.
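
As a minimal illustration of the kind of tool such an analysis starts from (my own toy example, nothing specific to Nottale or to multifractal formalisms): a crude box-counting estimate of the fractal dimension of the Sierpinski triangle, whose exact value is ##\log 3/\log 2 \approx 1.585##.

```python
import numpy as np

rng = np.random.default_rng(2)

# Chaos game: generate points on the Sierpinski triangle
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.zeros(2)
pts = []
for _ in range(200_000):
    p = (p + verts[rng.integers(3)]) / 2  # jump halfway to a random vertex
    pts.append(p.copy())
pts = np.array(pts[100:])  # drop the burn-in

# Box counting: N(eps) ~ eps^(-D), so D is the slope of log N vs log(1/eps)
eps_list = [2.0 ** -k for k in range(2, 8)]
counts = [len(np.unique(np.floor(pts / e), axis=0)) for e in eps_list]
D = np.polyfit(np.log(1.0 / np.array(eps_list)), np.log(counts), 1)[0]
print(f"box-counting dimension ~ {D:.3f} (exact: {np.log(3)/np.log(2):.3f})")
```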
 
  • #82
The question of what mathematics will be required is indeed an interesting one. It is a paramount question in my perspective as well, as one of the key ingredients in the quest for a physical inference theory that generalises probability is to characterise a MEASURE that is intrinsically constructible by a physical observer.

Inference, as in reasoning under uncertainty, needs a measure to quantify confidence in certain things, as it conceptually boils down to how to COUNT evidence in a rational way. One problem in most axiomatic constructions of probability theory is that one introduces uncountable numbers without justification. Does an arbitrary observer have access to infinite bit counters? The real justification is limits, but if you consider physical processes to be like computations, these limits are never actually reached, and pathologies arise in theories when you assume that limits are manifested in observer states. What happens is that you lose track of the limit procedures. I think careful compliance with intrinsic measures will make convergences manifest. Divergences in theories are a symptom of abuse of mathematics: mixing up "mathematical possibilities" with the actual possibilities in the inference and the placing of bets. Even though you "can" fix it, it shouldn't have to arise in the first place.

So what I am saying is that I think smooth mathematics might approximate reality, not the other way around. Reconstructing quantum theory IMO unavoidably goes hand in hand with reconstructing the measure mathematics for counting and "summing", i.e. what ends up as calculus in the continuum limit - but things are more complicated here, because the actual LIMIT may not be physical at all! My hunch is that it is definitely not.

/Fredrik
 
  • #83
Auto-Didact said:
I understand that that is the intention, but it is actually a quite subtle point I am trying to make. I understand that the goal is to make scientific progress; my point is that axiomatics isn't a proper basis for reasoning in (the foundations of) physics: research in foundations needs to lead to experimentally falsifiable statements, not axiomatic theorems. There is simply too much uncertainty regarding the actual range of validity of the known laws of physics - far too much for one to make definitive axiomatic statements (theorems) without deluding oneself.
Well, it eliminates theories that can't work; since many people have thought to suggest and build models of the kind the theorems cover, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones and the class eliminated is quite broad, I think it's useful: it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.

Auto-Didact said:
The problem with the reification of some mathematics as an actual physical concept - especially w.r.t. an overt generalization such as the measurable set, a mathematical construction which, just like ZFC and other standard foundations of mathematics, was constructed purely a posteriori in order to do axiomatics - is that the underlying physical concept originally under research can become divorced from its conceptual properties and lost during this abstraction process; I am arguing that the ontology vanishes as well, leaving one with nothing but epistemology.
Doesn't this just apply to any kind of mathematical research though? I still don't see why something "supporting integration" is epistemological; it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states that simply happens to support integration, but that doesn't make it an epistemic object - otherwise the manifold in GR is an epistemic object.

Auto-Didact said:
The goal of foundations is to provide exactly such definitive statements...Historically, the goal of foundations of physics has always been to challenge accepted concepts
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.

Auto-Didact said:
I have stated it already in post #64, and moreover argued at length why I believe the definition as given is a mischaracterization i.e. an incorrect operationalization of the concept that ##\psi## is ontological into a technical definition. Even stronger, I believe the strategy for some rigorous definition based on axioms at this stage is itself a misguided quest; we need preliminary mathematical models, not highly rigorous axiomatics.
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.

I understand if you simply see this sort of axiomatic investigation as not the optimal strategy, or as unlikely to help with progress. However, at times you seem to be suggesting that their conclusions, or even some of the definitions, are also incorrect; this I don't really understand.
 
  • #84
Auto-Didact said:
The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems.

So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
 
  • #85
Paul Colby said:
So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper) what is the predictive power of the idea? SOSs need some level of underlying "system" to organize. Without QM what is that system even in principle?
At this stage, not immediately having PDEs or other equations isn't an issue whatsoever: one of the most successful scientific theories ever, evolution through natural selection, was not formulated using any mathematics at all, yet its predictions were very clear once conceptually grasped - but I digress.

To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

Manasson's deeper perspective predicts, among many other things, the following as actual physical phenomena:
1) a dynamical mechanism underlying renormalization capable of explaining all possible bare and dressed values of particles
2) the quantized nature of objects in QM as a direct result of the underlying dynamics of particles themselves, instead of the quantized nature being a theoretically unexplained postulate.

Essentially, according to Manasson, there is a shift of particle physics foundations from QT to dynamical systems theory, with the mathematics and qualitative nature of QT resulting directly from the properties of a very special kind of dynamical system.
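
To make the dynamical-systems language concrete, here is a minimal sketch (standard logistic-map material, my own illustration; Manasson's actual equations are of course not these) of the period-doubling cascade and the Feigenbaum constant ##\delta \approx 4.669## on which universality arguments of this kind lean.

```python
import numpy as np

def f_iter(r, x, n):
    """Apply the logistic map x -> r x (1 - x) n times."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def superstable_r(n, r_lo, r_hi):
    """Bisect for the parameter R_n at which the critical point x = 1/2
    lies on a 2^n-cycle, i.e. f^(2^n)(1/2) = 1/2 (a superstable cycle)."""
    g = lambda r: f_iter(r, 0.5, 2 ** n) - 0.5
    for _ in range(200):
        r_mid = 0.5 * (r_lo + r_hi)
        if g(r_lo) * g(r_mid) <= 0:
            r_hi = r_mid
        else:
            r_lo = r_mid
    return 0.5 * (r_lo + r_hi)

# Brackets chosen by hand around the known superstable parameters
brackets = [(1.9, 2.1), (3.1, 3.3), (3.45, 3.55), (3.55, 3.56), (3.566, 3.569)]
Rs = [superstable_r(n, lo, hi) for n, (lo, hi) in enumerate(brackets)]
deltas = [(Rs[i] - Rs[i - 1]) / (Rs[i + 1] - Rs[i]) for i in range(1, len(Rs) - 1)]
print("superstable parameters:", Rs)
print("delta estimates:", deltas)  # approaches 4.6692...
```

Any unimodal map with a quadratic maximum falls in the same universality class, which is why qualitative arguments of this type can lean on the cascade structure rather than on the detailed equations.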
 
  • #86
Auto-Didact said:
To answer your question: that system is still particles, just not particles resulting from QM but instead from some deeper viewpoint; this is nothing new, since viewpoints of particulate matter far precede QM (by millennia).

This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined, and a transformation, ##F(\psi_k)##, which is assumed to have a frequency-doubling property that people familiar with SOSs will likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word particles and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme would these simply be disjoint facts, one parameter per prediction?
 
  • #87
Paul Colby said:
This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", ##\psi_k##, which is essentially undefined, and a transformation, ##F(\psi_k)##, which is assumed to have a frequency-doubling property that people familiar with SOSs will likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word particles and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme would these simply be disjoint facts, one parameter per prediction?
Actually, section II starts off by considering the evolution of an electron, i.e. a particle, into stable existence, in the following manner:
1) A negatively charged fluctuation of the vacuum occurs due to some perturbation.
2) The presence of the fluctuation causes a polarization of the vacuum.
3) This leads to positive and negative feedback loops in the interaction between vacuum and fluctuation, which together form the system.
4) Depending on the energy of the original perturbation, there are only two possible outcomes for this system: settling into thermodynamic equilibrium or bifurcation into a dynamically stable state.
5) Hypothesis: the electron is such a dynamically stable state.

In the above description there is only one characteristic relevant parameter for this system, namely charge (##q##). This can be reasoned as follows:

6) The described dynamics occur in a manifestly open system.
7) The stable states of this system are fractals, i.e. strange attractors, in the state space.
8) Therefore the full dynamics of the system is described by a nonlinear vector field ##\vec \psi## in an infinite dimensional state space.
9) Practically, this can be reduced to a low dimensional state space using a statistical mechanics or a hydrodynamics treatment.
10) This results in the state space of the system being described by just a few extensive variables, most importantly ##q##.

A simple dimensional analysis argument gives us a relationship between the action (##J##) and ##q##, i.e. ##J=\sqrt{\frac{\mu_0}{\epsilon_0}}\,q^2## (a quick numerical check appears after this list). Carrying on:

11) Then take a Poincaré section through the attractor in the state space to generate the Poincaré map.
12) Parametrize this map using ##q## or ##J## and we have ourselves the needed recurrence map ##\psi_{J} = F(\psi_{J-1})##.
13) Given that the dynamics of this system is described by a strange attractor in state space this automatically ensures that the above map is a Feigenbaum map, displaying period doubling.
14) A period doubling is a phase transition of the attractor leading to a double loop attractor (a la Rössler).
15) The topology of this double loop attractor is the Möbius strip, with vectors inside this strip being spinors, i.e. this is also a first principles derivation of spinor theory.

A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research and none of the above taken steps seem to be particularly - either mathematically or physically - controversial.
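
As promised above, a quick numerical sanity check of the dimensional-analysis step (my own arithmetic, not from the paper): with ##q = e##, the action scale ##J = \sqrt{\mu_0/\epsilon_0}\,e^2 = Z_0 e^2## equals ##2\alpha h##, i.e. roughly two orders of magnitude below Planck's constant.

```python
import math

# CODATA/SI values
e    = 1.602176634e-19   # elementary charge [C]
h    = 6.62607015e-34    # Planck constant [J s]
mu0  = 1.25663706212e-6  # vacuum permeability [H/m]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

Z0 = math.sqrt(mu0 / eps0)   # impedance of free space, ~376.73 ohm
J  = Z0 * e ** 2             # the action scale J = sqrt(mu0/eps0) q^2

alpha = e ** 2 * Z0 / (2 * h)  # fine-structure constant, ~1/137.036
print(f"Z0    = {Z0:.4f} ohm")
print(f"J     = {J:.4e} J s")
print(f"J / h = {J / h:.6f} (= 2 alpha = {2 * alpha:.6f})")
```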
 
  • #88
Auto-Didact said:
A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research and none of the above taken steps seem to be particularly - either mathematically or physically - controversial.

So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
 
  • #89
DarMM said:
Well, it eliminates theories that can't work; since many people have thought to suggest and build models of the kind the theorems cover, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones and the class eliminated is quite broad, I think it's useful: it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.
Can't work given certain assumptions, including the full validity of axioms of QM beyond what has been experimentally demonstrated; if QM is shown to be a limiting theory, many utilizations of the theorems to test hypotheses will be rendered invalid.
DarMM said:
Doesn't this just apply to any kind of mathematical research though? I still don't see why something "supporting integration" is epistemological; it depends on how you are viewing it in your theory. You might consider it an underlying state space of ontic states that simply happens to support integration, but that doesn't make it an epistemic object - otherwise the manifold in GR is an epistemic object.
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is equally empty as the statement 'numbers are used in physics'.

If you would consider that the manifold in GR is just a measurable set, not necessarily pseudo-Riemannian nor differentiable, you would actually lose all the physics of GR including diffeomorphism invariance: it would transform the manifold into exactly an epistemological object! Both statistics and information geometry have such manifolds which are purely epistemic objects. The point is that you would not be doing physics anymore but secretly slipped into doing mathematics.
DarMM said:
The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.
It eliminates lines of reasoning, yes; however, it may also falsely introduce lines of reasoning, as described above. Every QM foundations paper using or suggesting that no-go theorems can effectively be used as statistical tests to make conclusive statements about different physical hypotheses needs to correct for the non-ideal nature of the test, i.e. report the accuracy of this test; this is an empirical matter, not a logical or mathematical one.
DarMM said:
I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining ##\psi##-ontology to be having a state space like ##\Lambda = \mathcal{H} \times \mathcal{A}##, ##\mathcal{H}## being part of the state space seems to be necessary for ##\psi##-ontology as ##\mathcal{H}## is simply the space of all ##\psi##s. Can you explain what ##\psi## being ontic without ##\mathcal{H}## being involved means? I think this would really help me.
I'm not saying ##\mathcal{H}## shouldn't be involved, I am saying in terms of physics it isn't the most important mathematical quantity we should be thinking about.
DarMM said:
I understand if you simply see this sort of axiomatic investigation as not the optimal strategy or unlikely to help with progress. However at times you seem to suggesting their conclusions are also incorrect, or even some of the definitions, this I don't really understand.
Yes, there is a blatant use of the theorems as selection criteria for empirical hypotheses, i.e. as a statistical selection tool for novel hypotheses. The use of axiomatics in this manner has no scientific basis and is unheard of in the practice of physics, or worse, known to be an abuse of rationality in empirical matters.

The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid in an enormous range independent of specific theories; the axioms of QM (and QM itself, despite all that it has done) have simply not met this criterion yet.
 
  • #90
Auto-Didact said:
The only valid use of such reasoning is selection of hypotheses based on conforming to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid

An evolutionary model needs to allow for both variation and stability in balance. If there is too much flexibility we lose stability and convergence in evolution. A natural way to do this is for hypothesis generation to naturally rate the possibilities worth testing. In this perspective one can imagine that constraining hypothesis space is rational. Rationality here, however, does not imply that it is the right choice. After all, even in nature, evolved successful species sometimes simply die out, and that does not mean they were irrational. They placed their bets optimally and they died, and that's how the game goes.

What I am trying to say here is that the situation is paradoxical. This is both a key and a curse. The problem is when human scientists only see it from one side.

And IMO a possible resolution to the paradoxical situation is to see that the rationality of the constraints on hypothesis space is observer dependent. If you absorb this, there is a possible exploit to make here. For a human scientist to constrain his own thinking is one thing; for an electron to constrain its own map of its environment is another. In the former case it has to do with being aware of our own logic and its limitations, and in the latter case it is an opportunity for humans to, for example, understand the action of subatomic systems.

/Fredrik
 
  • #91
Auto-Didact said:
Can't work given certain assumptions
Of course, as I have said, the theorems have assumptions, that's a given.

Auto-Didact said:
including the full validity of axioms of QM beyond what has been experimentally demonstrated
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.

Auto-Didact said:
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is equally empty as the statement 'numbers are used in physics'.
It's not a proposal that the real space of states only has the property of supporting integration and nothing else. Remember how it is being used here. It is saying "If your model involves a state space that at least supports integration..."

So it constrains models where this (and four other assumptions) are true. It's not a proposal that nature involves only a set that involves integration and nothing else. The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field. Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?

I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather, it is a presentation of general properties common to many models that attempt to move beyond QM, together with a demonstration that from those properties alone one gets constraints.

I.e. many models that attempt to replicate QM do have a state space that supports integration, and that, together with four other properties, is all you need to prove some theorems about them. Again, all the actual models are richer and more physical than this, but some of their (to some, less pleasant) properties follow from very general features like the integrability of the state space.

An analogue would be proving features of various metric theories of gravity. In such proofs you only state something like "the action possesses extrema", not because you're saying the action has that feature and nothing more, but because it's all you need to derive certain general features of such theories.

Auto-Didact said:
it would transform the manifold into exactly an epistemological object
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.

Auto-Didact said:
I'm not saying ##\mathcal{H}## shouldn't be involved
Well then coming back to where this originated, what makes it invalid as a definition of ##\psi##-ontic?
 
  • #92
Paul Colby said:
So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
Not necessarily, there are multiple routes:
1) Direct prediction of numerics based on experiment: this requires attractor reconstruction, and unfortunately that usually isn't simple. To discover the numerics, one would have to make very precise time-series measurements - in this case of the vacuum polarization process and of extremely high-field electrodynamics - and then utilize the Ruelle-Takens theorem to identify the attractor; the problem here is that precise experimentation seems to be viciously complicated.

2) Direct prediction of numerics by guessing the correct NPDE: characterizing the actual numerics of orbits in QM without precise measurements essentially requires knowing the complete equations. Knowing the correct class of equations - giving qualitatively correct predictions of the general characteristics - is only a minuscule help w.r.t. identifying the uniquely correct NPDE. This is obviously because there is no superposition principle to help here.

3) Indirect: utilize the constructed spinor theory to rederive the Dirac equation and then guess the correct non-linearization thereof, which incorporates renormalization as a physical process characterized by terms inside the new equation instead of an ad hoc procedure applied to the equation. This is far easier said than done; theorists have been attempting it since Dirac himself, without any success so far.
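
For what route 1 would involve in practice, here is a minimal sketch of Takens-style delay-coordinate embedding (my own toy illustration; the Henon map stands in for a measured signal, since the actual vacuum-polarization time series is exactly what we lack):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Takens delay-coordinate embedding: map a scalar time series s(t)
    to vectors (s(t), s(t + tau), ..., s(t + (dim - 1) tau))."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

def henon_series(n, a=1.4, b=0.3):
    """One observed coordinate of the Henon map - we only get to 'measure'
    x, never the full (x, y) state, as with any real time series."""
    x, y, out = 0.1, 0.1, []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        out.append(x)
    return np.array(out)

s = henon_series(10_000)[100:]    # drop the transient
X = delay_embed(s, dim=2, tau=1)  # reconstructed 2-D "attractor"
print(X.shape)
# A correlation-dimension estimate or a plot of X would follow in a real
# reconstruction; for the Henon attractor one expects D2 ~ 1.2.
```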
 
  • #93
DarMM said:
Of course, as I have said, the theorems have assumptions, that's a given.
It's more important than you realize, as it makes or breaks everything, even given the truth of the five other assumptions you are referring to. If, for example, unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
DarMM said:
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.
I have looked at the theorems. I should make clear that I am not judging all no-go theorems equally; I am saying each of them has to be judged on a case-by-case basis (like in law). Bell's theorem, for example, would survive, because it doesn't make the same assumptions/'mistakes' some of the others do. I am also saying that just because Bell's theorem is valid, it doesn't mean the others are as well.
DarMM said:
The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field.
I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property of the state space ##\Lambda##.
DarMM said:
Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?
Yes, definitely. I have seen 'very good' papers across many fields of science, including physics, finance, economics, neuroscience, medicine, psychology and biology with equally bad or worse underlying conceptual reasoning; a mere mention of the limitations of the conclusions due to the assumptions is all a scientist needs to do to cover himself. There is no reason to suspect physicists are better than other scientists in this aspect.

Journals, including major journals, tend to accept papers based on clear scientific relevance, strong methodology and clear results, and not based on extremely carefully reasoned out hypotheses; one can be as sloppy in coming up with hypotheses as one wants as long as a) one can refer to the literature that what he is doing is standard practice, and/or b) the hypothesis can be operationalized and that operationalization directly tested empirically.
DarMM said:
I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather, it is a presentation of general properties common to many models that attempt to move beyond QM, together with a demonstration that from those properties alone one gets constraints.
That framework is a class of models, characterizing the properties of many models. The particular theorem(s) in question then argue against the entire class in one swoop.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
DarMM said:
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.
I quote the Oxford Dictionary:
Definition of 'epistemic' in English:
epistemic (adjective): Relating to knowledge or to the degree of its validation.

Origin: 1920s: from Greek epistēmē ‘knowledge’ (see epistemology) + -ic.
Definition of epistemology in English:
epistemology (noun, mass noun):
Philosophy
The theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.

Origin: Mid 19th century: from Greek epistēmē ‘knowledge’, from epistasthai ‘know, know how to do’.
 
  • #94
Auto-Didact said:
Not necessarily, there are multiple routes:

Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question - what underlying system the "method" is applied to - is at present completely unknown. I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
 
  • #95
I'm trying to follow this discussion - which is interesting.
I am confused about how lattice models of quantum gravity fit (or don't) here.

My naive cartoon is that such a structure supports non-linearity with manifold-like properties. I mean, isn't iteration all that is required for some fractal generation?
There is the a priori structure of a "causal lattice" of space-time geometry to explain, but as epistemological ontologies go that's pretty minimal. Most importantly, as I understand it anyway, there are real calculators that are getting close to building the SM from them. In fact @atyy posted one in this very forum. I found it very, very hard to get much from it though - really hard.

https://www.physicsforums.com/threads/lattice-standard-model-wang-wen.958852/
 
  • #96
Auto-Didact said:
I quote the Oxford Dictionary:
How is a differentiable manifold epistemic though?
 
  • #97
Paul Colby said:
Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question - what underlying system the "method" is applied to - is at present completely unknown.
No, partially unknown. It is known that the correct equation:
- is a NPDE
- is reducible to the Dirac equation in the correct limit
- describes vacuum fluctuations
- has a strange attractor in its state space
- has a parameter displaying period doubling

An equation has to be constructed with the above things as given.
Paul Colby said:
I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
I will let Feynman tell you why immediately holding such an unrealistic expectation of a preliminary model like this one is extremely shortsighted.
Feynman said:
For those people who insist that the only thing that is important is that the theory agrees with experiment, I would like to imagine a discussion between a Mayan astronomer and his student. The Mayans were able to calculate with great precision predictions, for example, for eclipses and for the position of the moon in the sky, the position of Venus, etc. It was all done by arithmetic. They counted a certain number and subtracted some numbers, and so on. There was no discussion of what the moon was. There was no discussion even of the idea that it went around. They just calculated the time when there would be an eclipse, or when the moon would rise at the full, and so on.

Suppose that a young man went to the astronomer and said, ‘I have an idea. Maybe those things are going around, and there are balls of something like rocks out there, and we could calculate how they move in a completely different way from just calculating what time they appear in the sky’. ‘Yes’, says the astronomer, ‘and how accurately can you predict eclipses ?’ He says, ‘I haven’t developed the thing very far yet’. Then says the astronomer, ‘Well, we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better’.

There is a very strong tendency, when someone comes up with an idea and says, ‘Let’s suppose that the world is this way’, for people to say to him, ‘What would you get for the answer to such and such a problem ?’ And he says, ‘I haven’t developed it far enough’. And they say, ‘Well, we have already developed it much further, and we can get the answers very accurately’. So it is a problem whether or not to worry about philosophies behind ideas.
In other words, what you are asking for is an important eventual goal post - one of several goal posts - which one should attempt to reach. Arguing from a QG or QM foundations perspective, it is important but definitely not the most important thing for the preliminary model to achieve at this stage.

In the ideal circumstance, this would be achieved in the format of a large research programme investigating the model, preferably with Manasson as the head of the research group and with PhD students carrying out the research.
 
  • #98
Auto-Didact said:
In other words, what you are asking for is an important eventual goal post - one of several goal posts - which one should attempt to reach.

If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
 
  • #99
DarMM said:
How is a differentiable manifold epistemic though?
Easy: if the manifold doesn't characterize an existing object, but merely characterizes knowledge. There are manifolds in information geometry which can be constructed using the Fisher information metric; these constructions are purely epistemic.

In fact, all objects in statistics based on probability theory are completely epistemic, because probabilities (and all related quantities such as distributions, averages, variances, etc) aren't themselves objects in the world but encodings of the relative occurrence of objects in the world.

Physics, outside of QM, is different because it directly refers to actually existing - i.e. ontic - properties of objects in the world like mass and velocity. This is why physics is clearly an empirical science, while probability theory is part of mathematics.
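
To make the information-geometry example concrete, here is a small sketch (my own, standard textbook material) of the Fisher metric on the two-parameter manifold of Gaussians ##(\mu, \sigma)##: every point of this manifold is a probability distribution, i.e. a state of knowledge, not an object in the world.

```python
import numpy as np

rng = np.random.default_rng(1)

def score(x, mu, sigma):
    """Gradient of log p(x|mu, sigma) for a Gaussian (the 'score' vector)."""
    d_mu = (x - mu) / sigma ** 2
    d_sigma = (x - mu) ** 2 / sigma ** 3 - 1.0 / sigma
    return np.stack([d_mu, d_sigma])

# Fisher metric g_ij = E[score_i * score_j], estimated at (mu, sigma)
mu, sigma = 0.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
s = score(x, mu, sigma)
g = s @ s.T / x.size
print(g)  # ~ diag(1/sigma^2, 2/sigma^2) = diag(0.25, 0.5)
```

The metric is built entirely out of how probability assignments change with the parameters; nothing in it refers to a physical carrier of those probabilities, which is the sense in which such manifolds are epistemic.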
 
  • #100
Paul Colby said:
If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.
 
  • #101
Auto-Didact said:
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.

Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell. String theory will go away if we run out of string theorists. I always had a soft spot for Chew's bootstrap program: everything from unitarity and analyticity. The only problem is, it's an incomplete idea. Supersymmetry doesn't work, not because it's not a great thought, but because nature doesn't work that way as far as I can tell. One reason to persist in my questions is to see if there is anything to work with here. I don't see it. No shame in that and no problem either. Carry on.
 
  • #102
Paul Colby said:
Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell.
You're more lenient than I am; perhaps 'export to the mathematics department' is the correct euphemism.
There are other sciences that actually do work more or less in the way that I describe. There are literally mountains of empirical data on things like this. Such strategies of course have pros and cons:

Pros:
- Discourages adherents to remain loyal to some framework/theory
- Makes everyone involved in the field at least somewhat familiar with all current frameworks
- Increases marginal innovation rate due to luck by constantly exposing all aspects of a framework to a huge diversity of specialized views and methodologies
- Increases the likelihood of discoveries contingent upon the smooth operation of this system, i.e. "teamwork"

Cons:
- Time consuming in comparison with the current system
- Slow-down of particular projects, speed-up of others
- Less freedom to work on what you want just because you want to work on that
- Teamwork can lead to increased human errors, through miscommunication, frustration, misunderstanding, etc especially if one or more parties do not want to work together

Despite the cons, I think it may be a good idea to try and implement the strategy in the practice of theoretical physics. I will illustrate this by way of an example:

I said earlier (in route 1) that precise time measurements of extremely high-field electrodynamics are necessary, while I - having never worked in that field - know next to nothing about making such measurements, nor about their state of the art; there are two choices: carry on this part of the research myself, or consult/defer this part of the research to another person.

If I "don't want to share the credit" I'll do it myself, with the danger that I'll continuously be adding more work for myself, certainly if I have to learn some new mathematics along the way. On the other hand, it is almost guaranteed that there are other theorists who already have some experience in that field and/or are in direct contact with those who do.

A strategy like the one I described would make such a possible meeting not accidental but a mandatory next step in the scientific process. This means theorists would think twice before writing papers making any big claims, because all such big claims would have to get chased down immediately. This would probably lead to a new performance index, namely not just a citation count but also a 'boy who cried wolf'-count.
 
  • #103
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.

Apologies for getting heated in the previous post, I was mischaracterising you.
 
  • #104
DarMM said:
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.
:)
DarMM said:
Apologies for getting heated in the previous post, I was mischaracterising you.
No damage done, to be fair I have probably done some mischaracterization along the way as well.
 