# Quantization isn't fundamental

1. Nov 9, 2018

### Auto-Didact

Optimistically, multifractal analysis might already be a sufficient tool, but that is just grasping in the dark.

If I remember correctly, though, Nottale has a theory called scale relativity (or something like that) which sounds somewhat similar to the "all structure is emergent" idea.

Moreover, I would naively presume that something simple but alternative, such as fractional calculus or multiplicative calculus, might be a useful form of calculus for naturally capturing or identifying the correct physical quantities or equations involved in such a framework. Otherwise, more advanced algebraic-geometric or holomorphic notions would probably be necessary.

2. Nov 9, 2018

### Fra

The question of what mathematics will be required is indeed interesting. It is a paramount question in my perspective as well, since one of the key ingredients in the quest for a physical inference theory, that is, a generalisation of probability, is to characterise a MEASURE that is intrinsically constructable by a physical observer.

An inference, in the sense of reasoning under uncertainty, needs a measure to quantify confidence in certain things, as it conceptually boils down to how to COUNT evidence in a rational way. One problem in most axiomatic constructions of probability theory is that uncountable numbers are introduced without justification. Does an arbitrary observer have access to infinite bit counters? The usual justification is limits, but if you consider physical processes to be like computations, these limits are never actually reached, and pathologies arise in the theories when you assume that limits are manifested in observer states. What happens is that you lose track of the limit procedures. I think careful compliance with intrinsic measures will make convergence manifest. Divergences in theories are a symptom of abusing mathematics, of mixing up "mathematical possibilities" with the actual possibilities on which the inference places bets. Even though you "can" fix them, they shouldn't have to arise in the first place.

So what I am saying is that I think smooth mathematics might approximate reality, not the other way around. Reconstructing quantum theory, in my opinion, unavoidably goes hand in hand with reconstructing the measure mathematics for counting and "summing", i.e. what ends up as calculus in the continuum limit. But things are more complicated here, because the actual LIMIT may not be physical at all! My hunch is that it definitely is not.

/Fredrik

3. Nov 11, 2018

### DarMM

Well, it eliminates theories that can't work. Since many people were inclined to suggest and build models of the kind the theorem covers, I think it's useful for knowing what's not going on in QM. Since the eliminated models seem to be the most conventional ones, and the class eliminated is quite broad, I think it's useful; it just shows how weird the explanation will be. Even the fact that it has made anti-realist explanations more plausible is interesting enough on its own.

Doesn't this just apply to any kind of mathematical research, though? I still don't see why something "supporting integration" is epistemological; it depends on how you view it in your theory. You might consider it an underlying space of ontic states that happens to support integration, but that doesn't make it an epistemic object; otherwise the manifold in GR would be an epistemic object.

The ontological models framework does challenge accepted concepts, because it tells you what won't work. So it eliminates more naive ideas people had. I don't think it is the goal of the framework to provide definitive statements about what is actually going on in QM, just to help eliminate various lines of reasoning.

I genuinely still don't understand what's actually wrong with using axiomatic reasoning to eliminate various mathematical models. I also still don't really understand what is wrong with defining $\psi$-ontology to be having a state space like $\Lambda = \mathcal{H} \times \mathcal{A}$, $\mathcal{H}$ being part of the state space seems to be necessary for $\psi$-ontology as $\mathcal{H}$ is simply the space of all $\psi$s. Can you explain what $\psi$ being ontic without $\mathcal{H}$ being involved means? I think this would really help me.

I understand if you simply see this sort of axiomatic investigation as suboptimal or unlikely to help with progress. However, at times you seem to be suggesting that the conclusions, or even some of the definitions, are incorrect; this I don't really understand.

4. Nov 11, 2018

### Paul Colby

So, setting the important Bell theorem criticism aside, I can't help but view this paper as furious hand waving followed by an isolated guess. While the ideas seem intriguing (well worth a paper), what is the predictive power of the idea? Self-organized systems (SOSs) need some level of underlying "system" to organize. Without QM, what is that system even in principle?

5. Nov 13, 2018

### Auto-Didact

At this stage, not immediately having PDEs or other equations isn't an issue whatsoever: one of the most successful scientific theories ever, evolution through natural selection, was not formulated using any mathematics at all, yet its predictions were very clear once the concepts were grasped. But I digress.

To answer your question: that system is still particles, just not particles resulting from QM but from some deeper viewpoint; this is nothing new, since particulate viewpoints of matter far precede QM (by millennia).

Manasson's deeper perspective predicts, among many other things, as actual physical phenomena:
1) a dynamical mechanism underlying renormalization capable of explaining all possible bare and dressed values of particles
2) the quantized nature of objects in QM as a direct result of the underlying dynamics of particles themselves, instead of the quantized nature being a theoretically unexplained postulate.

Essentially, according to Manasson, there is a shift of particle physics foundations from QT to dynamical systems theory, with the mathematics and qualitative nature of QT resulting directly from the properties of a very special kind of dynamical system.

6. Nov 13, 2018

### Paul Colby

This isn't an answer IMO, nor does it resemble the state space arguments of section II of the paper in a way that I understand. The argument first postulates a "state space", $\psi_k$, which is essentially undefined, and a transformation, $F(\psi_k)$, which is assumed to have a frequency-doubling property that people familiar with SOSs will likely understand. The author then makes a general argument about the behavior of such systems. All this is fine.

However, clearly one cannot simply utter the word particles and get anywhere. One must also include some form of nonlinear interaction to even have frequency doubling. To extract any information, one must add the existence of electric charge and presumably a whole zoo of other quantum numbers like charm and so on. In the current scheme would these simply be disjoint facts, one parameter per prediction?

Last edited: Nov 13, 2018
7. Nov 13, 2018

### Auto-Didact

Actually, section II starts off by considering the evolution of an electron, i.e. a particle, into stable existence, in the following manner:
1) A negatively charged fluctuation of the vacuum occurs due to some perturbation.
2) The presence of the fluctuation causes a polarization of the vacuum.
3) This leads to positive and negative feedback loops in the interaction between vacuum and fluctuation, which together form the system.
4) Depending on the energy of the original perturbation, there are only two possible outcomes for this system: settling into thermodynamic equilibrium or bifurcation into a dynamically stable state.
5) Hypothesis: the electron is such a dynamically stable state.

In the above description there is only one relevant characteristic parameter for this system, namely charge ($q$). This can be reasoned as follows:

6) The described dynamics occur in a manifestly open system.
7) The stable states of this system are fractals, i.e. strange attractors, in the state space.
8) Therefore the full dynamics of the system is described by a nonlinear vector field $\vec \psi$ in an infinite dimensional state space.
9) Practically, this can be reduced to a low dimensional state space using a statistical mechanics or a hydrodynamics treatment.
10) This results in the state space of the system being described by just a few extensive variables, most importantly $q$.

A simple dimensional analysis argument gives us a relationship between the action ($J$) and $q$, i.e. $J=\sqrt{\frac{\mu_0}{\epsilon_0}}\,q^2$. Carrying on:

11) Then take a Poincaré section through the attractor in the state space to generate the Poincaré map.
12) Parametrize this map using $q$ or $J$ and we have ourselves the needed recurrence map $\psi_{J} = F(\psi_{J-1})$.
13) Given that the dynamics of this system is described by a strange attractor in state space this automatically ensures that the above map is a Feigenbaum map, displaying period doubling.
14) A period doubling is a phase transition of the attractor leading to a double loop attractor (a la Rössler).
15) The topology of this double loop attractor is the Möbius strip, with vectors inside this strip being spinors, i.e. this is also a first principles derivation of spinor theory.
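As a numerical aside, the dimensional-analysis relation above can be sanity-checked: $\sqrt{\mu_0/\epsilon_0}$ is the impedance of free space $Z_0 \approx 376.7\ \Omega$, and for $q = e$ the identity $Z_0 e^2 = 4\pi\alpha\hbar$ holds, so the resulting action scale is of order $\alpha\hbar$. A minimal check (constants are CODATA values; reading this as the physically relevant action scale is Manasson's interpretation, not established physics):

```python
import math

# CODATA values (SI); mu0 uses the pre-2019 exact definition.
mu0 = 4 * math.pi * 1e-7       # vacuum permeability, H/m
eps0 = 8.8541878128e-12        # vacuum permittivity, F/m
e = 1.602176634e-19            # elementary charge, C
hbar = 1.054571817e-34         # reduced Planck constant, J*s
alpha = 7.2973525693e-3        # fine-structure constant

Z0 = math.sqrt(mu0 / eps0)     # impedance of free space, ~376.73 ohm
J = Z0 * e**2                  # action scale from the dimensional argument

# Identity check: Z0 * e^2 = 4*pi*alpha*hbar, both ~9.67e-36 J*s,
# i.e. the scale is of order alpha * hbar rather than hbar itself.
print(J, 4 * math.pi * alpha * hbar)
```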

A purely qualitative treatment of attractor characterization leading directly to conclusions is standard practice in dynamical systems research, and none of the steps taken above seems particularly controversial, either mathematically or physically.
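The period doubling invoked in step 13 can be illustrated with the logistic map, the canonical Feigenbaum map; this is a generic sketch of the phenomenon, not Manasson's actual recurrence map, which is never given in closed form:

```python
# Period doubling in the logistic map x -> r*x*(1-x), the canonical
# Feigenbaum map: as the control parameter r grows, the attractor's
# period doubles 1 -> 2 -> 4 -> 8 -> ..., accumulating at r ~ 3.5699,
# beyond which chaos (a strange attractor) sets in.

def attractor_period(r, transient=5000, max_period=64, tol=1e-6):
    """Period of the logistic map's attractor at parameter r
    (None if no period <= max_period is found, e.g. in the chaotic regime)."""
    x = 0.5
    for _ in range(transient):      # let the orbit settle onto the attractor
        x = r * x * (1 - x)
    x0 = x
    for p in range(1, max_period + 1):
        x = r * x * (1 - x)
        if abs(x - x0) < tol:
            return p
    return None

for r in (2.8, 3.2, 3.5, 3.55):
    print(r, attractor_period(r))   # periods 1, 2, 4, 8
```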

8. Nov 14, 2018

### Paul Colby

So, it should be fairly straight forward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?

9. Nov 14, 2018

### Auto-Didact

It can't work given certain assumptions, including the full validity of the axioms of QM beyond what has been experimentally demonstrated; if QM is shown to be a limiting theory, many uses of these theorems to test hypotheses will be rendered invalid.
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics; 'it supports integration' is as empty as the statement 'numbers are used in physics'.

If you consider the manifold in GR to be just a measurable set, not necessarily pseudo-Riemannian or even differentiable, you actually lose all the physics of GR, including diffeomorphism invariance: that would turn the manifold into exactly an epistemological object! Both statistics and information geometry have such manifolds, which are purely epistemic objects. The point is that you would no longer be doing physics but would secretly have slipped into doing mathematics.
It eliminates lines of reasoning, yes; it may, however, falsely introduce lines of reasoning, as described above. Every QM foundations paper using or suggesting that no-go theorems can effectively be used as statistical tests to make conclusive statements about different physical hypotheses needs to correct for the non-ideal nature of the test, i.e. report its accuracy; this is an empirical matter, not a logical or mathematical one.
I'm not saying $\mathcal{H}$ shouldn't be involved, I am saying in terms of physics it isn't the most important mathematical quantity we should be thinking about.
Yes, there is a blatant use of the theorems as selection criteria for empirical hypotheses, i.e. as a statistical selection tool for novel hypotheses. Using axiomatics in this manner has no scientific basis and is unheard of in the practice of physics, or worse, is a known abuse of rationality in empirical matters.

The only valid use of such reasoning is the selection of hypotheses based on conformity to unquestionable physical laws, such as conservation laws, which have been demonstrated to be empirically valid across an enormous range, independent of specific theories; the axioms of QM (and QM itself, despite all that it has achieved) have simply not met this criterion yet.

10. Nov 14, 2018

### Fra

An evolutionary model needs to allow for both variation and stability, in balance. If there is too much flexibility, we lose stability and convergence in evolution. A natural way to achieve this is for hypothesis generation to rate the possibilities worth testing. In this perspective one can imagine that constraining hypothesis space is rational. Rationality here, however, does not imply that it is the right choice. After all, even in nature evolved, successful species sometimes simply die out, and that does not mean they were irrational. They placed their bets optimally and died, and that's how the game goes.

What I am trying to say here is that the situation is paradoxical. This is both a key and a curse. The problem is when human scientists see it from only one side.

And IMO a possible resolution to the paradoxical situation is to see that the rationality of the constraints on hypothesis space is observer dependent. If you absorb this, there is a possible exploit to make here. For a human scientist to constrain his own thinking is one thing; for an electron to constrain its own map of its environment is another. In the former case it has to do with being aware of our own logic and its limitations; in the latter case it is an opportunity for humans to, for example, understand the action of subatomic systems.

/Fredrik

11. Nov 15, 2018

### DarMM

Of course, as I have said, the theorems have assumptions, that's a given.

That depends on the particular theorem. Bell's theorem, for example, does not rely on the full validity of QM, and similarly for many others. This suggests to me that you haven't actually looked at the framework and are criticising it from a very abstract position based on your own personal philosophy of science and your impression of what it must be.

It's not a proposal that the real space of states only has the property of supporting integration and nothing else. Remember how it is being used here. It is saying "If your model involves a state space that at least supports integration..."

So it constrains models in which this (and four other assumptions) are true. It's not a proposal that nature involves only a set that supports integration and nothing else. The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers"; to be honest, that is just a knee-jerk sneer at an entire field. Do you think that if the framework were as useful as just saying "physics has numbers" it would be accepted into major journals?

I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather it is a presentation of general properties common to many models that attempt to move beyond QM and then demonstrating that from those properties alone one gets constraints.

i.e. many models that attempt to replicate QM do have a state space that supports integration, and that, together with four other properties, is all you need to prove some theorems about them. Again, all the actual models are richer and more physical than this, but some of their (to some, less pleasant) properties follow from very general features, like the integrability of the state space.

An analogue would be proving features of various metric theories of gravity. In such proofs you only state something like "the action possesses extrema", not because you're saying the action has that feature and nothing more, but because it's all you need to derive certain general features of such theories.

I have to say I don't understand your use of "epistemic". You seem to use it to mean "abstract", but I don't see how a manifold is epistemic. "Stripped of physical content", maybe, but I don't know of any major literature calling this epistemic.

Well then coming back to where this originated, what makes it invalid as a definition of $\psi$-ontic?

12. Nov 15, 2018

### Auto-Didact

Not necessarily, there are multiple routes:
1) Direct prediction of numerics based on experiment: this requires attractor reconstruction, and unfortunately that usually isn't simple. To discover the numerics, one would have to make very precise time-series measurements, in this case of the vacuum polarization process and of extremely high-field electrodynamics, and then utilize the Ruelle-Takens theorem to identify the attractor; the problem is that such precise experimentation seems to be viciously complicated.

2) Direct prediction of numerics by guessing the correct NPDE: characterizing the actual numerics of orbits in QM without precise measurements essentially requires knowing the complete equations. Knowing the correct class of equations, i.e. those giving qualitatively correct predictions of the general characteristics, is only a minuscule help in identifying the uniquely correct NPDE, obviously because there is no superposition principle to help here.

3) Indirect: utilize the constructed spinor theory to rederive the Dirac equation, then guess the correct non-linearization thereof, one which incorporates renormalization as a physical process characterized by terms inside the new equation rather than as an ad hoc procedure applied to an equation afterwards. This is far easier said than done; theorists have been attempting it since Dirac himself, without any success so far.
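For route 1, the attractor-reconstruction step is in practice done by delay-coordinate embedding (Takens' theorem). A generic sketch, using the Lorenz system as a stand-in for whatever dynamics the measured time series would come from (the delay `tau` and embedding dimension here are illustrative choices; in real work they would be selected with mutual-information and false-nearest-neighbour diagnostics):

```python
# Delay-coordinate (Takens) embedding: reconstruct an attractor from a
# single scalar observable. The Lorenz system is only a stand-in source.

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with 4th-order Runge-Kutta;
    return the x-coordinate time series."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    s = (1.0, 1.0, 1.0)
    out = []
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        out.append(s[0])
    return out

def delay_embed(series, dim=3, tau=10):
    """Map x(t) to vectors (x(t), x(t-tau), ..., x(t-(dim-1)*tau))."""
    start = (dim - 1) * tau
    return [tuple(series[i - j * tau] for j in range(dim))
            for i in range(start, len(series))]

x = lorenz_series(5000)[1000:]        # drop the initial transient
points = delay_embed(x, dim=3, tau=10)
# Plotting the first two coordinates of each embedded point traces out
# a two-lobed "butterfly", a diffeomorphic shadow of the true attractor.
```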

13. Nov 15, 2018

### Auto-Didact

It's more important than you realize, as it makes or breaks everything, even given the truth of the five other assumptions you refer to. If, for example, unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
I have looked at the theorems. I should make clear that I am not judging all no-go theorems equally; I am saying each has to be judged on a case-by-case basis (like in law). Bell's theorem, for example, would survive, because it doesn't make the same assumptions/'mistakes' some of the others do. I am also saying that just because Bell's theorem is valid, it doesn't mean the others are as well.
I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property of the state space $\Lambda$.
Yes, definitely. I have seen 'very good' papers across many fields of science, including physics, finance, economics, neuroscience, medicine, psychology and biology with equally bad or worse underlying conceptual reasoning; a mere mention of the limitations of the conclusions due to the assumptions is all a scientist needs to do to cover himself. There is no reason to suspect physicists are better than other scientists in this aspect.

Journals, including major journals, tend to accept papers based on clear scientific relevance, strong methodology and clear results, not on extremely carefully reasoned-out hypotheses; one can be as sloppy in coming up with hypotheses as one wants, as long as a) one can refer to literature showing that what one is doing is standard practice, and/or b) the hypothesis can be operationalized and that operationalization directly tested empirically.
That framework is a class of models, characterizing the properties of many models. The particular theorem(s) in question then argue against the entire class in one swoop.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
I quote the Oxford Dictionary:

Last edited: Nov 15, 2018
14. Nov 15, 2018

### Paul Colby

Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question, namely what the underlying system is to which the "method" is applied, is at present completely unknown. I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.

15. Nov 15, 2018

### Jimster41

I'm trying to follow this discussion - which is interesting.
I am confused about how lattice models of quantum gravity fit (or don't) here.

My naive cartoon is that such a structure supports non-linearity with manifold-like properties. I mean, isn't iteration all that is required for some fractal generation?
There is the a priori structure of a "causal lattice" of space-time geometry to explain, but as epistemological ontologies go that's pretty minimal. Most importantly, as I understand it anyway, there are real calculations that are getting close to building the SM from them. In fact @atyy posted one in this very forum, though I found it very hard to get much from it.
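On the "isn't iteration all that is required" question: yes, bare iteration of a simple map is enough to generate fractal structure. A standard illustration (generic, unrelated to any specific lattice QG model) is the escape-time iteration $z \mapsto z^2 + c$, whose set of bounded orbits has the Mandelbrot set's fractal boundary:

```python
# Iterating z -> z^2 + c and asking whether the orbit stays bounded is
# all it takes to produce a fractal (the Mandelbrot set's boundary).

def escape_time(c, max_iter=50):
    """Iterations until |z| > 2 under z -> z^2 + c; max_iter if bounded."""
    z = 0 + 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

# Coarse ASCII rendering: '#' marks points whose orbit never escaped.
for im in range(12, -13, -2):
    row = ""
    for re in range(-40, 21):
        t = escape_time(complex(re / 20.0, im / 10.0))
        row += "#" if t == 50 else " "
    print(row)
```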

Last edited: Nov 15, 2018
16. Nov 15, 2018

### DarMM

How is a differentiable manifold epistemic though?

17. Nov 15, 2018

### Auto-Didact

No, partially unknown. It is known that the correct equation:
- is a NPDE
- is reducible to the Dirac equation in the correct limit
- describes vacuum fluctuations
- has a strange attractor in its state space
- has a parameter displaying period doubling

An equation has to be constructed with the above things as given.
I will let Feynman tell you why immediately holding such an unrealistic expectation of a preliminary model like this one is extremely shortsighted.
In other words, what you are asking for is an important eventual goal post, one of several, which should be attempted. Arguing from a QG or QM foundations perspective it is important, but it is definitely not the most important thing for the preliminary model to achieve at this stage.

In the ideal circumstance, this would be achieved in the format of a large research programme investigating the model, preferably with Manasson as the head of the research group and with PhD students carrying out the research.

18. Nov 15, 2018

### Paul Colby

If 50 years of string theory has taught us anything it's something about chicken counting and hatching.

19. Nov 15, 2018

### Auto-Didact

Easy: if the manifold doesn't characterize an existing object, but merely characterizes knowledge. There are manifolds in information geometry which can be constructed using the Fisher information metric; these constructions are purely epistemic.

In fact, all objects in statistics based on probability theory are completely epistemic, because probabilities (and all related quantities such as distributions, averages, variances, etc) aren't themselves objects in the world but encodings of the relative occurrence of objects in the world.
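To make the information-geometry point concrete: for the one-parameter Bernoulli family the Fisher metric is $g(p) = 1/\bigl(p(1-p)\bigr)$, a "distance between states of knowledge" with no physical object underneath. A minimal sketch, estimating the metric as the Monte Carlo variance of the score:

```python
import random

# Fisher information of a Bernoulli(p) family: the metric tensor of this
# (purely epistemic) statistical manifold is g(p) = 1/(p(1-p)).
# We check it by estimating g(p) as the variance of the score.

def score(x, p):
    """d/dp log P(x|p), with P(1|p) = p and P(0|p) = 1 - p."""
    return 1.0 / p if x == 1 else -1.0 / (1.0 - p)

def fisher_mc(p, n=200_000, seed=0):
    """Monte Carlo estimate of the Fisher information at parameter p."""
    rng = random.Random(seed)
    samples = [score(1 if rng.random() < p else 0, p) for _ in range(n)]
    mean = sum(samples) / n
    return sum((s - mean) ** 2 for s in samples) / n

p = 0.3
print(fisher_mc(p), 1.0 / (p * (1.0 - p)))   # both ~4.76
```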

Physics, outside of QM, is different because it directly refers to actually existing - i.e. ontic - properties of objects in the world like mass and velocity. This is why physics is clearly an empirical science, while probability theory is part of mathematics.

20. Nov 15, 2018

### Auto-Didact

The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.