# Quantization isn't fundamental

Auto-Didact
@Paul Colby the dynamics of the underlying system, i.e. the vacuum, are described in a bit more detail in Manasson's 2017 paper linked above. I haven't read the 2018 paper yet.

There happens to be another version of QED called Stochastic Electrodynamics (SED) which is based on de Broglie-Bohm theory; SED incorporates the ground state of the EM vacuum as the pilot wave. SED is an explicitly non-local hidden variables theory, and particles immersed in this vacuum display highly nonlinear behavior.

The SED approach on the face of it sounds very similar to what Manasson has described in his 2017 paper linked above; this might actually represent a direct route to what you asked here:
So, it should be fairly straightforward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?

Gold Member
@Auto-Didact Well, honest opinion, what I see of the 2017 paper so far is disappointing. Reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." Can't help but feel the only things appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.

Auto-Didact
@Auto-Didact Well, honest opinion, what I see of the 2017 paper so far is disappointing. Reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." Can't help but feel the only things appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.
I haven't finished reading it, but I agree. His 2008 paper is of higher quality, in my opinion.

That said, the 2017 paper, just like the earlier one, seems to naturally construct several important concepts - both the Fermi-Dirac and the Bose-Einstein statistics, without even assuming the existence of identical particles - completely out of thin air. The whole treatment in section 3.1 reeks of an extension of the Kuramoto model playing a role here; if this is true, that alone would already make the entire thing worthwhile in terms of mathematics.

For now, I want to end on something that Feynman said about the art of doing theoretical physics:
Feynman said:
One of the most important things in this ‘guess - compute consequences - compare with experiment’ business is to know when you are right. It is possible to know when you are right way ahead of checking all the consequences. You can recognize truth by its beauty and simplicity. It is always easy when you have made a guess, and done two or three little calculations to make sure that it is not obviously wrong, to know that it is right. When you get it right, it is obvious that it is right - at least if you have any experience - because usually what happens is that more comes out than goes in. Your guess is, in fact, that something is very simple. If you cannot see immediately that it is wrong, and it is simpler than it was before, then it is right.

The inexperienced, and crackpots, and people like that, make guesses that are simple, but you can immediately see that they are wrong, so that does not count. Others, the inexperienced students, make guesses that are very complicated, and it sort of looks as if it is all right, but I know it is not true because the truth always turns out to be simpler than you thought. What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting. And in that disagreement it must agree with nature.

If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you have made a great discovery. It is very nearly impossible, but not quite, to find any theory which agrees with experiments over the entire range in which all theories have been checked, and yet gives different consequences in some other range, even a theory whose different consequences do not turn out to agree with nature. A new idea is extremely difficult to think of. It takes a fantastic imagination.

Jimster41
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing the evolution of space-time geometry - especially an n-dimensional one inhabiting an (n+1)-dimensional space (the thread/paper I referenced above)?

Both seem to be saying that non-linearity is a hallmark of, and basically identical to, "discreteness" - though there must be some coherent support (i.e. something differentiable-manifold-like) to support the non-linear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...

Last edited:
Jimster41
I think his stuff is pretty interesting. It reminds me a lot of Winfree with his tori. I get it's out there but why no peer review even if said review was very critical?

 I see he refs Strogatz.

Last edited:
Gold Member
I get it's out there but why no peer review even if said review was very critical?

IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.

Auto-Didact
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing evolution of space time geometry - especially an n dimensional one inhabiting an n+1 dimensional space (the thread/paper I referenced above)?
There is a huge difference: lattice models are simplified (often regular) discretizations of continuous spaces which are exactly solvable, making approximation schemes such as perturbation theory superfluous (NB: Heisenberg incidentally wrote a very good piece on this topic in Physics Today, 1967). In other words, lattice models are simplifications that help to solve a small subset of the full nonlinear problem, based on certain 'nice' properties of the problem such as symmetry, periodicity, isotropy, etc.

On the other hand, iterative maps (also known as recurrence relations) are simply discrete differential equations, i.e. difference equations. Things that can be immensely difficult to work out analytically for nonlinear differential equations can sometimes become trivially easy for difference equations; the results of this discrete analysis can then be compared directly with the numerical analysis of the continuous case carried out by a computer. The generalisation of this discrete analysis to the full continuous case can then often be made using several techniques and theorems. In other words, the entire nonlinear problem can actually get solved by cleverly combining numerical techniques, computers and mathematics.
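To make the contrast concrete, here is a minimal sketch using the standard logistic equation (my choice of example, not taken from Manasson's papers): the continuous logistic ODE ##\dot x = rx(1-x)## can only relax monotonically toward its fixed point, while its discrete counterpart, the logistic difference equation, already bounces around irregularly for suitable ##r##.

```python
# Continuous vs discrete logistic dynamics (standard textbook example).
# The ODE x' = r*x*(1 - x) relaxes monotonically to x = 1, while the map
# x_{n+1} = r*x_n*(1 - x_n) oscillates irregularly for r in the chaotic range.

def logistic_ode_trajectory(r=2.0, x0=0.1, dt=0.01, steps=1000):
    """Integrate the continuous logistic equation with a simple RK4 scheme."""
    f = lambda x: r * x * (1 - x)
    xs, x = [x0], x0
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        xs.append(x)
    return xs

def logistic_map_trajectory(r=3.9, x0=0.1, steps=2000):
    """Iterate the discrete logistic map."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

ode = logistic_ode_trajectory()     # creeps monotonically up toward 1
mapped = logistic_map_trajectory()  # bounces around inside [0, 1] forever
```

The same quadratic nonlinearity produces only monotone relaxation in the continuous one-dimensional case, but chaos in the discrete case.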
Both seem to be saying that non-linearity is hallmark and basically identical to "discrete" though there must be some coherent support (i.e. differentiable-manifold-like) to support the non-linear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...
You misunderstand it. I will let you in on the best kept secret in nonlinear dynamics, which seems to make most physicists uncomfortable: Feigenbaum universality, when applicable, can predict almost everything about the extremely complicated physics of a system without knowing almost anything about the physics of that system, or indeed anything about physics whatsoever; even worse, this can be carried out almost entirely using mostly high-school-level mathematics.

I will give you an example to make things more clear: iterative maps can be used to carry out stability analysis of the fixed points and so describe the dynamics of a system. There are multiple theorems which show that all unimodal maps (such as a negative parabola or even a ##\Lambda## shape) have qualitatively identical dynamics and quantitatively almost the same dynamics (up to numerical factors and renormalization).

Importantly, all unimodal maps follow the same period-doubling route to chaos, and the Feigenbaum constant ##\delta## is the universal mathematical constant characterizing this phenomenon, very similar to how ##\pi## characterizes circularity. It cannot be stressed enough that ##\delta## naturally appears in all kinds of systems, giving it the same status of importance in mathematics as ##\pi##, ##e## and ##i##.
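As a small illustration (the logistic map and the bifurcation values below are standard textbook material, not tied to Manasson's construction), one can watch the period-doubling cascade directly by counting the distinct states a unimodal map settles into after transients die out:

```python
# Period doubling in the logistic map x_{n+1} = r*x*(1 - x): counting the
# distinct values visited after transients shows the 1 -> 2 -> 4 cascade.

def attractor_size(r, transients=1000, samples=64, decimals=6):
    """Number of distinct states visited once transients have died out."""
    x = 0.5
    for _ in range(transients):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(samples):
        x = r * x * (1 - x)
        seen.add(round(x, decimals))
    return len(seen)

# Successive period-doubling bifurcations occur at r1 = 3, r2 ~ 3.44949,
# r3 ~ 3.54409 (standard textbook values); the ratios of successive gaps
# converge to the Feigenbaum constant delta ~ 4.6692.
delta_estimate = (3.44949 - 3.0) / (3.54409 - 3.44949)  # crude first ratio
```

Already the first ratio of bifurcation gaps lands within a couple of percent of ##\delta \approx 4.6692##, which is the universality at work.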

Now the thing to realize is that period doubling bifurcations do not only occur in discrete systems; they can also occur in continuous systems. The only criteria such continuous systems need to satisfy are:
1. be at least three-dimensional (due to the existence and uniqueness theorem of analysis), i.e. three coupled ordinary differential equations (ODEs)
2. have a nonlinearity in at least one of these ODEs
3. have a tunable parameter in at least one of these (N)ODEs.
Given that the above criteria hold, one can then numerically integrate the system in time and use the Lorenz map technique to construct a discrete recurrence map of the local maxima over time of the numerical integration.

This is where the miracle occurs: if the resulting Lorenz map of the continuous system is unimodal for a given parameter, then the continuous system will display period doubling. This mapping doesn't even have to be approximable by a proper function, i.e. uniqueness isn't required!

Incidentally, this unimodal Lorenz map miracle as I have described it only directly applies for any strange attractor with fractal dimension close to 2 and Lorenz map dimension close to 1. It can be generalized, but that requires more experience and a little bit more sophisticated mathematics.
IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.
That's too harsh, and it fails to adequately describe our modern world of scientific superspecialization, especially from the point of view of interdisciplinary researchers. There are many other factors today which can prevent a publication from happening. For example, papers by applied mathematicians often tend to get refused by physics journals and vice versa, due to incompatible publication standards; the solution is then to settle for interdisciplinary journals, but depending on the subject matter these journals tend to be either extremely obscure or simply non-existent.

The right credentials and connections are sometimes practically necessary to get taken seriously, especially if you go as far left field as Manasson is going, and he obviously isn't in academia. Remember the case of Faraday, one of the greatest physicists ever, who was untrained in mathematics yet invented the field concept purely by intuition and experiment; today he would get rubbished by physicists to no end simply because he couldn't state what he was doing mathematically. Getting published therefore sometimes just isn't worth the trouble; this is why we are extremely lucky that online preprint services like the arXiv exist.

Jimster41
@Auto-Didact Thanks for such a substantial reply. Really.

Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one-dimensional, single quadratic maximum?

Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates there is periodicity (structure) to the mixture of order and chaos in non-linear maps that switch back and forth not just the rate of convergence (to chaos) of maps that... just converge to chaos?

You know never mind. Those aren't very good questions. I just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real textbooks. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.

Last edited:
Auto-Didact
@Auto-Didact Thanks for such a substantial reply. Really.
My pleasure. I should say that during my physics undergraduate days, there were only three subjects I really fell in love with: Relativistic Electrodynamics, General Relativity and Nonlinear Dynamics. They required so little, yet produced so much; it is a real shame in my opinion that neither of the last two seems to be a standard part of the undergrad physics curriculum (none of the other physics majors took them in my year, nor in the three subsequent years).

Each of these subjects simultaneously both deepened my understanding of physics and widened my view of (classical pure and modern applied) mathematics in ways that none of the other subjects in physics ever seemed to be capable of doing (in particular what neither QM nor particle physics were ever able to achieve for me aesthetically in the classical pure mathematics sense). It saddens me to no end that more physicists don't seem to have taken the subject of nonlinear dynamics in its full glory.
Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one-dimensional, single quadratic maximum?
To once again clarify, it doesn't just apply to iterative maps; it directly applies to systems of differential equations, i.e. to dynamical systems. Feigenbaum universality directly applies to the dynamics of any system of 3 or more coupled NDEs with any number of parameters.

The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well just directly study the dynamical system.

In fact, you would need to be very lucky to find a nonlinear dynamical system (NDS) which only has one parameter! I only know of one example of an NDS with only one nonlinearity yet it has 3 parameters, namely the Rössler system:
##\dot x=-y-z##
##\dot y=x+ay##
##\dot z=b+z(x-c)##

In order to actually carry out the Lorenz map technique I described earlier on this system, we need to keep two of the three parameters ##a##, ##b## and ##c## numerically constant to even attempt an analysis! Knowing which ones need to be held constant and which one needs to be varied is an art that you learn by trial and error.
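For illustration, here is a minimal sketch of that procedure, assuming the classic chaotic parameter values ##a = b = 0.2##, ##c = 5.7## (a standard choice, with ##a## and ##b## held fixed): integrate the system with RK4 and collect the successive local maxima of ##x(t)##. Plotting ##x^{max}_{n+1}## against ##x^{max}_n## would then reveal a nearly one-dimensional unimodal map.

```python
# Sketch of the Lorenz-map technique on the Rossler system, using the
# classic chaotic parameters a = b = 0.2, c = 5.7 (a and b held fixed,
# c playing the role of the tunable parameter).

def rossler_step(state, dt, a=0.2, b=0.2, c=5.7):
    """One RK4 step of the Rossler system x' = -y-z, y' = x+ay, z' = b+z(x-c)."""
    def f(s):
        x, y, z = s
        return (-y - z, x + a * y, b + z * (x - c))
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6 * (m1 + 2 * m2 + 2 * m3 + m4)
                 for s, m1, m2, m3, m4 in zip(state, k1, k2, k3, k4))

def lorenz_map_points(t_total=500.0, dt=0.01, transient=100.0):
    """Collect the successive local maxima of x(t) after an initial transient."""
    state = (0.1, 0.0, 0.0)
    maxima, prev, rising, t = [], 0.1, False, 0.0
    while t < t_total:
        state = rossler_step(state, dt)
        t += dt
        x = state[0]
        if x > prev:
            rising = True
        elif rising and x < prev:     # prev was a local maximum of x(t)
            if t > transient:
                maxima.append(prev)
            rising = False
        prev = x
    return maxima
```

The pairs ##(x^{max}_n, x^{max}_{n+1})## are what one would scatter-plot; if the cloud collapses onto a single-humped curve, the period-doubling scenario applies.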

To analyze any number of parameters simultaneously is beyond the capabilities of present-day mathematics, because it requires simultaneously varying, integrating and solving for several parameters; fully understanding turbulence, for example, requires this. This kind of mathematics doesn't actually seem to exist yet; inventing it would directly lead to a resolution of the existence and smoothness problem for the Navier-Stokes equations.

Luckily, we can vary each parameter independently while keeping the others fixed and there are even several powerful theorems which help us get around the practical limitations such as "the mathematics doesn't exist yet"; moreover, I'm optimistic that some kind of neural network might eventually actually be capable of doing this.
Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates the periodicity of order and chaos in non-linear maps that switch back and forth not just the rate of convergence to chaos?
Yes, if by periodicity of order and chaos you mean how the system goes into and out of chaotic dynamics.
Or at least that there is some geometry (logic) of the parameter space that controls the periodicity of switching...
Yes: for an iterative map, the straight line ##x_{n+1}=x_n## intersects the graph of the iterative map; these intersections define fixed points and so induce a vector field on this line. Varying the parameter ##r## directly leads to the creation and annihilation of fixed points; these fixed points constitute the bifurcation diagram in the parameter space ##(r,x)##.
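A minimal sketch of that fixed-point bookkeeping, using the standard logistic map ##x_{n+1} = rx_n(1-x_n)## as the concrete unimodal example (my choice of example): the intersections with the diagonal are the solutions of ##x = rx(1-x)##, and a fixed point is stable when ##|f'(x^*)| < 1##.

```python
# Fixed points of the logistic map f(x) = r*x*(1 - x): intersections of its
# graph with the diagonal x_{n+1} = x_n, i.e. solutions of x = r*x*(1 - x),
# namely x* = 0 and x* = 1 - 1/r. A fixed point is stable iff |f'(x*)| < 1.

def logistic_fixed_points(r):
    """Return (fixed point, is_stable) pairs for the logistic map."""
    fprime = lambda x: r * (1 - 2 * x)   # derivative of r*x*(1 - x)
    points = [0.0]
    if r != 0:
        points.append(1 - 1 / r)         # second intersection with the diagonal
    return [(x, abs(fprime(x)) < 1) for x in points]
```

At ##r = 2.8## the nonzero fixed point is stable (##|f'(x^*)| = |2 - r| = 0.8 < 1##); past the first bifurcation at ##r = 3## both fixed points are unstable and a 2-cycle takes over, which is exactly the creation/annihilation story above.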

For the full continuous state space of the NDS, i.e. in the differential equations case, the periodicity is equal to the number of 'loops' in the attractor characterizing the NDS; if the loops keep doubling as the parameters are varied, there will be chaos beyond some combination of parameters, i.e. an infinite number of loops, i.e. a fractal, i.e. a strange attractor.

This special combination of parameters is a nondimensionalisation of all relevant physical quantities; this is why all of this seems to be completely independent of any physics of the system. In other words, a mathematical scheme for going back from these dimensionless numbers to a complete description of the physics is "mathematics which doesn't exist yet".

The attractor itself is embedded within a topological manifold, i.e. a particular subset of the state space. All of this is completely clear visually by just looking at the attractors while varying parameters. This can all be naturally described using symplectic geometry.

To state things more bluntly, attractor analysis in nonlinear dynamics is a generalization of Hamiltonian dynamics by studying the evolution of Hamiltonian vector fields in phase space; the main difference being that the vector fields need not be conservative nor satisfy the Liouville theorem during time evolution.
You know never mind.
Too late! I went to the movies (First Man) and didn't refresh the tab before I finished the post.
Those aren't very good questions. I Just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real textbooks. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.
Glad to hear that, I recommend Strogatz and the historical papers. To my other fellow physicists: I implore thee, take back what is rightfully yours from the mathematicians!

Jimster41
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.

I read Sync by Strogatz. Does he have others? It was quite good, fascinating, though I wish he'd gone deeper into describing more of the math of the chase - sort of as you do above. IOW it was a bit pop. I bought and delved into Winfree's "Geometry of Biological Time" - an absolutely beautiful book. His 3D helix of fruit fly eclosion and the examples of sync and singularities he gives in the first few chapters are worth the price alone, but it becomes a real practitioner's bible pretty quickly.

The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems..." I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS. But back to the topic of this thread (maybe flipping its title while at the same time finding a lot of agreement in content). Don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDS's exist in appearance only, from a distance, because iterated maps are fundamental...

I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise - he starts off with a (really prettily written) review of the philosophical history behind the particle/field duality; Theodosius, Democritus et al. I am taking my time and expecting a really nice ride. It looks painfully brief tho.

You ever heard of, or read, Nowak's "Evolutionary Dynamics"? It's one of those few Schroeder-like ones. And fascinating. After Rovelli's reminder of Einstein's important work re Brownian motion and the "Atomic Theory", I am wrestling with the question of whether Einstein's method isn't the same thing Nowak lays out in his chapter on evolutionary drift - which really took me some time to grok - blowing my mind as it did. I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking - using just an assertion of discrete iteration. Which made me sure I had misunderstood - since spontaneous symmetry breaking seems to require a lot more fuss than that.

Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...

Last edited:
Auto-Didact
Quick reply, since I wasn't entirely satisfied with this either:
The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well just directly study the dynamical system.
I should clarify this; saying that the iterative map is "just a tool" is a very physics oriented way of looking at things, but it is essential (also partially because of the possibility to carry out experiments) to be able to look at it in this way; physicists trump mathematicians in being capable of doing this.

The first point is that iterative maps, being discrete, allow maps which aren't single-valued functions, i.e. for a single input ##x## you can get several (even infinitely many) outputs ##y##; this violates uniqueness and therefore makes doing calculus impossible.

The second point is that there are several kinds of prototypical iterative mapping techniques which to the physicist are literally tools, in the same sense that e.g. the small-angle approximation and perturbation theory are merely tools. These prototypical iterative mapping techniques are:
- the Lorenz map, constructible using only one input variable, as I described before.
- the Poincaré map, which is a section through the attractor which maps input points (i.e. the flow on a loop) ##x_n## within this section to subsequent input points ##x_{n+1}## which pass through this same section.
- the Hénon map, which, unlike the other two, is literally just a discrete analog of an NDS, consisting of two coupled difference equations with two parameters; in contrast to the continuous case, attractors in this map can already display chaos in just a two-dimensional state space.
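A minimal sketch of the Hénon map with its classic parameter values ##a = 1.4##, ##b = 0.3## (the standard strange-attractor choice), showing both the bounded attractor and the sensitive dependence on initial conditions that marks chaos in this two-dimensional state space:

```python
# The Henon map: x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n.
# With the classic a = 1.4, b = 0.3 the orbit stays on a bounded strange
# attractor while nearby orbits separate rapidly (chaos in 2D).

def henon_orbit(x0, y0, a=1.4, b=0.3, steps=100):
    """Iterate the Henon map from (x0, y0)."""
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        x, y = 1 - a * x * x + y, b * x
        pts.append((x, y))
    return pts

orbit_a = henon_orbit(0.0, 0.0)
orbit_b = henon_orbit(1e-8, 0.0)    # the same orbit, nudged by 1e-8
separations = [abs(p[0] - q[0]) for p, q in zip(orbit_a, orbit_b)]
# the orbits stay bounded, yet the tiny nudge grows by many orders of magnitude
```

Scatter-plotting `orbit_a` would trace out the familiar banana-shaped strange attractor.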

For completeness, in order to understand the numerical parameters themselves better from a physics perspective, check out this post. I'll fully read and reply to the rest of your post later.

Auto-Didact
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.
No problem.
I read Sync. by Strogatz. Does he have others? It was quite good, fascinating. Though I wish he'd gone deeper into describing more of the math of the chase - sort of as you do above. IOW It was a bit pop. I bought and delved into Winfree's "Geometry of Biological Time" absolutely beautiful book. His 3D helix of fruit fly eclosion and the examples of sync and singularities he gives in the first few chapters is worth the price alone but it becomes a real practitioners bible pretty quickly.
Strogatz' masterpiece is his textbook on nonlinear dynamics and chaos theory. Coincidentally, Winfree's book was put on my to read list after I read Sync a few years ago; the problem is my list is ever expanding, but I'll move it up a bit since you say it's more than pop.
The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems..." I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS.
In my previous post I addressed how some maps (like the Lorenz and Poincaré maps) are 'just tools', just like how perturbation theory is merely a tool. I'll add to that the observation that the attractors in some actually simplified and discretized versions of the continuous NDS (like the two-dimensional Hénon map) can have problems at the edges of the attractor, with values going off to infinity; in proper attractors, i.e. in the continuous case with three or more dimensions, such problems do not occur, which shows that the discretized, reduced versions are nothing but idealized approximations in some limit.
But back to the topic of this thread (maybe flipping it's title while at the same time finding a lot of agreement in content). Don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDS's exist in appearance only, from a distance, because iterated maps are fundamental...
Perhaps, but unlikely since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally Strogatz, his former student Watts and a guy named Barabasi are pioneers in this new field. For a textbook on this subject, search for "Network Science" by Barabasi.
I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise - he starts off with a (really prettily written) review of the philosophical history behind the particle/field duality; Theodosius, Democritus et al. I am taking my time and expecting a really nice ride. It looks painfully brief tho.
I read it awhile ago, back to back with some of his other works, see here.
You ever heard of, or read, Nowak's "Evolutionary Dynamics"? It's one of those few Schroeder-like ones. And fascinating. After Rovelli's reminder of Einstein's important work re Brownian motion and the "Atomic Theory", I am wrestling with the question of whether Einstein's method isn't the same thing Nowak lays out in his chapter on evolutionary drift - which really took me some time to grok - blowing my mind as it did.
I'll put it on the list.
I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking - using just an assertion of discrete iteration. Which made me sure I had misunderstood - since spontaneous symmetry breaking seems to require a lot more fuss than that.
In my opinion, all the fuss behind spontaneous symmetry breaking is actually far less deep than what particle physicists conventionally convey, but my point of view is clearly an unconventional one among physicists, because I think QT is not fundamental, i.e. that the presumed fundamentality of operator algebra and group theory in physics is a hopelessly misguided misconception.
Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...
It wasn't bad, but I was expecting more; I actually saw 'Bohemian Rhapsody' the same day. They are both dramatized biography films, with clearly different subjects, but if I had to recommend one, especially if you are going with others, I'd say go watch Bohemian Rhapsody instead of First Man.

Jimster41
Perhaps, but unlikely since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally Strogatz, his former student Watts and a guy named Barabasi are pioneers in this new field. For a textbook on this subject, search for "Network Science" by Barabasi.

Well, I hadn't considered the difference to be honest and in hindsight I can see why it's important to distinguish...
But I'm really going to have a think, I think, on just what the distinction implies. It sharpens my confusion w/respect to how a continuous support can spontaneously generate discrete stuff vs. the seemingly intuitive nature of things going the other way - where discrete stuff creates an illusion of continuity.

The book you mention looks right on target...

I assume you knew his site existed (an on-line version of the book). I just found it but I'm a bit afraid to post the link here. I think I will have to own the actual book tho...

I am also really looking forward to Bohemian Rhapsody.

Gold Member
Okay I meant to come back to this. As I said I agree with you in the main. It's more I'm just not sure what you're actually disagreeing with and I think you're being very dismissive of a field without providing much reason.

It's more important than you realize, as it makes or breaks everything even given the truth of the 5 other assumptions you are referring to. If for example unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
Which no-go theorems? Not PBR, not Bell's, not the Kochen-Specker, not Hardy's baggage theorem, not the absence of maximally epistemic theories. What are these many theorems?

Bell's theorem for example would survive, because it doesn't make the same assumptions/'mistakes' some of the other do.
Most of the major no-go theorems take place in the same framework as Bell's theorem, e.g. Kochen-Specker, Hardy. What's an example of one that could fail while Bell's would still stand?

I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property
No, it mightn't be, but nobody is saying it is. It more highlights an interesting possibility: that you might need an unmeasurable space, and those are never really looked at.

Yes, definitely.
Sorry, but you really think most of the no-go theorems are nonsense that's as useful as saying "physics uses numbers"? The PBR theorem, the Pusey-Leifer theorem, etc. are just contentless garbage? If not, could you tell me which are?

I still don't think taking the state space to be "at least measurable" is devoid of content and as meaningful as saying "physics uses numbers". It's setting out what models are considered. In fact I would say it strengthens the theorems considering how weak an assumption it is.

Also I still don't understand how it is necessarily epistemic. A measurable space might be put to an epistemic use, but I don't see how it is intrinsically so.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
So your main objection to the framework is that it might unfairly eliminate a model in the early stages of development? In other words, an earlier simpler version of an idea might have some interesting insights, but it's early form, being susceptible to the no-go theorems, might be unfairly dismissed without being given time to advance to a form that doesn't and might help us understand/supersede QM?

Twodogs
This is an intriguing proposition. As noted, self-organizing dynamics occur on a myriad of scales, are robust and have an extensive mathematical basis. Speaking with a very superficial understanding, it feels organic rather than mechanistic and potentially rooted in a new foundational paradigm. Having just read something about Bohmian mechanics it feels like the two might go together.

Auto-Didact
I assume you knew his site existed (an on-line version of the book). I just found it but I'm a bit afraid to post the link here. I think I will have to own the actual book, though...
Whose book is online?
It more highlights an interesting possibility, that you might need an unmeasurable space and those are never really looked at.
Now this is indeed an intriguing possibility.
Sorry, but you really think most of the no-go theorems are nonsense that's as useful as saying "physics uses numbers"?
I was being a bit derisive of them, they clearly aren't mere nonsense, but I would say that you yourself are making light of the statement that physics uses numbers; the fact that physics uses real numbers and complex numbers is quite profound in its own right, perhaps more so than the state space being measurable.

My point is that no-go theorems which are about theories instead of about physical phenomena aren't actually theorems belonging to physics, but instead theorems belonging to logic, mathematics and philosophy; see e.g. Gleason's theorem for another such extra-physical theorem pretending to be physics proper.

There is no precedent whatsoever within the practice of physics for such kinds of theorems, which is why it isn't clear at all that the statistical utility of such theorems for non-empirical theory selection is actually a valid methodology, and there is a good reason for that: how would the sensitivity and specificity w.r.t. the viability of theories be accounted for, if the empirically discriminatory test is a non-empirical theorem?

It is unclear whether such a non-empirical tool is epistemologically - i.e. scientifically - coherently capable of doing anything else except demonstrating consistency with unmodified QM/QFT. If this is all the theorems are capable of, sure they aren't useless, but they aren't nearly as interesting if QM is in fact in need of modification, just like all known theories in physics so far were also in need of modification.

Physics is not mathematics, philosophy or logic; it is an empirical science, which means that all of this would have to be answered before advising or encouraging theorists to practically use such theorems in order to select the likelihood of a theory beyond QM in such a statistical manner. To put it bluntly, scientifically these theorems might just end up proving to be 'not even wrong'.
If not could you tell me which are?
I'll get back to this.
Also I still don't understand how it is necessarily epistemic. A measurable space might be put to an epistemic use, but I don't see how it is intrinsically so.
If some necessary particular mathematical ingredients such as geometric or topological aspects are removed, physical content may be removed as well; what randomly ends up getting left may just turn out to be irrelevant fluff, physically speaking.
So your main objection to the framework is that it might unfairly eliminate a model in the early stages of development? In other words, an earlier, simpler version of an idea might have some interesting insights, but its early form, being susceptible to the no-go theorems, might be unfairly dismissed without being given time to advance to a form that isn't susceptible and might help us understand/supersede QM?
Partially yes, especially given the lack of precedent for using theorems (which might belong more to mathematics or to philosophy instead of to physics) in such a non-empirical statistical selection procedure.
This is an intriguing proposition. As noted, self-organizing dynamics occur on a myriad of scales, are robust and have an extensive mathematical basis. Speaking with a very superficial understanding, it feels organic rather than mechanistic and potentially rooted in a new foundational paradigm. Having just read something about Bohmian mechanics it feels like the two might go together.
There seems to be at least one link with BM, namely that Manasson's model seems to be fully consistent with Nelson's fully Bohmian program of stochastic electrodynamics.

Auto-Didact
To get back to this:
It more highlights an interesting possibility, that you might need an unmeasurable space and those are never really looked at.
I said earlier that that was an intriguing possibility, but this is actually my entire point: monkeying with the topology and/or the fractality of (a subset of a) space may influence its measurability.

Therefore prematurely excluding theories purely on the basis of their state spaces being (or locally seeming) measurable is, in theoretical practice, almost guaranteed to lead to a high degree of false-positive exclusions.

Jimster41
Fra
I agree that being "measurable" is a key topic in this discussion, in particular to consider the physical basis of what being measurable is. In a probabilistic inference the measure is essential in order to quantify and rate empirical evidence. This is essential to the program, so I would say that the insight is not to release ourselves from requirements of measurability; that would be a mistake in the wrong direction. I think the insight must be that what is measurable relative to one observer need not be measurable with respect to another observer. This all begs for a new intrinsic framework for probabilistic inference, one that lacks global or observer-invariant measures.

If we think about how intrinsic geometry originated from asking how a life form unaware of an embedding geometry can infer geometry from local experiments within the surface, and translate that to asking how an information-processing agent unaware of the embedding truth can infer things from incomplete knowledge confined only to its limited processing power: what kind of mathematics will that yield us? Then let's try to phrase or reconstruct QM in these terms. Note that this would forbid things like infinite ensembles or infinite repeats of experiments. It will force us to formulate QM foundations with the same constraints we live with for cosmological theories.

A side note: Merry Christmas :)

/Fredrik

Auto-Didact
Gold Member
The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems. Moreover, the dissipative structure gives rise to discontinuities within the equations and so unintentionally also gives an actual prediction/explanation of state vector reduction, i.e. it offers an actual resolution of the measurement problem of QT

Unless I seriously missed something in that article, it isn't very convincing at all. In particular, he describes this self organization as a self organization of the vacuum. However, without quantum field theory, you have nothing which defines a vacuum state and nothing to self organize.

Auto-Didact
In particular, he describes this self organization as a self organization of the vacuum. However, without quantum field theory, you have nothing which defines a vacuum state and nothing to self organize.
The author - without planning to do so - makes a (seemingly) unrelated mathematical argument based on a clear hypothesis, and then spontaneously goes on to derive the complete dynamical spinor state set, i.e. the foundation of Dirac theory, from first principles by doing pure mathematics in state space on purely empirical grounds.

Quantum field theory, despite being the original context in which vacuum states were predicted theoretically and discovered experimentally, certainly isn't the only possible theory capable of describing the vacuum.

After experimental discovery has taken place, theorists are free to extend the modelling of any empirically occurring phenomenon using any branch of mathematics which seems fit to do so: this is how physics has always worked.

For the vacuum this proliferation of models has already occurred, i.e. the vacuum isn't a unique feature of QFT anymore; any theory aiming to go beyond QFT has to describe the vacuum as part of nature; how it does so depends on the underlying mathematics.

Gold Member
For the vacuum this proliferation of models has already occurred, i.e. the vacuum isn't a unique feature of QFT anymore; any theory aiming to go beyond QFT has to describe the vacuum as part of nature; how it does so depends on the underlying mathematics.
Sure, but the vacuum belonging to a theory must be part of that particular theory. I really do not see where the paper develops the "stuff" (for want of a better word) from which anything self organizes. One cannot discuss self organizing without in some way defining what it is that is self organizing, what its properties are, etc. The author's aims are not to go beyond qft, but to replace it, given that the thrust is that quantization isn't fundamental. In qft, quantization is fundamental.

Auto-Didact
I really do not see where the paper develops the "stuff" (for want of a better word) from which anything self organizes.
He describes how the equation should look qualitatively; doing this is standard methodology in dynamical systems research, because equations of a given type are prototypical for their class, especially given Feigenbaum universality, which he also derives from his Ansatz.
One cannot discuss self organizing without in some way defining what it is that is self organizing, what its properties are, etc.
He posits that the vacuum field, an experimentally established phenomenon, has inner dynamics which make it self-organizing. Establishing the mathematical properties of this dynamical system is at this stage more important than establishing the actual equation; moreover, his argument is so general that it applies to any equation in this class, if such equations exist.
The author's aims are not to go beyond qft, but to replace it, given that the thrust is that quantization isn't fundamental.
'Going beyond' and 'replacing' are often used as synonyms in this context. For example, GR went beyond Newtonian theory and replaced it; arguing this point any further is purely a discussion on semantics.

The point is that any kind of vacuum field - fully of a purely QFT type or otherwise - assuming it has a particular kind of internal dynamics, automatically seems to reproduce Dirac theory, SM particle hierarchy & symmetry groups, coupling constants and more; if anything this sounds too good to be true.
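As an aside on Feigenbaum universality, which carries much of the weight in that argument: for a whole class of dissipative maps, the ratio of successive period-doubling intervals converges to the same constant δ ≈ 4.669. A quick numerical check using the logistic map (my own illustration, not something from the paper) shows the convergence:

```python
# Illustration (not from the paper): estimating Feigenbaum's delta from the
# logistic map x -> r*x*(1-x).  R_n is the "superstable" parameter at which
# the critical point x = 1/2 lies on the 2^n-cycle; the spacing ratios
# (R_n - R_{n-1}) / (R_{n+1} - R_n) converge to delta ~ 4.669.

def g(r, n):
    # f^(2^n)(1/2) - 1/2: zero exactly at the superstable parameter R_n
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5

def superstable(n, r_start):
    # walk forward from just past the previous root until g changes sign,
    # then bisect down to the new root
    step = 1e-3
    r = r_start
    while g(r, n) * g(r + step, n) > 0:
        r += step
    lo, hi = r, r + step
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(lo, n) * g(mid, n) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

R = [2.0]                                  # R_0 = 2 is exact
for n in range(1, 6):
    R.append(superstable(n, R[-1] + 1e-4))

for n in range(1, 5):
    print((R[n] - R[n - 1]) / (R[n + 1] - R[n]))   # -> 4.71, 4.68, 4.66, 4.67
```

The same limit δ would come out of any map in this universality class, which is exactly why an argument can get away with not specifying the equation.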

Auto-Didact
I left out this bit in the previous post:
In qft, quantization is fundamental.
The core idea is that a vacuum field with a particular kind of internal dynamics necessarily has a particular state space with special kinds of attractors in it, which automatically leads to a display of quantized properties for any system in interaction with this field, i.e. for particles; this makes the experimentally determined quantum nature of particles, their properties, orbits and possibly even their very existence, fully an effect of always being in interaction with the vacuum field.
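A toy illustration of that idea (mine, not Manasson's; his paper gives no explicit equation for the vacuum dynamics): in a dissipative nonlinear map the long-run states form a discrete attractor set independent of the initial condition, so "quantization" falls out of the dynamics rather than being postulated.

```python
# Stand-in dynamics: the logistic map at r = 3.5, where the attractor is a
# 4-cycle.  Any initial condition in (0, 1) relaxes onto the same four
# "allowed" states -- discreteness emerges from dissipation, not postulates.

def attractor(r, x0, transient=2000, sample=64):
    x = x0
    for _ in range(transient):   # let the transient die out
        x = r * x * (1.0 - x)
    points = set()
    for _ in range(sample):      # record the settled orbit
        x = r * x * (1.0 - x)
        points.add(round(x, 6))  # merge numerically identical states
    return sorted(points)

# Two very different starting points end up on the same 4-point attractor:
print(attractor(3.5, 0.1))
print(attractor(3.5, 0.9))
```

Of course this shows only that attractors can mimic quantization in general, not that the vacuum actually has such dynamics.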

Twodogs
It might be useful to look at self-organizing systems in their better-known habitat. There are 350-some genes and certain cell functions that are present in every living thing on earth, plant and animal. Biologists have triangulated their origin back three billion years to a hypothetical single celled organism identified as the “last universal common ancestor,” (LUCA). So here is a dissipative dynamical system that has not only long endured, but radically extended its phase space.

Is there a LUCA analogue for physics? Is there a dynamical seed from which all else follows? It would need to be an iterative process with a timeline rather than a one-off event. Note that in an iterative process, the distinction between cause and effect and the notion of retro-causality become less meaningful. Can one identify the fundamental origin of iterative processes?

Fra
It might be useful to look at self-organizing systems in their better-known habitat. There are 350-some genes and certain cell functions that are present in every...
I agree that physicists have a lot to learn from analyzing the evolution of life. What are the analogies to "laws", "observers" and "experiments" in the game of life?
Can one identify the fundamental origin of iterative processes?
I think this is a good thought, and this is something I've been thinking about for quite some time, but what will happen is something like this:

You need mathematical abstractions of observers and their behavior which correspond to "lifeforms". Then ponder the mechanisms by which these abstractions interact and form each other's environment. Then try to see how the total theory space can be reduced in complexity, and what the origin of things is.

The phase I am currently in uses abstractions that are like interacting information-processing agents, where the "DNA of law" can be thought of as the computational code that determines the dice used to play. But each die is fundamentally hidden from the other agents, whose collective ignorance supports acting as if it did not exist, so this does not qualify as a hidden variable model. Agents also have inertia associated with their codes. This is how volatile codes can easily mutate but inertial ones cannot.

No matter how conceptually nice this is, there is a huge gap between this toy model and making contact with low-energy physics as we know it.

Conceptually the abstractions here are at the highest possible energy scale. But the trick to avoid getting lost in a landscape of possible high-energy models, given the low-energy perspective, is to also consider the observer to be in the high-energy domain - not in the low-energy lab frame from which we normally do scattering statistics in QFT.

No one is currently interested in toy models along these lines though, which is why the "activation energy" for this approach to publish something that normal physicists can relate to is huge.

Perhaps if there were a new discipline in this direction, there would be a community in which partial progress could see the light of day.

/Fredrik

Auto-Didact
It might be useful to look at self-organizing systems in their better-known habitat. There are 350-some genes and certain cell functions that are present in every living thing on earth, plant and animal.
Last time I checked (~2010), the mathematics behind this (i.e. evolution by natural selection) hadn't been properly straightened out yet, apart from grossly simplified models which weren't necessarily generalizable. If it has been worked out, the analogy might be clearer.
Is there a LUCA analogue for physics? Is there a dynamical seed from which all else follows?
The author of this model proposes that there is a LUCA for the next two generations of fermions, with the vacuum field being the ancestor to all. There is an illustration of this in the paper (Figure 1). I'm sure in high energy particle physics there are tonnes of models which have such structure.
It would need to be an iterative process with a timeline rather than a one-off event.
Actually a one-off time event is sufficient, given the fundamentality of the system: if a universe exists with nothing else but a dynamical vacuum field, any perturbation of this field capable of causing feedback to the field could lead to the scenario the author describes. The existence of the dynamical field alone then already fully determines the state space of the vacuum including all its attractors.
Can one identify the fundamental origin of iterative processes?
I see no reason why not, precisely because they can be fitted to mathematical models of iteration and then the origin can be worked out by studying the model.

Last edited:
Gold Member
Auto-Didact

I found the following value for δ. This gives
$$\alpha = 2\pi\delta^2 \cong 136.98.$$
This is ##\cong 1/\alpha## rather than ##\cong \alpha##.

Might you have a typo? Perhaps you should have
$$\alpha = \frac{1}{2\pi\delta^2}.$$

Regards,
Buzz
Yeah, it is a typo; it should've been $$\alpha = (2\pi\delta^2)^{-1} \cong \frac {1} {137}.$$ I immediately wrote up and posted this thread from my smartphone directly after I finished reading the paper, without checking the (LaTeX) equations.

I actually spotted this typo when I reread the thread for the first time later that day after I had posted it, but I couldn't edit it anymore.
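For the record, here is the corrected formula evaluated numerically, using the accepted value of Feigenbaum's constant; this just reproduces the numbers discussed above:

```python
import math

# Manasson's claimed relation: the fine-structure constant from Feigenbaum's
# universal period-doubling constant delta (a constant of nonlinear
# dynamics, not a fitted parameter).
delta = 4.669201609102990
alpha = 1.0 / (2.0 * math.pi * delta ** 2)

print(alpha)        # ~0.00730, vs the measured alpha ~0.0072974
print(1.0 / alpha)  # ~136.98, vs the measured 1/alpha ~137.036
```

The residual ~0.05 gap in 1/α is exactly the approximation error questioned in the next post.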

Auto-Didact
What is the physics implication of the approximation error of ~0.06 in 1/α using the formula with δ?
My first hunch would be that this numerical discrepancy arises from the existence of an imperfection parameter in addition to the bifurcation parameter, i.e. the proper level of analysis for addressing the numerical error is to use the methods of catastrophe theory to study cusps in the surface in the higher-dimensional parameter space consisting of the state ##\psi##, a bifurcation parameter ##r## and an imperfection parameter ##h##.
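To make that a bit more concrete (the normal form below is my own assumption about how an imperfection parameter would enter; the paper does not give one): for equilibria ##\psi## of the cusp normal form ##\psi^3 - r\psi - h = 0##, a small ##h \neq 0## tilts the symmetric pitchfork that ##h = 0## would produce, shifting where states appear or vanish.

```python
# Equilibrium count for the cusp normal form psi^3 - r*psi - h = 0
# (a depressed cubic t^3 + p*t + q with p = -r, q = -h).  Its discriminant
# 4*r**3 - 27*h**2 is positive exactly when there are three real equilibria,
# i.e. inside the cusp region of the (r, h) parameter plane.

def n_equilibria(r, h):
    disc = 4.0 * r ** 3 - 27.0 * h ** 2
    return 3 if disc > 0 else 1

print(n_equilibria(1.0, 0.0))    # 3: past the pitchfork, no imperfection
print(n_equilibria(1.0, 0.5))    # 1: the imperfection destroyed two states
print(n_equilibria(-1.0, 0.0))   # 1: before the bifurcation
```

Whether this is actually the right mechanism for the ~0.05 discrepancy is, of course, pure speculation at this point.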

Buzz Bloom
Twodogs

"Last time I checked (~2010), the mathematics behind this (i.e. evolution by natural selection) hadn't been properly straightened out yet, apart from grossly simplified models which weren't necessarily generalizable. If it has been worked out, the analogy might be clearer."

The need for scientific rigor is understood, but still a phenomenon may be real without an exacting mathematical description. In the case of LUCA, I believe there is a shovel-worthy trail of bread crumbs leading to its approximation.

"Actually a one-off time event is sufficient, given the fundamentality of the system: if a universe exists with nothing else but a dynamical vacuum field, any perturbation of this field capable of causing feedback to the field could lead to the scenario the author describes. The existence of the dynamical field alone then already fully determines the state space of the vacuum including all its attractors."

This is interesting. I don’t want to waste your time, but I have questions. You present what I take to be a schematic of a kind of minimal, prototypical universe and identify its necessary ingredients. Setting them on the lab bench, we have a dynamical vacuum field, a perturbation and its associated feedback.

I read that fields were the first quantities to emerge from the initial flux and they seem like elegant dynamical constructs to arise at a time of maximal stress unless strongly driven by an underlying principle.

And feedback itself is not a given in an outwardly dispersing wave impulse without a displacement constraining boundary condition. Where does that arise?

For reasons above, are the dynamics of quantum fields an ‘integrative level’ of description that arises from the phenomena of a lower level?

This is a rather large question, but it does affect the substrate upon which Manasson’s model would be operating.
Thanks,

Auto-Didact
I read that fields were the first quantities to emerge from the initial flux and they seem like elegant dynamical constructs to arise at a time of maximal stress unless strongly driven by an underlying principle.
I'm not too keen on speculating when exactly the scenario which the author describes might have occurred; without giving explicit equations, anything going further than just stating that the author's picture is mathematically consistent seems to me to be baseless speculation.
And feedback itself is not a given in an outwardly dispersing wave impulse without a displacement constraining boundary condition. Where does that arise?
Due to the conservative nature of the initially chargeless field itself, any fluctuation which has a non-neutral charge will lead to a polarization of the surrounding field's charge toward the opposite sign; this balancing act is limited by the speed of light and will therefore lead to interaction between the charges, i.e. feedback.
For reasons above, are the dynamics of quantum fields an ‘integrative level’ of description that arises from the phenomena of a lower level?
If by 'an integrative level of description' you mean 'emergent from underlying mechanics', then the answer is yes.

Last edited:
*now*
Hi, not having read everything here, but would any possible results from the tests proposed by Bose et al. and Marletto and Vedral for gravitationally induced entanglement likely pose any problems for this picture?

Auto-Didact
Hi, not having read everything here, but would any possible results from the tests proposed by Bose et al. and Marletto and Vedral for gravitationally induced entanglement likely pose any problems for this picture?
The model as constructed only incorporates the forces of the SM.

Suffice it to say it might be generalizable to include gravitation, but that would probably make the model less natural, e.g. by modifying the correspondence between the three generations of known particles and bifurcations, as well as predicting a wrong gravitational coupling constant.

*now*
Suffice it to say it might be generalizable to include gravitation, but that would probably make the model less natural, e.g. by modifying the correspondence between the three generations of known particles and bifurcations, as well as predicting a wrong gravitational coupling constant.

Ok, thanks very much for the interesting response, Auto-Didact.