Quantization isn't fundamental

The discussion centers on the idea that quantization in particle physics may not be fundamental but rather an emergent phenomenon resulting from nonlinear dynamics and dissipative processes. Manasson's model suggests that elementary particles can be viewed as self-organized systems, with quantized properties arising from their interactions near stable superattractors. This approach challenges conventional views on conservation laws and symmetries in quantum theory, proposing that these may be emergent rather than intrinsic. The paper also connects fundamental constants like the Planck constant and elementary charge through chaos theory, hinting at a deeper, nonlinear framework underlying quantum mechanics. Overall, the theory invites further investigation into the nature of particles and the foundations of quantum physics.
  • #91
Auto-Didact said:
Can't work given certain assumptions
Of course, as I have said, the theorems have assumptions, that's a given.

Auto-Didact said:
including the full validity of axioms of QM beyond what has been experimentally demonstrated
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.

Auto-Didact said:
If the only relevant property is that 'it supports integration', then you have removed all the physics and are left with just mathematics. 'It supports integration' is equally empty as the statement 'numbers are used in physics'.
It's not a proposal that the real space of states only has the property of supporting integration and nothing else. Remember how it is being used here. It is saying "If your model involves a state space that at least supports integration..."

So it constrains models where this (and four other assumptions) are true. It's not a proposal that nature involves only a set that involves integration and nothing else. The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field. Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?

I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather it is a presentation of general properties common to many models that attempt to move beyond QM and then demonstrating that from those properties alone one gets constraints.

i.e. Many models that attempt to replicate QM do have a state space that supports integration, and that, with four other properties, is all you need to prove some theorems about them. Again, all the actual models are richer and more physical than this, but some of their properties (less pleasant to some) follow from very general features like the integrability of the state space.

An analogue would be proving features of various metric theories of gravity. In such proofs you only state something like "the action possesses extrema", not because you're saying the action has that feature and nothing more, but because it's all you need to derive certain general features of such theories.

Auto-Didact said:
it would transform the manifold into exactly an epistemological object
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.

Auto-Didact said:
I'm not saying ##\mathcal{H}## shouldn't be involved
Well then coming back to where this originated, what makes it invalid as a definition of ##\psi##-ontic?
 
  • #92
Paul Colby said:
So, it should be fairly straight forward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
Not necessarily, there are multiple routes:
1) Direct prediction of numerics based on experiment: this requires attractor reconstruction, and unfortunately that usually isn't simple. To discover the numerics, one would have to make very precise time series measurements, in this case of the vacuum polarization process and of extremely high-field electrodynamics, and then utilize the Ruelle-Takens theorem in order to identify the attractor; the problem here is that such precise experimentation seems to be viciously complicated.

2) Direct prediction of numerics by guessing the correct NPDE: characterizing the actual numerics of orbits in QM without precise measurements essentially requires knowing the complete equations. Knowing the correct class of equations - giving qualitatively correct predictions of the general characteristics - is only a minuscule help w.r.t. identifying the uniquely correct NPDE, obviously because there is no superposition principle to help here.

3) Indirect: utilize the constructed spinor theory to rederive the Dirac equation and then guess the correct nonlinearization thereof, one which incorporates renormalization as a physical process characterized by terms inside the new equation rather than as an ad hoc procedure applied to an equation. This is far easier said than done; theorists have been attempting it since Dirac himself, without any success so far.
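For readers unfamiliar with attractor reconstruction, the core of route 1 is delay-coordinate embedding, the technique behind the Ruelle-Takens/Takens results: a single measured scalar time series is unfolded into a higher-dimensional trajectory that is (under the theorem's conditions) diffeomorphic to the true attractor. A minimal generic sketch, nothing specific to Manasson's model; `delay_embed` and its parameters are my own illustrative naming:

```python
import numpy as np

def delay_embed(x, dim=3, tau=10):
    """Reconstruct a state-space trajectory from a scalar time series x
    using delay coordinates: X_n = (x_n, x_{n+tau}, ..., x_{n+(dim-1)*tau})."""
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: a pure oscillation embeds as a closed loop (a limit cycle);
# a chaotic series would instead trace out a reconstructed strange attractor.
t = np.linspace(0.0, 40.0 * np.pi, 8000)
series = np.sin(t)
traj = delay_embed(series, dim=3, tau=100)
print(traj.shape)  # (7800, 3)
```

In practice the hard part is exactly what is described above: obtaining a clean enough measured series, and choosing `dim` and `tau` (e.g. via false-nearest-neighbour and mutual-information criteria) so that the embedding is faithful.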
 
  • #93
DarMM said:
Of course, as I have said, the theorems have assumptions, that's a given.
It's more important than you realize, as it makes or breaks everything even given the truth of the 5 other assumptions you are referring to. If, for example, unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
DarMM said:
That depends on the particular theorem. Bell's theorem for example does not rely on the full validity of QM, similar for many others. This implies to me that you haven't actually looked at the framework and are criticising it from a very abstract position of your own personal philosophy of science and your impression of what it must be.
I have looked at the theorems. I should make clear that I am not judging all no-go theorems equally; I am saying each of them has to be judged on a case by case basis (like in law). Bell's theorem, for example, would survive, because it doesn't make the same assumptions/'mistakes' some of the others do. I am also saying that just because Bell's theorem is valid, it doesn't mean the others will be as well.
DarMM said:
The fact that you can prove theorems constraining such models shows it isn't as empty as "physics has numbers", to be honest that is just a kneejerk sneer at an entire field.
I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property of the state space ##\Lambda##.
DarMM said:
Do you think if the framework was as useful as just saying "physics has numbers" that it would be accepted into major journals?
Yes, definitely. I have seen 'very good' papers across many fields of science, including physics, finance, economics, neuroscience, medicine, psychology and biology with equally bad or worse underlying conceptual reasoning; a mere mention of the limitations of the conclusions due to the assumptions is all a scientist needs to do to cover himself. There is no reason to suspect physicists are better than other scientists in this aspect.

Journals, including major journals, tend to accept papers based on clear scientific relevance, strong methodology and clear results, and not based on extremely carefully reasoned out hypotheses; one can be as sloppy in coming up with hypotheses as one wants as long as a) one can refer to the literature that what he is doing is standard practice, and/or b) the hypothesis can be operationalized and that operationalization directly tested empirically.
DarMM said:
I think you are still treating the ontological models framework as an actual proposal for what nature is like, i.e. objecting to only looking at a state space that involves integration. Rather it is a presentation of general properties common to many models that attempt to move beyond QM and then demonstrating that from those properties alone one gets constraints.
That framework is a class of models, characterizing the properties of many models. The particular theorem(s) in question then argue against the entire class in one swoop.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
DarMM said:
I don't understand your use of epistemic I have to say. You seem to use it to mean abstract, but I don't see how a manifold is epistemic. "Stripped of physical content" maybe, but I don't know of any major literature calling this epistemic.
I quote the Oxford Dictionary:
Definition of 'epistemic' in English:
epistemic (adjective): Relating to knowledge or to the degree of its validation.

Origin: 1920s: from Greek epistēmē ‘knowledge’ (see epistemology) + -ic.
Definition of epistemology in English:
epistemology (noun, mass noun):
Philosophy
The theory of knowledge, especially with regard to its methods, validity, and scope, and the distinction between justified belief and opinion.

Origin: Mid 19th century: from Greek epistēmē ‘knowledge’, from epistasthai ‘know, know how to do’.
 
  • #94
Auto-Didact said:
Not necessarily, there are multiple routes:

Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question (what is the underlying system to which the "method" is applied?) is at present completely unknown. I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
 
  • #95
I'm trying to follow this discussion - which is interesting.
I am confused about how lattice models of quantum gravity fit (or don't) here.

My naive cartoon is that such a structure supports nonlinearity with manifold-like properties. I mean, isn't iteration all that is required for some fractal generation?
There is the a priori structure of a "causal lattice" of space-time geometry to explain, but as epistemological ontologies go that's pretty minimal. Most importantly, as I understand it anyway, there are real calculations that are getting close to building the SM from them. In fact @atyy posted one in this very forum. I found it very, very hard to get much from it though - really hard.

https://www.physicsforums.com/threads/lattice-standard-model-wang-wen.958852/
 
  • #96
Auto-Didact said:
I quote the Oxford Dictionary:
How is a differentiable manifold epistemic though?
 
  • #97
Paul Colby said:
Okay, so what I'm taking from your list of potential approaches is that the answer to my initial question (what is the underlying system to which the "method" is applied?) is at present completely unknown.
No, partially unknown. It is known that the correct equation:
- is a NPDE
- is reducible to the Dirac equation in the correct limit
- describes vacuum fluctuations
- has a strange attractor in its state space
- has a parameter displaying period doubling

An equation has to be constructed with the above things as given.
Paul Colby said:
I chose the example of the hydrogen atom because, at least in the current body of theory, it is a very specific and detailed dynamical system. Apparently, this new approach doesn't work on the hydrogen atom as is. It's going to be a hard sell.
I will let Feynman tell you why immediately holding such an unrealistic expectation of a preliminary model like this one is extremely shortsighted.
Feynman said:
For those people who insist that the only thing that is important is that the theory agrees with experiment, I would like to imagine a discussion between a Mayan astronomer and his student. The Mayans were able to calculate with great precision predictions, for example, for eclipses and for the position of the moon in the sky, the position of Venus, etc. It was all done by arithmetic. They counted a certain number and subtracted some numbers, and so on. There was no discussion of what the moon was. There was no discussion even of the idea that it went around. They just calculated the time when there would be an eclipse, or when the moon would rise at the full, and so on.

Suppose that a young man went to the astronomer and said, ‘I have an idea. Maybe those things are going around, and there are balls of something like rocks out there, and we could calculate how they move in a completely different way from just calculating what time they appear in the sky’. ‘Yes’, says the astronomer, ‘and how accurately can you predict eclipses ?’ He says, ‘I haven’t developed the thing very far yet’. Then says the astronomer, ‘Well, we can calculate eclipses more accurately than you can with your model, so you must not pay any attention to your idea because obviously the mathematical scheme is better’.

There is a very strong tendency, when someone comes up with an idea and says, ‘Let’s suppose that the world is this way’, for people to say to him, ‘What would you get for the answer to such and such a problem ?’ And he says, ‘I haven’t developed it far enough’. And they say, ‘Well, we have already developed it much further, and we can get the answers very accurately’. So it is a problem whether or not to worry about philosophies behind ideas.
In other words, what you are asking about is an important eventual goal post - one of several goal posts - which should eventually be attempted. Arguing from a QG or QM foundations perspective it is important, but it is definitely not the most important thing for the preliminary model to achieve at this stage.

In the ideal circumstance, this would be achieved in the format of a large research programme investigating the model, preferably with Manasson as the head of the research group and with PhD students carrying out the research.
 
  • #98
Auto-Didact said:
In other words, what you are asking is an important eventual goal post - one of several goal posts - which should be attempted to be reached.

If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
 
  • #99
DarMM said:
How is a differentiable manifold epistemic though?
Easy: if the manifold doesn't characterize an existing object, but merely characterizes knowledge. There are manifolds in information geometry constructed using the Fisher information metric; these constructions are purely epistemic.

In fact, all objects in statistics based on probability theory are completely epistemic, because probabilities (and all related quantities such as distributions, averages, variances, etc) aren't themselves objects in the world but encodings of the relative occurrence of objects in the world.
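As a concrete, standard illustration of such an epistemic manifold (textbook information geometry, not anything from Manasson's papers): for the two-parameter family of normal distributions ##\mathcal{N}(\mu,\sigma^2)##, the Fisher information metric
$$g_{ij}(\theta)=\mathbb{E}\!\left[\partial_i \log p(x;\theta)\,\partial_j \log p(x;\theta)\right]
\quad\Longrightarrow\quad
ds^2=\frac{d\mu^2+2\,d\sigma^2}{\sigma^2}$$
yields a curved two-dimensional Riemannian manifold (a hyperbolic plane, up to rescaling) whose points are probability distributions, i.e. states of knowledge about ##(\mu,\sigma)##, not physical objects.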

Physics, outside of QM, is different because it directly refers to actually existing - i.e. ontic - properties of objects in the world like mass and velocity. This is why physics is clearly an empirical science, while probability theory is part of mathematics.
 
  • #100
Paul Colby said:
If 50 years of string theory has taught us anything it's something about chicken counting and hatching.
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.
 
  • #101
Auto-Didact said:
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.

Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell. String theory will go away if we run out of string theorists. I always had a soft spot for Chew's bootstrap program: everything from unitarity and analyticity. The only problem is, it's an incomplete idea. Supersymmetry doesn't work, not because it's not a great thought, but because nature doesn't work that way as far as I can tell. One reason to persist in my questions is to see if there is anything to work with here. I don't see it. No shame in that and no problem either. Carry on.
 
  • #102
Paul Colby said:
Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell.
You're more lenient than I am; perhaps 'export to the mathematics department' is the correct euphemism.
There are other sciences that actually do work more or less in the way I describe, and there are literally mountains of empirical data on this. Such strategies of course have pros and cons:

Pros:
- Discourages adherents to remain loyal to some framework/theory
- Makes everyone involved in the field at least somewhat familiar with all current frameworks
- Increases marginal innovation rate due to luck by constantly exposing all aspects of a framework to a huge diversity of specialized views and methodologies
- Increases the likelihood of discoveries contingent upon the smooth operation of this system, i.e. "teamwork"

Cons:
- Time consuming in comparison with the current system
- Slow-down of particular projects, speed-up of others
- Less freedom to work on what you want just because you want to work on that
- Teamwork can lead to increased human errors, through miscommunication, frustration, misunderstanding, etc especially if one or more parties do not want to work together

Despite the cons, I think it may be a good idea to try and implement the strategy in the practice of theoretical physics. I will illustrate this by way of an example:

I said earlier (in route 1) that precise time series measurements of extremely high-field electrodynamics are necessary, while I - having never worked in that field - know next to nothing about making such measurements, nor about their state of the art; there are two choices: carry out this part of the research myself, or consult/defer it to another person.

If I "don't want to share the credit" I'll do it myself, with the danger that I'll continuously be adding more work for myself, certainly if I have to learn some new mathematics along the way. On the other hand, it is almost guaranteed that there are other theorists who do already have some experience in that field and/or are in direct contact with those who do.

A strategy like the one I described would make such a possible meeting not accidental but a mandatory next step in the scientific process. This means theorists would think twice before writing papers making any big claims, because all such big claims would have to get chased down immediately. This would probably lead to a new performance index, namely not just a citation count but also a 'boy who cried wolf'-count.
 
  • #103
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.

Apologies for getting heated in the previous post, I was mischaracterising you.
 
  • #104
DarMM said:
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.
:)
DarMM said:
Apologies for getting heated in the previous post, I was mischaracterising you.
No damage done, to be fair I have probably done some mischaracterization along the way as well.
 
  • #105
  • #106
@Paul Colby the dynamics of the underlying system, i.e. the vacuum, is described in a bit more detail in Manasson's 2017 paper linked above. I haven't read the 2018 paper yet.

There happens to be another version of QED called Stochastic Electrodynamics (SED), which is based on de Broglie-Bohm theory; SED incorporates the ground state of the EM vacuum as the pilot wave. SED is an explicitly non-local hidden variables theory, and particles immersed in this vacuum display highly nonlinear behavior.

The SED approach on the face of it sounds very similar to what Manasson has described in his 2017 paper linked above; this might actually represent a direct route to what you asked here:
Paul Colby said:
So, it should be fairly straight forward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
 
  • #107
@Auto-Didact Well, honest opinion: what I see of the 2017 paper so far is disappointing. It reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." I can't help but feel the only things appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.
 
  • #108
Paul Colby said:
@Auto-Didact Well, honest opinion: what I see of the 2017 paper so far is disappointing. It reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." I can't help but feel the only things appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.
I haven't finished reading it, but I agree. His 2008 paper is of higher quality, in my opinion.

That said, the 2017 paper, just like the earlier one, seems to construct several important concepts - both Fermi-Dirac and Bose-Einstein statistics, without even assuming the existence of identical particles - out of thin air. The whole treatment in 3.1 reeks of an extension of the Kuramoto model playing a role here; if true, that alone would already make the entire thing worthwhile in terms of mathematics.

For now, I want to end on something that Feynman said about the art of doing theoretical physics:
Feynman said:
One of the most important things in this ‘guess - compute consequences - compare with experiment’ business is to know when you are right. It is possible to know when you are right way ahead of checking all the consequences. You can recognize truth by its beauty and simplicity. It is always easy when you have made a guess, and done two or three little calculations to make sure that it is not obviously wrong, to know that it is right. When you get it right, it is obvious that it is right - at least if you have any experience - because usually what happens is that more comes out than goes in. Your guess is, in fact, that something is very simple. If you cannot see immediately that it is wrong, and it is simpler than it was before, then it is right.

The inexperienced, and crackpots, and people like that, make guesses that are simple, but you can immediately see that they are wrong, so that does not count. Others, the inexperienced students, make guesses that are very complicated, and it sort of looks as if it is all right, but I know it is not true because the truth always turns out to be simpler than you thought. What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting. And in that disagreement it must agree with nature.

If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you have made a great discovery. It is very nearly impossible, but not quite, to find any theory which agrees with experiments over the entire range in which all theories have been checked, and yet gives different consequences in some other range, even a theory whose different consequences do not turn out to agree with nature. A new idea is extremely difficult to think of. It takes a fantastic imagination.
 
  • #109
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing evolution of space time geometry - especially an n dimensional one inhabiting an n+1 dimensional space (the thread/paper I referenced above)?

Both seem to be saying that nonlinearity is the hallmark and basically identical to "discrete", though there must be some coherent support (i.e. differentiable-manifold-like) for the nonlinear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...
 
  • #110
I think his stuff is pretty interesting. It reminds me a lot of Winfree with his tori. I get it's out there but why no peer review even if said review was very critical?

[edit] I see he refs Strogatz.
 
  • #111
Jimster41 said:
I get it's out there but why no peer review even if said review was very critical?

IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.
 
  • #112
Jimster41 said:
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing evolution of space time geometry - especially an n dimensional one inhabiting an n+1 dimensional space (the thread/paper I referenced above)?
There is a huge difference: lattice models are simplified (often regular) discretizations of continuous spaces which are exactly solvable, making approximation schemes such as perturbation theory superfluous (NB: Heisenberg incidentally wrote a very good piece about this very topic in Physics Today 1967). In other words, lattice models are simplifications that help to solve a small subset of the full nonlinear problem based on certain 'nice' properties of the problem such as symmetry, periodicity, isotropy, etc.

On the other hand, iterative maps (also known as recurrence relations) are simply discrete differential equations, i.e. difference equations. Things that can be immensely difficult to work out analytically for nonlinear differential equations can sometimes become trivially easy for difference equations; the results of this discrete analysis can then be directly compared to a computer's numerical analysis of the continuous case. The generalisation of the discrete analysis to the full continuous case can then often be made using several techniques and theorems. In other words, the entire nonlinear problem can actually get solved by cleverly combining numerical techniques, computers and mathematics.
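To make the "difference equations are discretized differential equations" point concrete, here is a toy illustration of my own (not from Manasson's papers): forward-Euler discretization of the logistic ODE yields a logistic-map-type recurrence, and the discretization step itself acts as a tunable parameter that can push the discrete system into behaviour the continuous flow cannot show.

```python
# Forward-Euler discretization of the logistic ODE dx/dt = r*x*(1 - x)
# gives the difference equation x_{n+1} = x_n + h*r*x_n*(1 - x_n),
# which after rescaling is the standard logistic map.

def euler_logistic(x0, r, h, steps):
    x, out = x0, [x0]
    for _ in range(steps):
        x = x + h * r * x * (1 - x)
        out.append(x)
    return out

# Small step: the iterates faithfully track the smooth ODE solution,
# which flows monotonically to the fixed point x = 1.
smooth = euler_logistic(0.1, r=1.0, h=0.01, steps=2000)
print(round(smooth[-1], 4))  # 1.0

# Large step: the very same recurrence period-doubles into chaos,
# behaviour a one-dimensional continuous flow can never exhibit.
wild = euler_logistic(0.1, r=1.0, h=2.6, steps=2000)
```

With `h = 2.6` the recurrence is equivalent (after rescaling) to the logistic map at parameter 3.6, well inside its chaotic regime, so the iterates never settle down.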
Jimster41 said:
Both seem to be saying that non-linearity is hallmark and basically identical to "discrete" though there must be some coherent support (i.e. differentiable-manifold-like) to support the non-linear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...
You misunderstand it. I will let you in on the best kept secret in nonlinear dynamics, one which seems to make most physicists uncomfortable: Feigenbaum universality, when applicable, can predict almost everything about the extremely complicated physics of a system without knowing almost anything about the physics of that system, or indeed anything about physics whatsoever; even worse, this can be carried out almost entirely using mostly high school level mathematics.

I will give you an example to make things more clear: iterative maps can be used to carry out stability analysis of fixed points and so describe the dynamics of a system. There are multiple theorems which show that all unimodal maps (such as a negative parabola or even a ##\Lambda## shape) have qualitatively identical dynamics and quantitatively almost the same dynamics (up to numerical factors and renormalization).

Importantly, all unimodal maps follow the same period doubling route to chaos, and the Feigenbaum constant ##\delta## is the universal mathematical constant characterizing this phenomenon, very similar to how ##\pi## characterizes circularity. It cannot be stressed enough that ##\delta## naturally appears in all kinds of systems, putting it on the same level of importance in mathematics as ##\pi##, ##e## and ##i##.
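To show how cheaply ##\delta## can be estimated, here is a rough sketch of my own (the helper names and bisection tolerances are ad hoc, not from any paper in this thread): locate the first few period-doubling parameters of the logistic map numerically and take the ratio of successive parameter intervals.

```python
# Estimate the first period-doubling parameters of the logistic map
# x -> r*x*(1 - x) and form the ratio that approaches Feigenbaum's delta.

def attractor_period(r, transient=50000, max_period=64, tol=1e-9):
    """Period of the logistic-map attractor at parameter r, or 0 if no
    period <= max_period is detected (chaos, or too close to a bifurcation)."""
    x = 0.5
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return 0

def doubling_point(p, lo, hi, iters=30):
    """Bisect for the parameter where the attractor period first exceeds p."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        per = attractor_period(mid)
        if 0 < per <= p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r1 = doubling_point(1, 2.8, 3.2)   # period 1 -> 2, near r = 3
r2 = doubling_point(2, 3.2, 3.5)   # period 2 -> 4, near r = 3.4495
r3 = doubling_point(4, 3.5, 3.55)  # period 4 -> 8, near r = 3.5441
delta_1 = (r2 - r1) / (r3 - r2)    # first estimate; successive estimates
print(delta_1)                     # converge to delta = 4.669...
```

Note that nothing physical enters anywhere: the bifurcation structure and the limiting ratio are properties of unimodality alone, which is exactly the universality claim above.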

Now the thing to realize is that period doubling bifurcations do not only occur in discrete systems; they can also occur in continuous systems. The only criteria such continuous systems need to satisfy are:
  1. be at least three dimensional (due to the existence and uniqueness theorem of analysis) i.e. three coupled partial differential equations (PDEs)
  2. have a nonlinearity in at least one of these PDEs
  3. have a tunable parameter in at least one of these (N)PDEs.
Given that the above criteria hold, one can then numerically integrate one of these PDEs in time and then use the Lorenz map technique to construct a discrete recurrence map of the local maxima over time of the numerical integration.

This is where the miracle occurs: if the resulting Lorenz map of the continuous system is unimodal for a given parameter, then the continuous system will display period doubling. The mapping doesn't even have to be approximable by a proper function, i.e. uniqueness isn't required!

Incidentally, this unimodal Lorenz map miracle, as I have described it, only directly applies to strange attractors with fractal dimension close to 2 and a Lorenz map dimension close to 1. It can be generalized, but that requires more experience and slightly more sophisticated mathematics.
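For concreteness, here is a sketch of the Lorenz map construction for the classic Lorenz system (the standard textbook example, not Manasson's equations; the function names and step sizes are my own choices): integrate the flow, record the successive local maxima of ##z(t)##, and plot each maximum against the next.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Right-hand side of the classic Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate, discard the transient, and record z(t).
s, dt = np.array([1.0, 1.0, 1.0]), 0.005
zs = []
for i in range(120000):
    s = rk4_step(s, dt)
    if i > 20000:
        zs.append(s[2])
zs = np.array(zs)

# Successive local maxima of z(t); plotting maxima[n+1] against maxima[n]
# gives the Lorenz map, which for these parameters collapses onto a nearly
# one-dimensional unimodal (tent-shaped) curve.
interior = zs[1:-1]
maxima = interior[(interior > zs[:-2]) & (interior > zs[2:])]
print(len(maxima))  # several hundred maxima over ~500 time units
```

The striking part is exactly the point made above: a three-dimensional continuous flow has been reduced to a one-dimensional map of successive maxima, and everything in the unimodal-map toolbox then applies to it.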
Paul Colby said:
IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.
That's too harsh, and it doesn't nearly adequately describe our modern world of scientific superspecialization, especially from the point of view of interdisciplinary researchers. There are many other factors today which can prevent a publication from happening. For example, papers by applied mathematicians often tend to get refused by physics journals and vice versa due to differing disciplinary standards; the solution is then to settle for interdisciplinary journals, but depending on the subject matter these journals tend to be either extremely obscure or simply non-existent.

The right credentials and connections are sometimes practically necessary to get taken seriously, especially if you go as far left field as Manasson is going, and he obviously isn't in academia. Remember the case of Faraday, one of the greatest physicists ever, who was untrained in mathematics yet invented the field concept purely by intuition and experiment; today he would get rubbished by physicists to no end simply because he couldn't state what he was doing mathematically. Going through the ordeal of getting published therefore sometimes just isn't worth it; this is why we are extremely lucky that online preprint services like the arXiv exist.
 
  • #113
@Auto-Didact Thanks for such a substantial reply. Really.

Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one d, single quadratic maximum?

Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates there is periodicity (structure) to the mixture of order and chaos in non-linear maps that switch back and forth not just the rate of convergence (to chaos) of maps that... just converge to chaos?

[Edit] You know never mind. Those aren't very good questions. I just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real textbooks. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.
 
Last edited:
  • #114
Jimster41 said:
@Auto-Didact Thanks for such a substantial reply. Really.
My pleasure. I should say that during my physics undergraduate days, there were only three subjects I really fell in love with: Relativistic Electrodynamics, General Relativity and Nonlinear Dynamics. They required so little, yet produced so much; it is a real shame, in my opinion, that neither of the last two seems to be a standard part of the undergraduate physics curriculum (none of the other physics majors took them in my year, nor in the three subsequent years).

Each of these subjects simultaneously both deepened my understanding of physics and widened my view of (classical pure and modern applied) mathematics in ways that none of the other subjects in physics ever seemed to be capable of doing (in particular what neither QM nor particle physics were ever able to achieve for me aesthetically in the classical pure mathematics sense). It saddens me to no end that more physicists don't seem to have taken the subject of nonlinear dynamics in its full glory.
Jimster41 said:
Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one d, single quadratic maximum
To clarify once again: it doesn't just apply to iterative maps; it applies directly to systems of differential equations, i.e. to dynamical systems. Feigenbaum universality applies directly to the dynamics of any system of 3 or more coupled NDEs with any number of parameters.

The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters, but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well study the dynamical system directly.
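To make the universality concrete with a few lines of Python: taking the standard published period-doubling thresholds of the logistic map (my illustrative example; the thresholds are well-known numerical values, not computed here), the ratios of successive parameter intervals already approach Feigenbaum's ##\delta \approx 4.6692##.

```python
# standard published period-doubling thresholds r_n of the logistic map
# x -> r x (1 - x): r_1 = 3 is exact, r_2 = 1 + sqrt(6), the rest numerical
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# ratios of successive parameter intervals converge to Feigenbaum's delta
deltas = [(r[i + 1] - r[i]) / (r[i + 2] - r[i + 1]) for i in range(len(r) - 2)]
```

The point of universality is that the same limiting ratio shows up for any unimodal map with a quadratic maximum, and for the period-doubling cascades of continuous systems like the Rössler system below.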

In fact, you would need to be very lucky to find a nonlinear dynamical system (NDS) with only one parameter! I know of only one example of an NDS with a single nonlinearity, and even it has 3 parameters, namely the Rössler system:
##\dot x=-y-z##
##\dot y=x+ay##
##\dot z=b+z(x-c)##

In order to actually carry out the Lorenz map technique I described earlier on this system, we need to hold two of the three parameters ##a##, ##b## and ##c## numerically constant to even attempt an analysis! Knowing which ones to hold constant and which one to vary is an art learned by trial and error.
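As a concrete illustration of 'hold two parameters fixed, vary the third': a quick numerical integration of the Rössler system at the classic chaotic values ##a = b = 0.2##, ##c = 5.7## (standard textbook values, my choice), collecting the local maxima of ##x(t)## exactly as in the Lorenz map technique described earlier.

```python
import numpy as np

def rossler_rhs(s, a=0.2, b=0.2, c=5.7):
    # the Rossler system: a single nonlinearity (the z*x term), three parameters
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_trajectory(f, s0, dt, n):
    # plain fixed-step RK4 integration
    s = np.array(s0, dtype=float)
    traj = np.empty((n, 3))
    for i in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

# hold a and b fixed, vary only c; c = 5.7 is the classic chaotic regime
traj = rk4_trajectory(rossler_rhs, [1.0, 1.0, 0.0], dt=0.05, n=40000)
x = traj[10000:, 0]  # discard the transient

# successive local maxima of x(t), the raw material for a Lorenz-type map
x_max = x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
```

Sweeping ##c## upward from small values while keeping ##a = b = 0.2## fixed and redoing this analysis shows the number of distinct maxima doubling along the way, i.e. the period-doubling route into chaos.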

Analyzing any number of parameters simultaneously is beyond the capabilities of present-day mathematics, because it requires simultaneously varying, integrating and solving for several parameters; fully understanding turbulence, for example, requires exactly this. Such mathematics doesn't actually seem to exist yet; inventing it would lead directly to a proof of existence and uniqueness for the Navier-Stokes equations.

Luckily, we can vary each parameter independently while keeping the others fixed and there are even several powerful theorems which help us get around the practical limitations such as "the mathematics doesn't exist yet"; moreover, I'm optimistic that some kind of neural network might eventually actually be capable of doing this.
Jimster41 said:
Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates the periodicity of order and chaos in non-linear maps that switch back and forth not just the rate of convergence to chaos?
Yes, if by periodicity of order and chaos you mean how the system goes into and out of chaotic dynamics.
Jimster41 said:
Or at least that there is some geometry (logic) of the parameter space that controls the periodicity of switching...
Yes. For an iterative map, the straight line ##x_{n+1}=x_n## intersects the graph of the map; these intersections define fixed points and so induce a vector field on the line. Varying the parameter ##r## leads directly to the creation and annihilation of fixed points; these fixed points constitute the bifurcation diagram in the parameter space ##(r,x)##.
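A quick numerical check of this picture with the logistic map ##x_{n+1} = r x_n(1-x_n)## (my standard example): below ##r=3## the long-term orbit collapses onto the single stable fixed point ##x^* = 1 - 1/r##, while just above ##r=3## that fixed point has lost stability and a period-2 cycle has been created instead.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def long_term_orbit(r, x0=0.5, transient=1000, keep=64):
    # iterate past the transient, then record the distinct values the orbit visits
    x = x0
    for _ in range(transient):
        x = logistic(x, r)
    pts = set()
    for _ in range(keep):
        x = logistic(x, r)
        pts.add(round(x, 6))
    return sorted(pts)

# below r = 3 the orbit settles on the single stable fixed point x* = 1 - 1/r
pts_r25 = long_term_orbit(2.5)
# above r = 3 that fixed point has lost stability; a period-2 cycle appears
pts_r32 = long_term_orbit(3.2)
```

Sweeping ##r## over a fine grid and plotting `long_term_orbit(r)` against ##r## is precisely how the familiar bifurcation diagram in ##(r,x)## is generated.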

For the full continuous state space of the NDS, i.e. in the differential equations case, the periodicity is equal to the number of 'loops' in the attractor characterizing the NDS; if the loops keep doubling as parameters are varied, there will be chaos beyond some combination of parameters, i.e. an infinite number of loops, i.e. a fractal, i.e. a strange attractor.

This special combination of parameters is a nondimensionalisation of all relevant physical quantities; this is why all of this seems to be completely independent of any physics of the system. In other words, a mathematical scheme for going back from these dimensionless numbers to a complete description of the physics is "mathematics which doesn't exist yet".

The attractor itself is embedded within a topological manifold, i.e. a particular subset of the state space. All of this is completely clear visually by just looking at the attractors while varying parameters. This can all be naturally described using symplectic geometry.

To state things more bluntly, attractor analysis in nonlinear dynamics is a generalization of Hamiltonian dynamics by studying the evolution of Hamiltonian vector fields in phase space; the main difference being that the vector fields need not be conservative nor satisfy the Liouville theorem during time evolution.
Jimster41 said:
You know never mind.
Too late! I went to the movies (First Man) and didn't refresh the tab before I finished the post.
Jimster41 said:
Those aren't very good questions. I just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real textbooks. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.
Glad to hear that, I recommend Strogatz and the historical papers. To my other fellow physicists: I implore thee, take back what is rightfully yours from the mathematicians!
 
  • #115
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.

I read Sync. by Strogatz. Does he have others? It was quite good, fascinating. Though I wish he'd gone deeper into describing more of the math of the chase - sort of as you do above. IOW it was a bit pop. I bought and delved into Winfree's "Geometry of Biological Time" - an absolutely beautiful book. His 3D helix of fruit fly eclosion and the examples of sync and singularities he gives in the first few chapters are worth the price alone, but it becomes a real practitioner's bible pretty quickly.

The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems...". I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS. But back to the topic of this thread (maybe flipping its title while at the same time finding a lot of agreement in content): don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDSs exist in appearance only, from a distance, because iterated maps are fundamental...

I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise - he starts off with a (really prettily written) review of the philosophical history behind the particle/field duality; Theodosius, Democritus et al. I am taking my time and expecting a really nice ride. It looks painfully brief tho.

You ever heard of, read Nowak, "Evolutionary Dynamics"? It's one of those few Schroeder-like ones. And fascinating. After Rovelli's reminder of Einstein's important work re Brownian motion and the "Atomic Theory", I am wrestling with the question of whether Einstein's method isn't the same thing Nowak lays out in his chapter on evolutionary drift - which really took me some time to grok - blowing my mind as it did. I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking - using just an assertion of discrete iteration. Which made me sure I had misunderstood - since spontaneous symmetry breaking seems to require a lot more fuss than that.

Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...
 
Last edited:
  • #116
Quick reply, since I wasn't entirely satisfied with this either:
Auto-Didact said:
The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well just directly study the dynamical system.
I should clarify this: saying that the iterative map is "just a tool" is a very physics-oriented way of looking at things, but it is essential (also partially because of the possibility of carrying out experiments) to be able to look at it this way; physicists trump mathematicians in being capable of doing this.

The first point is that iterative maps, being discrete, allow relations which aren't proper functions: a single input ##x## can yield several (even infinitely many) outputs ##y##; this violates uniqueness and therefore makes doing calculus impossible.

The second point is that there are several kinds of prototypical iterative mapping techniques which to the physicist are literally tools, in the same sense that e.g. the small-angle approximation and perturbation theory are merely tools. These prototypical iterative mapping techniques are:
- the Lorenz map, constructable using only one input variable, as I described before;
- the Poincaré map, a section through the attractor which maps input points (i.e. the flow on a loop) ##x_n## within this section to the subsequent points ##x_{n+1}## which pass through the same section;
- the Hénon map, which unlike the other two is literally just a discrete analogue of an NDS, consisting of two coupled difference equations with two parameters; in contrast to the continuous case, attractors in this map can already display chaos in a merely two-dimensional state space.
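A minimal sketch of the Hénon map at the classic parameter values ##a = 1.4##, ##b = 0.3## (standard values, my choice), showing both behaviours: an orbit started inside the basin of attraction stays on the bounded strange attractor, while one started outside escapes to infinity within a few iterations.

```python
def henon(x, y, a=1.4, b=0.3):
    # two coupled difference equations, two parameters
    return 1.0 - a * x * x + y, b * x

# an orbit started inside the basin of attraction stays on the bounded attractor
x, y = 0.0, 0.0
orbit = []
for _ in range(10000):
    x, y = henon(x, y)
    orbit.append(x)

# an orbit started outside the basin escapes to infinity within a few steps
u, v = 2.0, 2.0
steps_to_escape = 0
while abs(u) < 1e6 and steps_to_escape < 50:
    u, v = henon(u, v)
    steps_to_escape += 1
```

Plotting the `(x, y)` pairs of the bounded orbit reproduces the familiar boomerang-shaped Hénon attractor; the escaping orbit is the edge behaviour discussed further below.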

For completeness, in order to understand the numerical parameters themselves better from a physics perspective, check out this post. I'll fully read and reply to the rest of your post later.
 
  • #117
Jimster41 said:
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.
No problem.
Jimster41 said:
I read Sync. by Strogatz. Does he have others? It was quite good, fascinating. Though I wish he'd gone deeper into describing more of the math of the chase - sort of as you do above. IOW it was a bit pop. I bought and delved into Winfree's "Geometry of Biological Time" - an absolutely beautiful book. His 3D helix of fruit fly eclosion and the examples of sync and singularities he gives in the first few chapters are worth the price alone, but it becomes a real practitioner's bible pretty quickly.
Strogatz' masterpiece is his textbook on nonlinear dynamics and chaos theory. Coincidentally, Winfree's book was put on my to read list after I read Sync a few years ago; the problem is my list is ever expanding, but I'll move it up a bit since you say it's more than pop.
Jimster41 said:
The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems..." I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS.
In my previous post I addressed how some maps (like the Lorenz and Poincaré maps) are 'just tools', in the same way that perturbation theory is merely a tool. I'll add to that the observation that the attractors of some simplified, discretized versions of continuous NDSs (like the two-dimensional Hénon map) can have problems at the edges of the attractor, with values escaping to infinity; in proper attractors, i.e. in the continuous case with three or more dimensions, such problems do not occur, which shows that the discretized, reduced versions are nothing but idealized approximations in some limit.
Jimster41 said:
But back to the topic of this thread (maybe flipping its title while at the same time finding a lot of agreement in content): don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDSs exist in appearance only, from a distance, because iterated maps are fundamental...
Perhaps, but unlikely, since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting-edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally, Strogatz, his former student Watts, and a guy named Barabási are pioneers of this new field. For a textbook on the subject, search for "Network Science" by Barabási.
Jimster41 said:
I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise - he starts of with a (really prettily written) review of the the philosophical history behind the particle/field duality; Theodosius, Democritus et. al. I am taking my time and expecting a really nice ride. It looks painfully brief tho.
I read it awhile ago, back to back with some of his other works, see here.
Jimster41 said:
You ever heard of, read Nowak, "Evolutionary Dynamics". It's one of those few Shcroeder-like ones. And fascinating. After Rovelli's reminder on Einstien's important work re Brownian motion and the "Atomic Theory" I am wrestling with the question of whether Einstien's method isn't the same thing Nowak lays out in his chapter on evolutionary drift - which really took me some time to grok - blowing my mind as it did.
I'll put it on the list.
Jimster41 said:
I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking - using just an assertion of discrete iteration. Which made me sure I had misunderstood - since spontaneous symmetry breaking seems to require a lot more fuss than that.
In my opinion, all the fuss behind spontaneous symmetry breaking is actually far less deep than what is conventionally conveyed by particle physicists, but my point of view is clearly an unconventional one among physicists because I think QT is not fundamental i.e. that the presumed fundamentality of operator algebra and group theory in physics is a hopelessly misguided misconception.
Jimster41 said:
Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...
It wasn't bad, but I was expecting more; I actually saw 'Bohemian Rhapsody' the same day. They are both dramatized biography films, with clearly different subjects, but if I had to recommend one, especially if you are going with others, I'd say go watch Bohemian Rhapsody instead of First Man.
 
  • #118
Auto-Didact said:
Perhaps, but unlikely, since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting-edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally, Strogatz, his former student Watts, and a guy named Barabási are pioneers of this new field. For a textbook on the subject, search for "Network Science" by Barabási.

Well, I hadn't considered the difference, to be honest, and in hindsight I can see why it's important to distinguish...
But I'm really going to have a think on just what the distinction implies. It sharpens my confusion with respect to how a continuous support can spontaneously generate discrete stuff, versus the seemingly intuitive reverse - where discrete stuff creates an illusion of continuity.

The book you mention looks right on target...

I assume you knew his site existed (an on-line version of the book). I just found it but I'm a bit afraid to post the link here. I think I will have to own the actual book tho...

I am also really looking forward to Bohemian Rhapsody.
 
  • #119
Okay I meant to come back to this. As I said I agree with you in the main. It's more I'm just not sure what you're actually disagreeing with and I think you're being very dismissive of a field without providing much reason.

Auto-Didact said:
It's more important than you realize, as it makes or breaks everything even given the truth of the 5 other assumptions you are referring to. If, for example, unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
Which no-go theorems? Not PBR, not Bell's, not the Kochen-Specker, not Hardy's baggage theorem, not the absence of maximally epistemic theories. What are these many theorems?

Auto-Didact said:
Bell's theorem for example would survive, because it doesn't make the same assumptions/'mistakes' some of the other do.
Most of the major no-go theorems take place in the same framework as Bell's theorem, e.g. Kochen-Specker, Hardy. What's an example of one that could fail while Bell's would still stand?

Auto-Didact said:
I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property
No, it mightn't be, but nobody is saying it is. It rather highlights an interesting possibility: that you might need a non-measurable space, and those are never really looked at.

Auto-Didact said:
Yes, definitely.
Sorry, but do you really think most of the no-go theorems are nonsense that's as useful as saying "physics uses numbers"? The PBR theorem, the Pusey-Leifer theorem, etc. are just contentless garbage? If not, could you tell me which are?

I still don't think taking the state space to be "at least measurable" is devoid of content and as meaningful as saying "physics uses numbers". It's setting out what models are considered. In fact I would say it strengthens the theorems considering how weak an assumption it is.

Also I still don't understand how it is necessarily epistemic. A measurable space might be put to an epistemic use, but I don't see how it is intrinsically so.

Auto-Didact said:
A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
So your main objection to the framework is that it might unfairly eliminate a model in the early stages of development? In other words, an earlier simpler version of an idea might have some interesting insights, but it's early form, being susceptible to the no-go theorems, might be unfairly dismissed without being given time to advance to a form that doesn't and might help us understand/supersede QM?
 
  • #120
This is an intriguing proposition. As noted, self-organizing dynamics occur on a myriad of scales, are robust and have an extensive mathematical basis. Speaking with a very superficial understanding, it feels organic rather than mechanistic and potentially rooted in a new foundational paradigm. Having just read something about Bohmian mechanics it feels like the two might go together.
 
