Quantization isn't fundamental

Paul Colby

Gold Member
The research program should be initially limited to 10 years; if no empirical results are reached in 5 years, the budget should be halved. Another 5 years without anything but mathematical discoveries and it should be abandoned.
Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell. String theory will go away if we run out of string theorists. I always had a soft spot for Chew's bootstrap program: everything from unitarity and analyticity. The only problem is, it's an incomplete idea. Supersymmetry doesn't work, not because it's not a great thought, but because nature doesn't work that way as far as I can tell. One reason to persist in my questions is to see if there is anything to work with here. I don't see it. No shame in that and no problem either. Carry on.
 
Auto-Didact
Well, things don't work that way and I'm kind of glad they don't. The literature is littered with less than successful ideas and programs people push and try to sell.
You're more lenient than I am; perhaps 'export to the mathematics department' is the correct euphemism.
There are other sciences that actually do work more or less in the way that I describe. There are literally mountains of empirical data on things like this. Such strategies of course have pros and cons:

Pros:
- Discourages adherents from remaining loyal to any one framework/theory
- Makes everyone involved in the field at least somewhat familiar with all current frameworks
- Increases marginal innovation rate due to luck by constantly exposing all aspects of a framework to a huge diversity of specialized views and methodologies
- Increases the likelihood of discoveries contingent upon the smooth operation of this system, i.e. "teamwork"

Cons:
- Time consuming in comparison with the current system
- Slow-down of particular projects, speed-up of others
- Less freedom to work on what you want just because you want to work on that
- Teamwork can lead to increased human error through miscommunication, frustration, misunderstanding, etc., especially if one or more parties do not want to work together

Despite the cons, I think it may be a good idea to try to implement this strategy in the practice of theoretical physics. I will illustrate this by way of an example:

I said earlier (in route 1) that precise time measurements of extremely high-field electrodynamics are necessary, while I, having never worked in that field, know next to nothing about doing such measurements, nor about the state of the art of such measurements; there are two choices: carry out this part of the research myself, or consult/defer this part of the research to another person.

If I "don't want to share the credit" I'll do it myself, with the danger that I'll continuously be adding more work for myself, certainly if I have to learn some new mathematics along the way. On the other hand, it is almost guaranteed that there are other theorists who already have some experience in that field and/or are in direct contact with those who do.

A strategy like the one I described would make such a meeting not accidental but a mandatory next step in the scientific process. This means theorists would think twice before writing papers making big claims, because every such claim would have to be chased down immediately. This would probably lead to a new performance index: not just a citation count, but also a 'boy who cried wolf' count.
 

DarMM

Science Advisor
Gold Member
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.

Apologies for getting heated in the previous post, I was mischaracterising you.
 
Auto-Didact
@Auto-Didact , I see your points now and I think we are in agreement. I'm restricted in my ability to reply for the next few days, but I think we're on the same lines just using different terminology. I'll write a longer post when I'm free.
:)
Apologies for getting heated in the previous post, I was mischaracterising you.
No damage done, to be fair I have probably done some mischaracterization along the way as well.
 
Auto-Didact
Update:
Manasson has two new papers, expanding upon the ideas of his original 2008 preprint "Are Particles Self-Organized Systems?" which has been discussed so far in this thread.

The first new paper was published in a Chinese journal in July 2017; it is titled:
An Emergence of a Quantum World in a Self-Organized Vacuum—A Possible Scenario

The second new paper is a preprint from October 2018; it is titled:
Emergence of Standard-Model-Like Fields and Particles in Gravitation-Diffusion Vacuum
 
Auto-Didact
@Paul Colby the dynamics of the underlying system, i.e. the vacuum, is described in a bit more detail in Manasson's 2017 paper linked above. I haven't read the 2018 paper yet.

There happens to be another version of QED called Stochastic Electrodynamics (SED), which is based on de Broglie-Bohm theory; SED incorporates the ground state of the EM vacuum as the pilot wave. SED is an explicitly non-local hidden-variables theory, and particles immersed in this vacuum display highly nonlinear behavior.

The SED approach on the face of it sounds very similar to what Manasson has described in his 2017 paper linked above; this might actually represent a direct route to what you asked here:
So, it should be fairly straight forward to reproduce the observed energy levels of a hydrogen atom. Please include hyperfine splitting and the Lamb shift in the analysis. How would such a calculation proceed?
 

Paul Colby

Gold Member
@Auto-Didact Well, honest opinion: what I see of the 2017 paper so far is disappointing. Reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." Can't help but feel the only thing appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.
 
Auto-Didact
@Auto-Didact Well, honest opinion: what I see of the 2017 paper so far is disappointing. Reads like numerology, where each calculation seems independent of the previous one and finely crafted to "work." Can't help but feel the only thing appearing out of the vacuum are the paper's equations. Just my opinion and off-the-cuff impression.
I haven't finished reading it, but I agree. His 2008 paper is of higher quality, in my opinion.

That said, the 2017 paper, just like the earlier one, seems to naturally construct several important concepts, both Fermi-Dirac and Bose-Einstein statistics, without even assuming the existence of identical particles, seemingly out of thin air. The whole treatment in section 3.1 reeks of an extension of the Kuramoto model playing a role here; if this is true, it alone would already make the entire thing worthwhile in terms of mathematics.

For now, I want to end on something that Feynman said about the art of doing theoretical physics:
Feynman said:
One of the most important things in this ‘guess - compute consequences - compare with experiment’ business is to know when you are right. It is possible to know when you are right way ahead of checking all the consequences. You can recognize truth by its beauty and simplicity. It is always easy when you have made a guess, and done two or three little calculations to make sure that it is not obviously wrong, to know that it is right. When you get it right, it is obvious that it is right - at least if you have any experience - because usually what happens is that more comes out than goes in. Your guess is, in fact, that something is very simple. If you cannot see immediately that it is wrong, and it is simpler than it was before, then it is right.

The inexperienced, and crackpots, and people like that, make guesses that are simple, but you can immediately see that they are wrong, so that does not count. Others, the inexperienced students, make guesses that are very complicated, and it sort of looks as if it is all right, but I know it is not true because the truth always turns out to be simpler than you thought. What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting. And in that disagreement it must agree with nature.

If you can find any other view of the world which agrees over the entire range where things have already been observed, but disagrees somewhere else, you have made a great discovery. It is very nearly impossible, but not quite, to find any theory which agrees with experiments over the entire range in which all theories have been checked, and yet gives different consequences in some other range, even a theory whose different consequences do not turn out to agree with nature. A new idea is extremely difficult to think of. It takes a fantastic imagination.
 

Jimster41

Gold Member
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing the evolution of space-time geometry, especially an n-dimensional one inhabiting an (n+1)-dimensional space (the thread/paper I referenced above)?

Both seem to be saying that non-linearity is the hallmark and basically identical to "discrete", though there must be some coherent support (i.e. differentiable-manifold-like) to support the non-linear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...
 

Jimster41

Gold Member
I think his stuff is pretty interesting. It reminds me a lot of Winfree with his tori. I get that it's out there, but why no peer review, even if said review was very critical?

[edit] I see he refs Strogatz.
 

Paul Colby

Gold Member
I get that it's out there, but why no peer review, even if said review was very critical?
IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.
 
Auto-Didact
In the later paper I like how he invokes continuity but then pretty much immediately jumps to an "iterated map" approach to get to some notion of cellular evolution.

What's the difference between that and a causal lattice representing the evolution of space-time geometry, especially an n-dimensional one inhabiting an (n+1)-dimensional space (the thread/paper I referenced above)?
There is a huge difference: lattice models are simplified (often regular) discretizations of continuous spaces which are exactly solvable, making approximation schemes such as perturbation theory superfluous (NB: Heisenberg incidentally wrote a very good piece about this very topic in Physics Today 1967). In other words, lattice models are simplifications that help to solve a small subset of the full nonlinear problem based on certain 'nice' properties of the problem such as symmetry, periodicity, isotropy, etc.

On the other hand, iterative maps (also known as recurrence relations) are simply discrete differential equations, i.e. difference equations. Things that can be immensely difficult to work out analytically for nonlinear differential equations can sometimes become trivially easy for difference equations; the results of this discrete analysis can then be directly compared to the numerical analysis of the continuous case carried out by a computer. The generalisation of this discrete analysis to the full continuous case can then often be made using several techniques and theorems. In other words, the entire nonlinear problem can actually get solved by cleverly utilizing numerical techniques, computers and mathematics.
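To show just how easy the discrete side is, here is a minimal sketch (my own toy code, with an illustrative parameter choice) iterating the discrete logistic equation, an Euler-style discretization of the logistic ODE ##\dot x = rx(1-x)##; the continuous version would need a numerical integrator, the difference equation needs a for-loop:

```python
# Illustrative sketch: the discrete logistic equation
# x_{n+1} = x_n + r*x_n*(1 - x_n) is iterated in a few lines,
# while the continuous logistic ODE dx/dt = r*x*(1 - x) would
# require a numerical integrator.

def iterate(f, x0, n):
    """Return the orbit [x0, f(x0), f(f(x0)), ...] of length n + 1."""
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))
    return xs

r = 0.5  # illustrative growth rate
xs = iterate(lambda x: x + r * x * (1 - x), 0.01, 50)
print(xs[-1])  # the orbit converges to the stable fixed point x = 1
```

The fixed-point structure (here, stable convergence to ##x=1##) drops straight out of the iteration, which is exactly the kind of result that takes real work in the continuous setting.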
Both seem to be saying that non-linearity is the hallmark and basically identical to "discrete", though there must be some coherent support (i.e. differentiable-manifold-like) to support the non-linear dynamics.

I mean you could put the label "self-gravitation vs. self-diffusion?" on the edge between two lattice nodes...
You misunderstand it. I will let you in on the best-kept secret in nonlinear dynamics, which seems to make most physicists uncomfortable: Feigenbaum universality, when applicable, can predict almost everything about the extremely complicated physics of a system without knowing almost anything about the physics of that system, or indeed anything about physics whatsoever; even worse, this can be carried out almost entirely using mostly high-school-level mathematics.

I will give you an example to make things more clear: iterative maps can be used to carry out stability analysis of the fixed points and so describe the dynamics of a system. There are multiple theorems which show that all unimodal maps (such as a negative parabola or even a ##\Lambda## shape) have qualitatively identical dynamics and quantitatively almost the same dynamics (up to numerical factors and renormalization).

Importantly, all unimodal maps follow the same period-doubling route to chaos, and the Feigenbaum constant ##\delta \approx 4.669## is the universal mathematical constant characterizing this, much as ##\pi## characterizes circularity. It cannot be stressed enough that ##\delta## naturally appears in all kinds of systems, putting it on the same level of importance in mathematics as ##\pi##, ##e## and ##i##.
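As a concrete illustration of the period-doubling cascade (my own toy code, using the logistic map ##x \mapsto rx(1-x)## as the canonical unimodal example), the following sketch detects the period of the attractor as the parameter ##r## is raised through the first few bifurcations; the parameter spacings between successive doublings shrink at a rate approaching ##\delta##:

```python
# Toy illustration of period doubling in the logistic map x -> r*x*(1-x):
# the attractor's period doubles (1, 2, 4, 8, ...) as r increases, with
# bifurcation values accumulating geometrically at the Feigenbaum rate
# delta ~ 4.669.

def attractor_period(r, x0=0.5, transient=100_000, max_period=64, tol=1e-6):
    """Iterate past the transient, then return the smallest p with
    |x_{n+p} - x_n| < tol, or None if no period <= max_period is found."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = [x]
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None  # chaotic, or period longer than max_period

for r in (2.8, 3.2, 3.5, 3.55):
    print(r, attractor_period(r))  # periods 1, 2, 4, 8
```

Note that nothing in this code knows any physics; the same period-doubling sequence, with the same ##\delta##, shows up in any unimodal map.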

Now the thing to realize is that period doubling bifurcations do not only occur in discrete systems; they can also occur in continuous systems. The only criteria such continuous systems need to satisfy are:
  1. be at least three-dimensional (by the existence and uniqueness theorem trajectories cannot cross, so the Poincaré-Bendixson theorem rules out chaos in two dimensions), i.e. at least three coupled ordinary differential equations (ODEs)
  2. have a nonlinearity in at least one of these ODEs
  3. have a tunable parameter in at least one of these ODEs.
Given that the above criteria hold, one can numerically integrate the system in time and then use the Lorenz map technique to construct a discrete recurrence map from the successive local maxima of one of the variables.

This is where the miracle occurs: if the resulting Lorenz map of the continuous system is unimodal for a given parameter, then the continuous system will display period doubling. The map doesn't even have to be approximable by a proper function, i.e. uniqueness isn't required!

Incidentally, this unimodal Lorenz map miracle as I have described it only directly applies for any strange attractor with fractal dimension close to 2 and Lorenz map dimension close to 1. It can be generalized, but that requires more experience and a little bit more sophisticated mathematics.
IMO, because these papers are not even wrong. If one started with a complete identifiable system, like a classical field theory for instance, and systematically extracted results, a reviewable paper would result even if the results themselves were wrong. A development that begins with "imagine a charge fluctuation" isn't a development. Just my 2 cents.
That's too harsh, and it doesn't adequately describe our modern world of scientific superspecialization, especially from the point of view of interdisciplinary researchers. There are today many other factors which can prevent a publication from happening. For example, papers by applied mathematicians often get refused by physics journals and vice versa due to differing standards; the solution is then to settle for interdisciplinary journals, but depending on the subject matter these journals tend to be either extremely obscure or simply non-existent.

The right credentials and connections are sometimes practically necessary to get taken seriously, especially if you go as far left field as Manasson is going, and he obviously isn't in academia. Remember the case of Faraday, one of the greatest physicists ever, who was untrained in mathematics yet invented the field concept purely by intuition and experiment; today he would get rubbished by physicists to no end simply because he couldn't state what he was doing mathematically. Going through the process of getting published therefore sometimes just isn't worth the trouble; this is why we are extremely lucky that online preprint services like the arXiv exist.
 

Jimster41

Gold Member
@Auto-Didact Thanks for such a substantial reply. Really.

Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one d, single quadratic maximum?

Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates there is periodicity (structure) to the mixture of order and chaos in non-linear maps that switch back and forth not just the rate of convergence (to chaos) of maps that... just converge to chaos?

[Edit] You know never mind. Those aren't very good questions. I Just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real text books. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.
 
Auto-Didact
@Auto-Didact Thanks for such a substantial reply. Really.
My pleasure. I should say that during my physics undergraduate days there were only three subjects I really fell in love with: Relativistic Electrodynamics, General Relativity and Nonlinear Dynamics. They required so little, yet produced so much; it is a real shame in my opinion that neither of the last two seems to be a standard part of the undergrad physics curriculum (none of the other physics majors took them in my year, nor in the three subsequent years).

Each of these subjects simultaneously both deepened my understanding of physics and widened my view of (classical pure and modern applied) mathematics in ways that none of the other subjects in physics ever seemed to be capable of doing (in particular what neither QM nor particle physics were ever able to achieve for me aesthetically in the classical pure mathematics sense). It saddens me to no end that more physicists don't seem to have taken the subject of nonlinear dynamics in its full glory.
Is there a notion of Feigenbaum Universality associated with multi-parameter iterated maps? Or does his proof fall apart for cases other than the one d, single quadratic maximum
To once again clarify: it doesn't just apply to iterative maps; it directly applies to systems of differential equations, i.e. to dynamical systems. Feigenbaum universality directly applies to the dynamics of any system of 3 or more coupled nonlinear differential equations with any number of parameters.

The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well just directly study the dynamical system.

In fact, you would need to be very lucky to find a nonlinear dynamical system (NDS) which has only one parameter! I know of only one example of an NDS with a single nonlinearity, yet even it has 3 parameters, namely the Rössler system:
##\dot x=-y-z##
##\dot y=x+ay##
##\dot z=b+z(x-c)##

In order to actually carry out the Lorenz map technique I described earlier on this system, we need to hold two of the 3 parameters ##a##, ##b## and ##c## numerically constant to even attempt an analysis! Knowing which ones need to be held constant and which one needs to be varied is an art that you learn by trial and error.
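Here is a sketch of that procedure on the Rössler system, with ##a## and ##b## held fixed at an illustrative chaotic parameter choice of my own (##a=b=0.2##, ##c=5.7##, not taken from any of the papers discussed here): integrate the ODEs, discard the transient, collect successive local maxima of ##x(t)##, and pair them up to form the Lorenz map (assumes NumPy and SciPy are available):

```python
# Lorenz-map sketch for the Rössler system at an illustrative chaotic
# parameter choice (a = b = 0.2, c = 5.7): integrate the ODEs, drop the
# transient, collect successive local maxima of x(t), and pair them up
# as (max_n, max_{n+1}); plotted, the pairs trace out a nearly
# one-dimensional unimodal curve.

import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, u, a=0.2, b=0.2, c=5.7):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

sol = solve_ivp(rossler, (0, 500), [1.0, 1.0, 0.0], max_step=0.01)
x = sol.y[0][sol.t > 100]  # discard the transient before t = 100
# interior local maxima of the densely sampled x(t)
mid = x[1:-1]
peaks = mid[(mid > x[:-2]) & (mid > x[2:])]
pairs = np.column_stack([peaks[:-1], peaks[1:]])  # the Lorenz map points
print(len(peaks))
```

If the scatter of `pairs` is unimodal for the chosen parameters, the argument above says the continuous system period-doubles; varying ##c## while keeping ##a## and ##b## fixed then sweeps the system through the cascade.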

To analyze any number of parameters simultaneously is beyond the capabilities of present-day mathematics, because it requires simultaneously varying, integrating and solving for several parameters; fully understanding turbulence, for example, requires this. This kind of mathematics doesn't actually seem to exist yet; inventing such mathematics could directly lead to a proof of existence and uniqueness for the Navier-Stokes equations.

Luckily, we can vary each parameter independently while keeping the others fixed, and there are even several powerful theorems which help us get around practical limitations such as "the mathematics doesn't exist yet"; moreover, I'm optimistic that some kind of neural network might eventually be capable of doing this.
Maybe another way of asking the same question, do I understand correctly that Feigenbaum Universality dictates the periodicity of order and chaos in non-linear maps that switch back and forth not just the rate of convergence to chaos?
Yes, if by periodicity of order and chaos you mean how the system goes into and out of chaotic dynamics.
Or at least that there is some geometry (logic) of the parameter space that controls the periodicity of switching...
Yes; for an iterative map, the straight line ##x_{n+1}=x_n## intersects the graph of the iterative map; these intersections define fixed points and so induce a vector field on this line. Varying the parameter ##r## directly leads to the creation and annihilation of fixed points; these fixed points constitute the bifurcation diagram in the parameter space ##(r,x)##.
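A minimal sketch of this fixed-point analysis for the logistic map ##f(x)=rx(1-x)## (my own illustrative code): the fixed points solve ##f(x)=x##, giving ##x=0## and ##x=1-1/r##, and a fixed point is stable when ##|f'(x)|<1## with ##f'(x)=r(1-2x)##:

```python
# Fixed points of the logistic map f(x) = r*x*(1-x) and their stability:
# fixed points solve f(x) = x, i.e. x = 0 and x = 1 - 1/r, and a fixed
# point x* is stable when |f'(x*)| < 1, with f'(x) = r*(1 - 2*x).

def logistic_fixed_points(r):
    """Return [(x*, stable?)] for the fixed points x = 0 and x = 1 - 1/r."""
    points = [0.0]
    if r != 0:
        points.append(1.0 - 1.0 / r)
    return [(x, abs(r * (1.0 - 2.0 * x)) < 1.0) for x in points]

for r in (0.5, 2.0, 3.2):
    print(r, logistic_fixed_points(r))
```

At ##r=3.2## the nonzero fixed point still exists but has lost stability (##|f'|=1.2>1##): the period-2 orbit has taken over, which is exactly the first branching in the bifurcation diagram.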

For the full continuous state space of the NDS, i.e. in the differential equations case, the periodicity is equal to the number of 'loops' in the attractor characterizing the NDS; if the loops keep doubling as parameters are varied, there will be chaos beyond some combination of parameters, i.e. an infinite number of loops, i.e. a fractal, i.e. a strange attractor.

This special combination of parameters is a nondimensionalisation of all relevant physical quantities; this is why all of this seems to be completely independent of any physics of the system. In other words, a mathematical scheme for going back from these dimensionless numbers to a complete description of the physics is "mathematics which doesn't exist yet".

The attractor itself is embedded within a topological manifold, i.e. a particular subset of the state space. All of this is completely clear visually by just looking at the attractors while varying parameters. This can all be naturally described using symplectic geometry.

To state things more bluntly, attractor analysis in nonlinear dynamics is a generalization of Hamiltonian dynamics, studying the evolution of vector fields in phase space; the main difference is that the vector fields need not be conservative nor satisfy Liouville's theorem during time evolution.
You know never mind.
Too late! I went to the movies (First Man) and didn't refresh the tab before I finished the post.
Those aren't very good questions. I Just spent some more time on the wiki chaos pages. I need to find another book (besides Schroeder's) on chaotic systems. Most are either silly or real text books. Schroeder's was something rare... in between. I'd like to understand the topic of non-linear dynamics, chaos, fractals, mo' better.
Glad to hear that, I recommend Strogatz and the historical papers. To my other fellow physicists: I implore thee, take back what is rightfully yours from the mathematicians!
 

Jimster41

Gold Member
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.

I read Sync by Strogatz. Does he have others? It was quite good, fascinating, though I wish he'd gone deeper into describing more of the math of the chase, sort of as you do above. IOW it was a bit pop. I bought and delved into Winfree's "The Geometry of Biological Time", an absolutely beautiful book. His 3D helix of fruit-fly eclosion and the examples of sync and singularities he gives in the first few chapters are worth the price alone, but it becomes a real practitioner's bible pretty quickly.

The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems...". I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS. But back to the topic of this thread (maybe flipping its title while at the same time finding a lot of agreement in content): don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDSs exist in appearance only, from a distance, because iterated maps are fundamental...

I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise: he starts off with a (really prettily written) review of the philosophical history behind the particle/field duality; Theodosius, Democritus et al. I am taking my time and expecting a really nice ride. It looks painfully brief though.

Have you ever heard of, or read, Nowak's "Evolutionary Dynamics"? It's one of those few Schroeder-like ones. And fascinating. After Rovelli's reminder of Einstein's important work on Brownian motion and the "Atomic Theory", I am wrestling with the question of whether Einstein's method isn't the same thing Nowak lays out in his chapter on evolutionary drift, which really took me some time to grok, blowing my mind as it did. I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking using just an assertion of discrete iteration. Which made me sure I had misunderstood, since spontaneous symmetry breaking seems to require a lot more fuss than that.

Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...
 
Auto-Didact
Quick reply, since I wasn't entirely satisfied with this either:
The iterative map is just a tool to study the dynamical system, by studying a section of that system: you could use more parameters but one parameter is all one actually needs, so why bother? Once you start using more than one, you might as well just directly study the dynamical system.
I should clarify this; saying that the iterative map is "just a tool" is a very physics-oriented way of looking at things, but it is essential (partly because of the possibility of carrying out experiments) to be able to look at it in this way; physicists trump mathematicians in being capable of doing this.

The first point is that iterative maps, being discrete, allow relations which aren't even proper functions, i.e. for a single input ##x## you can get several (even an infinite number of) outputs ##y##; this violates uniqueness and therefore makes doing calculus impossible.

The second point is that there are several kinds of prototypical iterative mapping techniques which, to the physicist, are literally tools, in the same sense that e.g. the small-angle approximation and perturbation theory are merely tools. These prototypical iterative mapping techniques are:
- the Lorenz map, constructable using only one input variable as I described before.
- the Poincaré map, which is a section through the attractor which maps input points (i.e. the flow on a loop) ##x_n## within this section to subsequent input points ##x_{n+1}## which pass through this same section.
- the Hénon map, which, unlike the other two, is literally just a discrete analog of an NDS, consisting of two coupled difference equations with two parameters; in contrast to the continuous case, attractors in this map can already display chaos in just a two-dimensional state space.
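For concreteness, here is the Hénon map iterated at its classic parameter values ##a=1.4##, ##b=0.3## (an illustrative sketch; from the origin the orbit quickly settles onto the strange attractor, which is bounded even though the dynamics on it is chaotic):

```python
# The Hénon map at the classic parameters a = 1.4, b = 0.3, iterated
# from the origin; the orbit settles onto the bounded strange attractor
# (roughly |x| < 1.3, |y| < 0.4) despite the chaotic dynamics.

def henon_orbit(n, a=1.4, b=0.3, x=0.0, y=0.0):
    """Return n iterates of the Hénon map starting from (x, y)."""
    orbit = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        orbit.append((x, y))
    return orbit

orbit = henon_orbit(10_000)
print(max(abs(x) for x, y in orbit))  # the orbit stays bounded
```

Scatter-plotting `orbit` makes the fractal banded structure of the attractor visible; two difference equations are already enough, whereas the continuous case needs three dimensions.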

For completeness, in order to understand the numerical parameters themselves better from a physics perspective, check out this post. I'll fully read and reply to the rest of your post later.
 
Auto-Didact
@Auto-Didact Once again, Thanks. The fact you could understand and answer my questions so clearly means a lot to me. Very encouraging.
No problem.
I read Sync by Strogatz. Does he have others? It was quite good, fascinating, though I wish he'd gone deeper into describing more of the math of the chase, sort of as you do above. IOW it was a bit pop. I bought and delved into Winfree's "The Geometry of Biological Time", an absolutely beautiful book. His 3D helix of fruit-fly eclosion and the examples of sync and singularities he gives in the first few chapters are worth the price alone, but it becomes a real practitioner's bible pretty quickly.
Strogatz' masterpiece is his textbook on nonlinear dynamics and chaos theory. Coincidentally, Winfree's book was put on my to read list after I read Sync a few years ago; the problem is my list is ever expanding, but I'll move it up a bit since you say it's more than pop.
The only part of your reply above that makes my knee jerk is the statement "iterated maps are just a tool to study dynamical systems...". I get that is the context in which the math was invented, the bauble of value supposedly being the continuous NDS.
In my previous post I addressed how some maps (like the Lorenz and Poincaré maps) are 'just tools', in the same way that perturbation theory is merely a tool. I'll add to that the observation that the attractors of some genuinely simplified, discretized versions of continuous NDSs (like the two-dimensional Hénon map) can have problems at the edges of the attractor, with values going off to infinity; in proper attractors, i.e. in the continuous case with three or more dimensions, such problems do not occur, which shows that the discretized, reduced versions are nothing but idealized approximations in some limit.
But back to the topic of this thread (maybe flipping its title while at the same time finding a lot of agreement in content): don't discrete lattice, triangulation and causal loop models of space-time imply, perhaps, that continuous NDSs exist in appearance only, from a distance, because iterated maps are fundamental...
Perhaps, but unlikely since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally Strogatz, his former student Watts and a guy named Barabasi are pioneers in this new field. For a textbook on this subject, search for "Network Science" by Barabasi.
I just started Rovelli's book "Reality Is Not What It Seems". Word to the wise: he starts off with a (really prettily written) review of the philosophical history behind the particle/field duality; Theodosius, Democritus et al. I am taking my time and expecting a really nice ride. It looks painfully brief though.
I read it awhile ago, back to back with some of his other works, see here.
Have you ever heard of, or read, Nowak's "Evolutionary Dynamics"? It's one of those few Schroeder-like ones. And fascinating. After Rovelli's reminder of Einstein's important work on Brownian motion and the "Atomic Theory", I am wrestling with the question of whether Einstein's method isn't the same thing Nowak lays out in his chapter on evolutionary drift, which really took me some time to grok, blowing my mind as it did.
I'll put it on the list.
I stopped reading that book halfway through partly because that chapter seemed to me to describe spontaneous symmetry breaking using just an assertion of discrete iteration. Which made me sure I had misunderstood, since spontaneous symmetry breaking seems to require a lot more fuss than that.
In my opinion, all the fuss about spontaneous symmetry breaking is actually far less deep than what particle physicists conventionally convey, but my point of view is clearly an unconventional one among physicists, because I think QT is not fundamental, i.e. that the presumed fundamentality of operator algebras and group theory in physics is a hopelessly misguided misconception.
Looking forward to "First Man" though I just don't think it's fair that Ryan Gosling gets to play "Officer K" and "Neil Armstrong". That's just too much cool...
It wasn't bad, but I was expecting more; I actually saw 'Bohemian Rhapsody' the same day. They are both dramatized biography films, with clearly different subjects, but if I had to recommend one, especially if you are going with others, I'd say go watch Bohemian Rhapsody instead of First Man.
 

Jimster41

Gold Member
738
79
Perhaps, but unlikely since those are all discrete models of spacetime, not of state space. Having said that, discrete state space is a largely unexplored topic at the cutting edge intersection of NLD, statistical mechanics and network theory, called 'dynamical networks' or more broadly 'network science'; incidentally Strogatz, his former student Watts and a guy named Barabasi are pioneers in this new field. For a textbook on this subject, search for "Network Science" by Barabasi.
Well, I hadn't considered the difference to be honest and in hindsight I can see why it's important to distinguish...
But I'm really going to have a think, I think, on just what the distinction implies. It sharpens my confusion with respect to how a continuous support can spontaneously generate discrete stuff, versus the seemingly more intuitive direction - where discrete stuff creates an illusion of continuity.

The book you mention looks right on target...

I assume you knew his site existed (an on-line version of the book). I just found it but I'm a bit afraid to post the link here. I think I will have to own the actual book tho...

I am also really looking forward to Bohemian Rhapsody.
 

DarMM

Science Advisor
Gold Member
2,233
1,267
Okay, I meant to come back to this. As I said, I agree with you in the main. It's more that I'm just not sure what you're actually disagreeing with, and I think you're being very dismissive of a field without providing much reason.

It's more important than you realize, as it makes or breaks everything even given the truth of the 5 other assumptions you are referring to. If, for example, unitarity is not actually 100% true in nature, then many no-go theorems lose their validity.
Which no-go theorems? Not PBR, not Bell's, not Kochen-Specker, not Hardy's excess baggage theorem, not the absence of maximally epistemic theories. What are these many theorems?

Bell's theorem, for example, would survive, because it doesn't make the same assumptions/'mistakes' some of the others do.
Most of the major no-go theorems take place in the same framework as Bell's theorem, e.g. Kochen-Specker, Hardy. What's an example of one that could fail while Bell's would still stand?
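As an aside for readers following along: the content of Bell's theorem in its CHSH form is easy to check numerically. For the singlet state, QM predicts the correlation E(a,b) = -cos(a-b), and at the optimal measurement angles the CHSH combination reaches 2√2, violating the local-hidden-variable bound of 2. A quick sketch of this standard textbook computation (not specific to this thread):

```python
import math

# Singlet-state correlation for spin measurements at angles a, b:
# E(a, b) = -cos(a - b), the standard QM prediction.
def E(a, b):
    return -math.cos(a - b)

# Angles that maximize the CHSH expression
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828, above the classical CHSH bound of 2
```

Any local hidden-variable model assigning definite +/-1 outcomes keeps S at or below 2, which is exactly what the theorem rules out.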

I think you are misunderstanding me, but maybe only slightly. The reason I asked about the properties of the resulting state space is to discover if these properties are necessarily part of all models which are extensions of QM. It seems very clear to me that being integrable isn't the most important property.
No, it mightn't be, but nobody is saying it is. It more highlights an interesting possibility: that you might need an unmeasurable space, and those are never really looked at.

Yes, definitely.
Sorry, but you really think most of the no-go theorems are nonsense that's as useful as saying "physics uses numbers"? The PBR theorem, the Pusey-Leifer theorem, etc. are just contentless garbage? If not, could you tell me which are?

I still don't think taking the state space to be "at least measurable" is devoid of content and as meaningful as saying "physics uses numbers". It's setting out what models are considered. In fact I would say it strengthens the theorems considering how weak an assumption it is.

Also I still don't understand how it is necessarily epistemic. A measurable space might be put to an epistemic use, but I don't see how it is intrinsically so.

A model moving beyond QM may either change the axioms of QM or not. These changes may be non-trivial or not. Some of these changes may not yet have been implemented in the particular version of that model for whatever reason (usually 'first study the simple version, then the harder version'). It isn't clear to me whether some (if not most) of the no-go theorems are taking such factors into account.
So your main objection to the framework is that it might unfairly eliminate a model in the early stages of development? In other words, an earlier, simpler version of an idea might have some interesting insights, but its early form, being susceptible to the no-go theorems, might be unfairly dismissed without being given time to advance to a form that isn't - one that might help us understand/supersede QM?
 
40
2
This is an intriguing proposition. As noted, self-organizing dynamics occur on a myriad of scales, are robust and have an extensive mathematical basis. Speaking with a very superficial understanding, it feels organic rather than mechanistic and potentially rooted in a new foundational paradigm. Having just read something about Bohmian mechanics it feels like the two might go together.
 
630
389
I assume you knew his site existed (an on-line version of the book). I just found it but I'm a bit afraid to post the link here. I think I will have to own the actual book tho...
Whose book is online?
It more highlights an interesting possibility, that you might need an unmeasurable space and those are never really looked at.
Now this is indeed an intriguing possibility.
Sorry, but you really think most of the no-go theorems are nonsense that's as useful as saying "physics uses numbers"?
I was being a bit derisive of them; they clearly aren't mere nonsense. But I would say that you yourself are making light of the statement that physics uses numbers; the fact that physics uses real numbers and complex numbers is quite profound in its own right, perhaps more so than the state space being measurable.

My point is that no-go theorems which are about theories instead of about physical phenomena aren't actually theorems belonging to physics, but instead theorems belonging to logic, mathematics and philosophy; see e.g. Gleason's theorem for another such extra-physical theorem pretending to be physics proper.

There is no precedent whatsoever within the practice of physics for such kinds of theorems, which is why it isn't clear at all that the statistical utility of such theorems for non-empirical theory selection is actually a valid methodology - and there is a good reason for that: how would the sensitivity and specificity w.r.t. the viability of theories be accounted for if the empirically discriminatory test is a non-empirical theorem?

It is unclear whether such a non-empirical tool is epistemologically - i.e. scientifically - capable of coherently doing anything other than demonstrating consistency with unmodified QM/QFT. If this is all the theorems are capable of, then sure, they aren't useless, but they aren't nearly as interesting if QM is in fact in need of modification - just as all known theories in physics so far have been in need of modification.

Physics is not mathematics, philosophy or logic; it is an empirical science. This means that all of the above would have to be answered before advising or encouraging theorists to use such theorems in practice to assess the likelihood of a theory beyond QM in such a statistical manner. To put it bluntly, scientifically these theorems might just end up proving to be 'not even wrong'.
If not could you tell me which are?
I'll get back to this.
Also I still don't understand how it is necessarily epistemic. A measurable space might be put to an epistemic use, but I don't see how it is intrinsically so.
If some necessary particular mathematical ingredients such as geometric or topological aspects are removed, physical content may be removed as well; what randomly ends up getting left may just turn out to be irrelevant fluff, physically speaking.
So your main objection to the framework is that it might unfairly eliminate a model in the early stages of development? In other words, an earlier simpler version of an idea might have some interesting insights, but it's early form, being susceptible to the no-go theorems, might be unfairly dismissed without being given time to advance to a form that doesn't and might help us understand/supersede QM?
Partially yes, especially given the lack of precedent for using theorems (which might belong more to mathematics or to philosophy instead of to physics) in such a non-empirical statistical selection procedure.
This is an intriguing proposition. As noted, self-organizing dynamics occur on a myriad of scales, are robust and have an extensive mathematical basis. Speaking with a very superficial understanding, it feels organic rather than mechanistic and potentially rooted in a new foundational paradigm. Having just read something about Bohmian mechanics it feels like the two might go together.
There seems to be at least one link with BM, namely that Manasson's model seems to be fully consistent with Nelson's fully Bohmian program of stochastic mechanics.
 
630
389
To get back to this:
It more highlights an interesting possibility, that you might need an unmeasurable space and those are never really looked at.
I said earlier that that was an intriguing possibility, but this is actually my entire point: monkeying with the topology and/or the fractality of (a subset of a) space may influence its measurability.

Therefore prematurely excluding theories purely on the basis of their state spaces being (or locally seeming) measurable, is in theoretical practice almost guaranteed to lead to a high degree of false positive exclusions.
 

Fra

3,073
142
I agree that being "measurable" is a key topic in this discussion, in particular considering the physical basis of what being measurable is. In probabilistic inference, the measure is essential in order to quantify and rate empirical evidence. This is essential to the program, so I would say the insight is not to release ourselves from the requirements of measurability - that would be a mistake in the wrong direction. I think the insight must be that what is measurable relative to one observer need not be measurable with respect to another observer. This all begs for a new intrinsic framework for probabilistic inference, one that lacks global or observer-invariant measures.

If we think about how intrinsic geometry originated from asking how a life form unaware of an embedding geometry can infer geometry from local experiments within the surface, and translate that to asking how an information-processing agent unaware of the embedding truth can infer things from incomplete knowledge confined only to its limited processing power: what kind of mathematics will that yield us? Then let's try to phrase or reconstruct QM in those terms. Note that this would forbid things like infinite ensembles or infinite repeats of experiments. It would force us to formulate QM foundations with the same constraints we live with for cosmological theories.

A side note: Merry Christmas :)

/Fredrik
 
114
59
The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems. Moreover, the dissipative structure gives rise to discontinuities within the equations and so unintentionally also gives an actual prediction/explanation of state vector reduction, i.e. it offers an actual resolution of the measurement problem of QT.
Unless I seriously missed something in that article, it isn't very convincing at all. In particular, he describes this self organization as a self organization of the vacuum. However, without quantum field theory, you have nothing which defines a vacuum state and nothing to self organize.
 
630
389
In particular, he describes this self organization as a self organization of the vacuum. However, without quantum field theory, you have nothing which defines a vacuum state and nothing to self organize.
The author - without planning to do so - makes a (seemingly) unrelated mathematical argument based on a clear hypothesis and then spontaneously goes on to derive the complete dynamical spinor state set, i.e. the foundation of Dirac theory, from first principles by doing pure mathematics in state space on purely empirical grounds.

Quantum field theory, despite being the original context in which vacuum states were predicted theoretically and discovered experimentally, certainly isn't the only possible theory capable of describing the vacuum.

After experimental discovery has taken place, theorists are free to extend the modelling of any empirically occurring phenomenon using any branch of mathematics which seems fit to do so: this is how physics has always worked.

For the vacuum this proliferation of models has already occurred, i.e. the vacuum isn't a unique feature of QFT anymore; any theory aiming to go beyond QFT has to describe the vacuum as part of nature; how it does so depends on the underlying mathematics.
 
