A Quantization isn't fundamental

Auto-Didact
This thread is a direct offshoot of this post from the thread Atiyah's arithmetic physics.

Manasson V. 2008, Are Particles Self-Organized Systems?
Abstract said:
Elementary particles possesses quantized values of charge and internal angular momentum or spin. These characteristics do not change when the particles interact with other particles or fields as long as they preserve their entities. Quantum theory does not explain this quantization. It is introduced into the theory a priori. An interacting particle is an open system and thus does not obey conservation laws. However, an open system may create dynamically stable states with unchanged dynamical variables via self-organization. In self-organized systems stability is achieved through the interplay of nonlinearity and dissipation. Can self-organization be responsible for particle formation? In this paper we develop and analyze a particle model based on qualitative dynamics and the Feigenbaum universality. This model demonstrates that elementary particles can be described as self-organized dynamical systems belonging to a wide class of systems characterized by a hierarchy of period-doubling bifurcations. This semi-qualitative heuristic model gives possible explanations for charge and action quantization, and the origination and interrelation between the strong, weak, and electromagnetic forces, as well as SU(2) symmetry. It also provides a basis for particle taxonomy endorsed by the Standard Model. The key result is the discovery that the Planck constant is intimately related to elementary charge.

The author convincingly demonstrates that practically everything known about particle physics, including the SM itself, can be derived from first principles by treating the electron as an evolved self-organized open system in the context of dissipative nonlinear systems. Moreover, the dissipative structure gives rise to discontinuities within the equations and so unintentionally also gives an actual prediction/explanation of state vector reduction, i.e. it offers an actual resolution of the measurement problem of QT.

However, this paper goes much further: quantization itself, which is usually assumed a priori as fundamental, is shown here (on page 6) to originate naturally as a dissipative phenomenon emerging from the underlying nonlinear dynamics of a system near stable superattractors; i.e. quantization arises as a limiting case of the interplay between nonlinearity and dissipation.
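For readers less familiar with the dynamical-systems side, the canonical toy example of this route to discreteness is the logistic map. Below is a minimal sketch (my own illustration, not taken from Manasson's paper) that numerically locates the parameters of the superstable (supercycle) ##2^k##-cycles and estimates the Feigenbaum constant ##\delta## from their spacings:

```python
import numpy as np

def iterate(r, x, n):
    # apply the logistic map x -> r*x*(1-x) n times
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def superstable_r(k, r_start, r_end=3.5699, steps=4000):
    """Parameter of the superstable 2**k-cycle: the critical point x = 1/2
    returns to itself after 2**k steps.  Found by scanning for the first
    sign change of g(r) = f^(2**k)(1/2) - 1/2 and refining by bisection."""
    g = lambda r: iterate(r, 0.5, 2 ** k) - 0.5
    rs = np.linspace(r_start, r_end, steps)
    gs = np.array([g(r) for r in rs])
    i = int(np.where(np.sign(gs[:-1]) != np.sign(gs[1:]))[0][0])
    a, b = rs[i], rs[i + 1]
    for _ in range(60):                      # plain bisection
        m = 0.5 * (a + b)
        if np.sign(g(m)) == np.sign(g(a)):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

R = [2.0]                                    # r = 2: x = 1/2 is a superstable fixed point
for k in range(1, 7):
    R.append(superstable_r(k, R[-1] + 1e-4))

for k in range(1, len(R) - 1):
    print(f"k={k}:  delta estimate = {(R[k] - R[k-1]) / (R[k+1] - R[k]):.4f}")
```

The printed ratios approach ##\delta \approx 4.669##, and the attractor at each of these parameters is a finite, discrete set of orbit points - which is the sense in which "allowed values" become quantized in this picture.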

Furthermore, using standard tools from nonlinear dynamics and chaos theory, in particular period doubling bifurcations and the Feigenbaum constant ##\delta##, the author then goes on to derive:
- the origin of spin half and other SU(2) symmetries
- the origin of the quantization of action and charge
- the coupling constants for strong, weak and EM interactions
- the number and types of fields
- an explanation of the fine structure constant ##\alpha##:
$$ \alpha = \left(2\pi\delta^2\right)^{-1} \cong \frac {1} {137}$$
- a relationship between ##\hbar## and ##e##:$$ \hbar = \frac {\delta^2 e^2} {2} \sqrt {\frac {\mu_0} {\epsilon_0}}$$
In particular the above equation suggests a great irony about the supposed fundamentality of quantum theory itself; as the author puts it himself:
page 10 said:
Ironically, the two most fundamental quantum constants, ##\hbar## and ##e##, are linked through the Feigenbaum ##\delta##, a constant that belongs to the physics of deterministic chaos and is thus exclusively non-quantum.

Our results are assonant with ’t Hooft’s proposal that the theory underlying quantum mechanics may be dissipative [15]. They also suggest that quantum theory, albeit being both powerful and beautiful, may be just a quasi-linear approximation to a deeper theory describing the non-linear world of elementary particles. As one of the founders of quantum theory, Werner Heisenberg once stated, “. . . it may be that. . . the actual treatment of nonlinear equations can be replaced by the study of infinite processes concerning systems of linear differential equations with an arbitrary number of variables, and the solution of the nonlinear equation can be obtained by a limiting process from the solutions of linear equations. This situation resembles the other one. . . where by an infinite process one can approach the nonlinear three-body problem in classical mechanics from the linear three-body problem of quantum mechanics.”[11]
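For what it's worth, the two relations above are easy to check numerically; here is a quick sanity check of my own using CODATA values and the standard value of the Feigenbaum constant (this code is not from the paper):

```python
import numpy as np

delta = 4.669201609      # Feigenbaum constant
e     = 1.602176634e-19  # elementary charge [C]
eps0  = 8.8541878128e-12 # vacuum permittivity [F/m]
mu0   = 1.25663706212e-6 # vacuum permeability [H/m]
c     = 299792458.0      # speed of light [m/s]
hbar  = 1.054571817e-34  # reduced Planck constant [J s]

alpha_manasson = 1.0 / (2 * np.pi * delta**2)        # alpha ~ (2 pi delta^2)^-1
alpha_codata   = e**2 / (4 * np.pi * eps0 * hbar * c)
hbar_manasson  = (delta**2 * e**2 / 2) * np.sqrt(mu0 / eps0)

print(f"1/alpha: {1/alpha_manasson:.2f} (Manasson)  vs  {1/alpha_codata:.3f} (CODATA)")
print(f"hbar:    {hbar_manasson:.4e} (Manasson)  vs  {hbar:.4e} (CODATA)")
```

Both come out within a fraction of a percent of the measured values, which is the numerology the paper builds on.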
Suffice to say, this paper is a must-read. Many thanks to @mitchell porter for linking it and to Sir Michael Atiyah for reigniting the entire discussion in the first place.
 
Btw, if there is any doubt, it should be clear that I realize that Manasson's specific model isn't necessarily correct, and I am in no way promulgating his views here as being absolutely true. I do however distinctly believe that Manasson's theory isn't just run-of-the-mill pseudoscientific mumbo jumbo. Instead I believe that what he is saying is just so highly non-traditional that most readers - especially those deeply familiar with QT but relatively unfamiliar with either the practice of or the literature on dynamical systems analysis - have an extremely high probability of just outright calling it heresy with regard to established contemporary physics. Manasson is after all literally proposing that conservation laws in QT might be an emergent phenomenon and that therefore everything physicists think and claim to know about the fundamental nature of symmetries, Noether's theorem, gauge theory and group theory is hopelessly misguided; if this doesn't strike one as heresy in the contemporary practice of physics, I don't know what does!

There are other things which should be noted as well. Reading his paper critically, for example, it is obvious his specific equations may be off by a small numerical factor; this however is almost always the case when constructing a preliminary model based on dimensional analysis and therefore shouldn't be cause for outright dismissal. His equations as yet seem to have no known geometric interpretation, nor do they correspond to some known dimensionless group; this already shows that his specific equations are tentative rather than definitive. It should also be obvious from the fact that he hasn't actually published this paper in a journal despite having submitted it to the arXiv 10 years ago. Curiously he doesn't have any other publications on the arXiv either, and only one recent publication in a Chinese journal.

Regardless of the exact numerics, Manasson's approach in this paper, i.e. reformulating the axioms of QT by identifying QT's a priori mathematical properties as resulting from a limiting case of an exactly linearizable dissipative nonlinear dynamical system, is a very promising theoretical direction which is novel yet distinctly mainstream (see e.g. Dieter Schuch, 2018, "Quantum Theory from a Nonlinear Perspective: Riccati Equations in Fundamental Physics"). Moreover, this particular qualitative theoretical method is used constantly with other theories, both within mainstream physics, e.g. soft matter and biophysics, and especially outside of physics; importantly, in the practice of dynamical systems this is literally the method of empirically establishing the qualitative mathematical properties of some extremely complicated system, after which a particular NDE (nonlinear differential equation) or some class of NDEs may be guessed.

The reason I am posting Manasson's theory is because I couldn't find any earlier threads on it and I believe that this approach definitely warrants further investigation, whether or not it turns out to be correct in the end. Lastly, other very prominent theorists have suggested lines of reasoning very similar to Manasson's without actually giving a mathematical model, in particular both Roger Penrose (1. in the form of a nonlinear reformulation of the linear characteristics of QM being necessary in order to unite it with GR, and 2. the idea that the fundamental laws of physics should be less symmetrical instead of more symmetrical) and Gerard 't Hooft (in the form of the idea that a dissipative theory might underlie QM); moreover, as both Richard Feynman and Lee Smolin have remarked, what may turn out to be wrong with the practice of theoretical physics is the assumption of the timelessness/ahistorical nature of (some) physical laws; dynamically evolved laws, e.g. in the form that Manasson is proposing here, would address these points as well. These coincidences only make me more curious about Manasson's proposal. Incidentally, it also reminds me of something Feynman said about symmetry:
Feynman Lectures said:
We have, in our minds, a tendency to accept symmetry as some kind of perfection. In fact it is like the old idea of the Greeks that circles were perfect, and it was rather horrible to believe that the planetary orbits were not circles, but only nearly circles. The difference between being a circle and being nearly a circle is not a small difference, it is a fundamental change so far as the mind is concerned.

There is a sign of perfection and symmetry in a circle that is not there the moment the circle is slightly off—that is the end of it—it is no longer symmetrical. Then the question is why it is only nearly a circle—that is a much more difficult question. The actual motion of the planets, in general, should be ellipses, but during the ages, because of tidal forces, and so on, they have been made almost symmetrical.

Now the question is whether we have a similar problem here. The problem from the point of view of the circles is if they were perfect circles there would be nothing to explain, that is clearly simple. But since they are only nearly circles, there is a lot to explain, and the result turned out to be a big dynamical problem, and now our problem is to explain why they are nearly symmetrical by looking at tidal forces and so on.

So our problem is to explain where symmetry comes from. Why is nature so nearly symmetrical? No one has any idea why.
The Character of Physical Law said:
Another problem we have is the meaning of the partial symmetries. These symmetries, like the statement that neutrons and protons are nearly the same but are not the same for electricity, or the fact that the law of reflection symmetry is perfect except for one kind of reaction, are very annoying. The thing is almost symmetrical but not completely.

Now two schools of thought exist. One will say that it is really simple, that they are really symmetrical but that there is a little complication which knocks it a bit cock-eyed [NB: symmetry breaking]. Then there is another school of thought, which has only one representative, myself, which says no, the thing may be complicated and become simple only through the complications.

The Greeks believed that the orbits of the planets were circles. Actually they are ellipses. They are not quite symmetrical, but they are very close to circles. The question is, why are they very close to circles? Why are they nearly symmetrical? Because of a long complicated effect of tidal friction, a very complicated idea.

It is possible that nature in her heart is completely unsymmetrical in these things, but in the complexities of reality it gets to look approximately as if it is symmetrical, and the ellipses look almost like circles. That is another possibility; but nobody knows, it is just guesswork.
As we can see from Feynman's points, it would definitely not be the first time in the history of physics that an ideal such as symmetry would end up having to be replaced; not simply by some small fix-up like symmetry breaking, but more radically, by finding some underlying dynamical theory. It goes without saying that chaos theory and nonlinear dynamics only really came into their own as large highly interdisciplinary fields of science after Feynman stopped doing physics/passed away; suffice to say it would have been incredibly interesting to know what he would've thought about them.
 
Last edited:
  • Like
Likes Fra
It's a bit depressing to always have too little time for things; I didn't yet read the paper in detail and I have no opinion of the author, but your description here makes me bite.

First a general comment: if questioning the constructing principles following from timeless symmetries, on which most modern understanding of physics rests, is heresy, then I would say the next genius that physics needs to trigger the paradigm shift needed to solve its open questions is likely a heretic by definition. So no shame per se in being labeled a heretic.
Auto-Didact said:
Regardless of the exact numerics, Manasson's approach in this paper, i.e. reformulating the axioms of QT by identifying QT's a priori mathematical properties as resulting from a limiting case of an exactly linearizable dissipative nonlinear dynamical system, is a very promising theoretical direction which is novel yet distinctly mainstream (see e.g. Dieter Schuch, 2018, "Quantum Theory from a Nonlinear Perspective: Riccati Equations in Fundamental Physics"). Moreover, this particular qualitative theoretical method is used constantly with other theories, both within mainstream physics, e.g. soft matter and biophysics, and especially outside of physics; importantly, in the practice of dynamical systems this is literally the method of empirically establishing the qualitative mathematical properties of some extremely complicated system, after which a particular NDE (nonlinear differential equation) or some class of NDEs may be guessed.
...
The reason I am posting Manasson's theory is because I couldn't find any earlier threads on it and I believe that this approach definitely warrants further investigation, whether or not it turns out to be correct in the end. Lastly, other very prominent theorists have suggested lines of reasoning very similar to Manasson's without actually giving a mathematical model, in particular both Roger Penrose (1. in the form of a nonlinear reformulation of the linear characteristics of QM being necessary in order to unite it with GR, and 2. the idea that the fundamental laws of physics should be less symmetrical instead of more symmetrical) and Gerard 't Hooft (in the form of the idea that a dissipative theory might underlie QM); moreover, as both Richard Feynman and Lee Smolin have remarked, what may turn out to be wrong with the practice of theoretical physics is the assumption of the timelessness/ahistorical nature of (some) physical laws; dynamically evolved laws, e.g. in the form that Manasson is proposing here, would address these points as well. These coincidences only make me more curious about Manasson's proposal. Incidentally, it also reminds me of something Feynman said about symmetry:
...
As we can see from Feynman's points, it would definitely not be the first time in the history of physics that an ideal such as symmetry would end up having to be replaced; not simply by some small fix-up like symmetry breaking, but more radically, by finding some underlying dynamical theory. It goes without saying that chaos theory and nonlinear dynamics only really came into their own as large highly interdisciplinary fields of science after Feynman stopped doing physics/passed away; suffice to say it would have been incredibly interesting to know what he would've thought about them.

I have a different quantitative starting point but some of the core conceptual ideas of the paper are fully in line with my thinking. And indeed it's very hard to convince different thinkers of the plausibility of these ideas, because they are indeed a heresy to much of the constructing principles of modern physics. And even discussing this unavoidably gets into philosophy-of-science discussions, which immediately makes some people stop listening. This is why what it takes is for some of the bold heretics to make progress in silence, and then publish it in a state that is mature enough to knock doubters off their chairs, although that is an almost inhuman task to accomplish for a single person.

1) The idea that elementary particles (or any stable system for that matter) are the result of self-organisation in a chaotic environment is exactly what evolution of law also implies. In essence the population of elementary particles and their properties implicitly encode the physical laws. So the stability of laws in time is just the other side of the coin of determining the particle zoo.

The challenge for this program is first of all to explain the stability of physical law, if we claim it is fundamentally chaotic - this is where the superattractors come in. Attractor is a better word than equilibrium, but they are related. I.e. one needs to relax the laws without introducing chaos at the macrolevel, and here SOS (self-organised systems) is the key. I fully share this view. But the mathematical details are a different story.

One way to understand the stabilizing mechanism by which elementary particles encode reduced information about their environment is to consider the topic of compressed sensing, which is a commonly used technique in signal processing (Fourier analysis) and also how neuroscientists believe the brain works: i.e. the brain is not a data recorder, it re-encodes information and stores it in a way that increases the chance of survival considering the expected future.
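To fix ideas, here is a minimal sketch of compressed sensing in the plain signal-processing sense - a random sensing matrix plus greedy sparse recovery. This is only an illustration of the generic technique (the sizes and setup are made up), not of any particle model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5          # signal length, number of measurements, sparsity

x = np.zeros(n)               # a k-sparse "signal"
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # m << n compressed measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the chosen support.
residual, chosen = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    chosen.append(j)
    coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The point of the analogy is only that far fewer measurements than signal dimensions suffice when the relevant structure is sparse; what is thrown away is exactly the part the encoder treats as noise.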

This is the exact analogy of how, conceptually, an elementary particle's internal structure, mass and charges are tuned to be maximally stable against a hostile (noisy, challenging) environment. Once you really understand the depth of this I find it hard not to be committed.

This is of course also another way to integrate the renormalization process with physical interactions, and with that the formation of the stable, self-organised information processing agents we later label elementary particles.

My personal approach here is that I have pretty much abandoned the idea of trying to convince those who do not want to understand. Instead I have taken on the impossible task of trying to work this out on my own. And likely there are a bunch of other heretics out there with similar thinking, and hopefully someone will get the time to mature things into something worth publishing. The risk of publishing something too early is obvious.

What is needed is to take ideas from toy models and toy dimensions, make contact with contemporary physics, and make explicit pre- or postdictions about relations between some of the parameters of the standard model, or find a consistent framework of QG that is still lacking, etc. Anything less than this is likely to be shot down immediately by those who defend the conventional paradigm. The first impression from just skimming the paper is that it is a qualitative toy model more than anything else. But as Auto-Didact says, there is no reason to let incomplete details haze our view of a clear conceptual vision.

/Fredrik
 
  • Like
Likes Andrea Panza, akvadrako and Auto-Didact
Auto-Didact said:
Regardless of the exact numerics, Manasson's approach in this paper, i.e. reformulating the axioms of QT by identifying QT's a priori mathematical properties as resulting from a limiting case of an exactly linearizable dissipative nonlinear dynamical system, is a very promising theoretical direction which is novel yet distinctly mainstream (see e.g. Dieter Schuch, 2018, "Quantum Theory from a Nonlinear Perspective: Riccati Equations in Fundamental Physics").

I will throw in some conceptual things here in how i understand this:

I think of the "dissipation" of a small system in a chaotic environment as a consequence of the fact that a small information processing agent cannot encode and retain all information about its own "observations"; it will rather - just like we think human brains do - perform a compressed sensing, retain what is most important for survival, and dissipate (discard) the rest. What is discarded is what this observer considers to be "random noise". This has deep implications also for understanding micro black holes, because the "randomness" of the radiation is observer dependent. There is no such thing as objective randomness, only the inability of particular observers to decode it.

During this process the information processing agent is evolving and either gets destabilised or stabilised. Over time, the surviving structures will in some sense be in a steady state where the information processing agents are in a kind of agreement with the chaotic environment, relative to the existing communication channel. In this sense one can consider the "theory" encoded by the small elementary particle a naked version of the renormalized theory that is encoded in the environment. The question of unification, where we think that the laws will be unified at TOE energies, is the analog of this: the TOE energy scale is where the energy is so high that any evolved information processing agents are disintegrated. So evolution of law is what happens as the universe cools down, with the caveat that it is probably wrong to think of this cooling process in terms of thermodynamics, because there is no outside observer. The trouble is that we have only an inside view of this. This is where one often sees fallacious reasoning, as one tries to describe this as per a Newtonian schema (to use Smolin's words). That is how I think of it. The "formation" of the first "proto-observers" in this process, as we increase the complexity of observers, is where one eventually needs to make contact with elementary particles. So there should be a one-to-one relation between the particle zoo and the laws of physics. My personal twist here is that I also associate the "particle zoo" with the "observer zoo". This is how you get the intertwining mix of quantum foundations and self-organisation.

These ideas themselves do not rule out, say, string theory; they could be compatible with string theory as well if you understand the comments in the context of evolution in the landscape and maybe even as a pre-string era.

This way of thinking leads to a number of "problems" though, such as circular reasoning and the problem of meta-law. See Smolin's writings on this to see what he means by meta-law. How do you get a grip on this? This is a problem, and not an easy one. This is why it's easier to put things in by hand, so at least you have a starting point. In an evolutionary picture, what is the natural starting point?

/Fredrik
 
It seems to me that any attempt to explain quantum mechanics in terms of something more fundamental would run into the problem of accounting for Bell's Theorem. How can self-organizing systems explain EPR-type nonlocal correlations?
 
stevendaryl said:
It seems to me that any attempt to explain quantum mechanics in terms of something more fundamental would run into the problem of accounting for Bell's Theorem. How can self-organizing systems explain EPR-type nonlocal correlations?
Glad you asked. Back in 2016 there was an experiment by Neill et al. which seemed to show an experimentally based mathematical correspondence between the entanglement entropy of a few superconducting qubits on the one hand and the chaotic phase space dynamics in the classical limit on the other hand. This implies that entanglement and deterministic chaos are somehow linked through ergodicity and offers interesting insights into non-equilibrium thermodynamics, directly relating it to the notion of an open dissipative nonlinear dynamical system as proposed here.

I posted a thread on this particular experiment a year ago, but unfortunately got no replies (link is here), and I haven't done any followup reading since then to see if there have been any new experimental developments. In any case, if you want to read the paper describing that particular experiment I'd love to hear your thoughts. As is, I think it might already go a fair way towards 'explaining' entanglement purely on chaotic and ergodic grounds.
 
If you go through Bell's argument leading to his inequality, it seems that the class of theories that are ruled out by EPR would include the sort of self-organizing systems that you're talking about, as long as the interactions are all local. Whether the dynamics is chaotic or not doesn't seem to come into play.
 
  • Like
Likes DrClaude and anorlunda
That simply isn't necessarily true if:
a) there is a nonlinear reformulation of QM which is inherently chaotic in some spacetime reformulation like 2-spinors or some holomorphic extension thereof like twistor theory, which after geometric quantization can reproduce the phenomenon of entanglement,
b) there exists a mathematical correspondence between entanglement and chaos,
c) some special combination of the above.

The only way one can argue against this point is to assume that linearity and unitarity are unique, necessary axioms of QT and then view QT as an unalterable, completed theory, something which it almost certainly isn't. This is of course the standard argument most contemporary physicists do make, i.e. they rely on a premature axiomatization of physics based on unitarity and then elevate symmetry to a fundamental notion.

The problem with such an axiomatic stance w.r.t. unitarity is that physical theory is incomplete and - because physics is an experimental science wherein the outcomes of future experiments are unknown - there can never truly be a point where one may validly conclude that physical theory has actually become complete. Therefore such an in principle axiomatization will in practice almost always be invalid reasoning in the context of (fundamental) physics; this is extremely confusing because the exact same argument is valid reasoning in the context of mathematics, precisely because mathematics is completely unempirical in stark contrast to physics.
 
Auto-Didact said:
That simply isn't necessarily true if:
a) there is a nonlinear reformulation of QM which is inherently chaotic in some spacetime reformulation like 2-spinors or some holomorphic extension thereof like twistor theory, which after geometric quantization can reproduce the phenomenon of entanglement,
b) there exists a mathematical correspondence between entanglement and chaos,
c) some special combination of the above.

I don't see how Bell's proof is affected by chaos.
 
  • #10
stevendaryl said:
I don't see how Bell's proof is affected by chaos.
You misunderstand my point: entanglement wouldn't be affected by chaos, instead entanglement would be itself a chaotic phenomenon instead of a quantum phenomenon, because all quantum mathematics would actually be linear approximations of chaotic mathematics. Bell's theorem doesn't exclude non-local hidden variables; in essence some kind of Lorentzian i.e. conformal spacetime formulation like projective twistor space would enable exactly such non-locality per GR.

This is entirely possible given that dynamical systems theory/chaos theory is still an immensely large open field of mathematics where new mathematical objects are being discovered pretty much daily, unifying widely different branches of mathematics ranging from complex analysis, to fractal geometry, to non-equilibrium statistical mechanics, to modern network theory, to bifurcation theory, to universality theory, to renormalization group theory, to conformal geometry, and so on.

Manasson's point is precisely that the actual underlying origin of quantization, i.e. the quantumness of phenomena in nature, is really at bottom a very particular kind of chaotic phenomenon based in supercycle attractors, and that standard linear quantum mechanics is simply a limiting linear case of this underlying nonlinear theory.
 
  • #11
I only share traits of the idea mentioned in the paper, but at least what I had in mind does not involve restoring realism at all. Actually, relaxing the deductive structure (which I consider to follow from what I wrote before) suggests removing even more of realism. QM removes some realism, but we still have realism left in the objectivity of physical law, which is assumed to be timeless, eternal truth. The "realism" condition in the Bell argument is a deductive rule from a hidden variable to an outcome. What I have in mind means that this deductive link itself is necessarily uncertain, not to mention what a totally chaotic dependence would imply. It would imply that the function representing realism in the proof would not be inferable, due to chaos. To just assume it exists, while it's obvious that it's not inferable from experiments, is to me an invalid assumption.

Anyway, in my eyes this was not the interesting part of the idea. I do not see lack of realism as a problem. I rather see the realism of physical law as we know it as a problem, because it is put in there in an ad hoc way by theorists ;-) Emergence aims to increase explanatory power by finding evolutionary and self-organisational reasons for why the laws are what they are.

/Fredrik
 
  • #12
Auto-Didact said:
You misunderstand my point: entanglement wouldn't be affected by chaos, instead entanglement would be itself a chaotic phenomenon instead of a quantum phenomenon, because all quantum mathematics would actually be linear approximations of chaotic mathematics.
I agree with @stevendaryl , Bell's theorem doesn't really use QM, just the assumptions of:
  1. Single Outcomes
  2. Lack of super-determinism
  3. Lack of retrocausality
  4. Presence of common causes
  5. Decorrelating Explanations (combination of 4. and 5. normally called Reichenbach's Common Cause principle)
  6. Relativistic causation (no interactions beyond light cone)
I don't see how chaotic dynamics gets around this.
 
  • Like
Likes no-ir and Demystifier
  • #13
I don't see that the purpose is to attack Bells theorem.

As I see it, the idea of particles, and thus physical law, evolving to represent a kind of maximally compressed encoding of their chaotic environment leads to the insight that there can be observer dependent information that is fully decoupled and effectively isolated from the environment due to the mechanism of compressed encoding. But this does not mean that one can draw the conclusion that hidden variable theories are useful; it rather says, oppositely, that they are indistinguishable from non-existing hidden variables from the point of view of inference, and part of the evolved compressed sensing paradigm means that any inference is "truncated", respecting the limited computational capacity.

This rather has the potential to EXPLAIN quantum weirdness, but NOT in terms of a regular hidden variable theory of the kind forbidden by Bell's theorem, but in terms of a picture where subsystems are self-organised compressed sensing structures, which means that information can be genuinely hidden, as in observer dependent.

To FULLY explain this will require nothing less than solving the problem of course, which contains a lot of hard subproblems. But I personally think it's easy to see the conceptual visions here, though I also understand that different people have different incompatible visions.

/Fredrik
 
  • #14
Fra said:
This rather has the potential to EXPLAIN quantum weirdness, but NOT in terms of a regular hidden variable theory of the kind forbidden by Bell's theorem, but in terms of a picture where subsystems are self-organised compressed sensing structures, which means that information can be genuinely hidden, as in observer dependent.

I guess I'm repeating myself, but it seems to me that self-organizing systems IS a regular hidden variable theory. So it's already ruled out by Bell. Unless it's nonlocal (or superdeterministic).
 
  • #15
Fra said:
I don't see that the purpose is to attack Bells theorem.
I definitely don't think that he is attacking Bell's theorem, it's just that in a sense Bell inequality violating correlations are the most perplexing feature of QM. We know other aspects of quantum mechanics, e.g. superposition, interference, teleportation, superdense coding, indistinguishability of non-orthogonal states, non-commutativity of measurements, measurements unavoidably disturbing the system, etc. can be replicated by a local hidden variable theory. However post-classical correlations cannot be.

So anything claiming to replicate QM should first explain how it achieves these post-classical correlations. Replicating anything else is known to pose no problems.
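For concreteness, the standard quantitative example of such a post-classical correlation is the CHSH value; a minimal check for a Bell pair (textbook calculation, nothing specific to any of the models discussed above):

```python
import numpy as np

# Pauli matrices and the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def spin(theta):
    # spin observable along an axis at angle theta in the x-z plane
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # correlator <A(a) (x) B(b)> in the Bell state
    return np.real(phi.conj() @ np.kron(spin(a), spin(b)) @ phi)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)   # ~ 2.828 = 2*sqrt(2); any local hidden variable model obeys |S| <= 2
```

Any local hidden variable account, however nonlinear or chaotic its internals, is bounded by ##|S| \le 2##.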
 
Last edited:
  • Like
Likes no-ir, Demystifier and Auto-Didact
  • #16
stevendaryl said:
I guess I'm repeating myself, but it seems to me that self-organizing systems IS a regular hidden variable theory. So it's already ruled out by Bell. Unless it's nonlocal (or superdeterministic).
If this is your view, I understand your comments; so at this level we have no disagreement.

But what I have in mind with evolved particles is NOT a regular hidden variable theory. Let me think about how I can explain this better.

/Fredrik
 
Last edited:
  • #17
Fra said:
But what I have in mind with evolved particles is NOT a regular hidden variable theory. It is rather something extremely non-linear.

But "regular hidden variable" theory INCLUDES "extremely non-linear" systems. Bell's notion of a hidden-variables theory allows arbitrary interactions, as long as they are local. Nonlinearity is not ruled out.

(Actually, "nonlinear" by itself doesn't mean anything. You have to say what is linear in what.)
 
  • #18
DarMM said:
I agree with @stevendaryl , Bell's theorem doesn't really use QM, just the assumptions of:
  1. Single Outcomes
  2. Lack of super-determinism
  3. Lack of retrocausality
  4. Presence of common causes
  5. Decorrelating Explanations (combination of 4. and 5. normally called Reichenbach's Common Cause principle)
  6. Relativistic causation (no interactions beyond light cone)
I don't see how chaotic dynamics gets around this.

Again chaotic dynamics doesn't have to get around any of that. I will try to answer each point, by stating what I am assuming/hypothesizing:

1) Analyticity/holomorphicity making single outcomes a necessity per the conventional uniqueness and existence arguments.
2) No assumption of superdeterminism.
3) No assumption of retrocausality of matter or information. What may however 'travel' in either direction in time is quantum information; this is a very unfortunate misnomer because quantum information is not a form of information!
4 & 5) The physical common cause is the specific spacetime pathway connecting some EPR pair: quantum information (which is not information!) 'travels' on this path. Actually 'traveling' is an incorrect term, the quantum information merely exists non-locally on this spacetime path. An existing mathematical model capable of capturing this concept is spin network theory. For those who need reminding, spin network theory is a fully spaceless and timeless description wherein quantum information exists on the edges of the network.
6) All matter and information can only follow timelike and lightlike curves, respectively traveling within or on the light cones. Quantum information existing non-locally across the entire spacetime path connecting any entangled EPR pair doesn't violate this, being neither a form of matter nor of information. A relativistic generalization of spin network theory capable of describing this - in which this non-locality is inherently explicit - is twistor theory. Twistor theory moreover utilizes a symmetry group which is both compatible with relativity theory and has a representation theory which is semi-simple (in contrast to that of the Poincaré group), namely the conformal group, which is an extension of the Poincaré group.

An important point to make is that Manasson's theory, being dissipative in nature, automatically provides an underlying theory capable of explaining state vector reduction, i.e. wave function collapse, implying that it is a physically real phenomenon as well. This would automatically imply a psi-ontic interpretation, i.e. that the wave function is a really existing phenomenon in nature.

Notice that I am not saying that the above arguments are correct per se, but merely a logically valid possibility which is mathematically speaking completely conceivable and possibly even already directly constructable using existing mathematics.
 
Last edited:
  • #19
stevendaryl said:
I guess I'm repeating myself, but it seems to me that self-organizing systems IS a regular hidden variable theory. So it's already ruled out by Bell. Unless it's nonlocal (or superdeterministic).
It seems you are prematurely viewing the concept of self-organization from a very narrow viewpoint.

Moreover, entanglement is or at least can be understood fully as a non-local phenomenon; this isn't inconsistent with GR either.
stevendaryl said:
(Actually, "nonlinear" by itself doesn't mean anything. You have to say what is linear in what.)
In order to make this argument one actually doesn't need to explicitly specify linearity in what per se, but instead merely assume that the correct equation is or correct equations are nonlinear maps e.g. (some class of coupled) nonlinear PDEs instead of a linear PDE like the Schrodinger equation or coupled linear PDEs like the Dirac equation.
 
  • #20
Auto-Didact said:
It seems you are viewing the concept of self-organization from a very narrow view.

I haven't made any assumptions about self-organization, so I'm viewing it by the very broadest view---it could mean anything at all. Bell's theorem doesn't make any assumptions about whether the dynamics is self-organizing, or not.
 
  • #21
Auto-Didact said:
Again chaotic dynamics doesn't have to get around any of that. I will try to answer each point, by stating what I am assuming/hypothesizing:

1) Analyticity/holomorphicity making single outcomes a necessity per the conventional uniqueness and existence arguments.
2) No assumption of superdeterminism.
3) No assumption of retrocausality of matter or information.
4 & 5) The physical common cause is the specific spacetime pathway connecting some EPR pair
6) All matter and information can only follow timelike and lightlike curves...

Notice that I am not saying that the above arguments are correct per se, but merely a logically valid possibility which is mathematically speaking completely conceivable and possibly even already directly constructable using existing mathematics.
This will not simulate non-classical correlations then. You either need to have superdeterminism, retrocausality (i.e. actual information carrying physical interaction moving back in time), nonlocality (actual interaction at a distance), rejection of Reichenbach's principle (specifically the no decorrelating explanation part), rejection of single outcomes, or rejection of the concept of the fundamental dynamics being mathematical (i.e. anti scientific realism).

It doesn't matter if the dynamics is chaotic, dissipative and more nonlinear than anything ever conceived, unless one of these things is true Bell's theorem guarantees it will fail to replicate non-classical correlations.
 
  • #22
If that is your point of view then this doesn't follow:
stevendaryl said:
it seems to me that self-organizing systems IS a regular hidden variable theory
'Regular' strange attractors are already infinitely complicated due to topological mixing. Supercycle attractors on the other hand seem to increase the degree of complexity of this topological mixing by some arbitrarily high amount, such that the space taken up by the dense orbits of the original strange attractor - after bifurcating in this particular way - 'collapses' onto a very particular set of discrete orbitals, becoming in the context of QM indistinguishable from discrete quantum orbits.
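The distinguishing feature of a supercycle (superstable cycle) is that the cycle passes through the map's critical point, so its multiplier is exactly zero and nearby orbits collapse onto the discrete cycle superexponentially rather than merely geometrically. A minimal numerical illustration with the logistic map (my own toy example, not from the paper):

```python
import numpy as np

def f2(r, x):
    # one step of the period-2 return map (two logistic-map iterations)
    x = r * x * (1.0 - x)
    return r * x * (1.0 - x)

cases = [("superstable 2-cycle, r = 1+sqrt(5)", 1 + np.sqrt(5.0)),
         ("generic stable 2-cycle, r = 3.3   ", 3.3)]

for label, r in cases:
    x_star = 0.5
    for _ in range(10000):          # settle onto the period-2 point
        x_star = f2(r, x_star)
    x = x_star + 1e-2               # perturb off the cycle
    dists = []
    for _ in range(6):
        x = f2(r, x)
        dists.append(abs(x - x_star))
    print(label, ["%.1e" % d for d in dists])
# superstable case: the distance shrinks roughly quadratically each period
# generic case: the distance shrinks by a constant factor (the cycle multiplier)
```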
 
Last edited:
  • #23
DarMM said:
This will not simulate non-classical correlations then. You either need to have superdeterminism, retrocausality (i.e. actual information carrying physical interaction moving back in time), nonlocality (actual interaction at a distance), rejection of Reichenbach's principle (specifically the no decorrelating explanation part), rejection of single outcomes, or rejection of the concept of the fundamental dynamics being mathematical (i.e. anti scientific realism).

It doesn't matter if the dynamics is chaotic, dissipative and more nonlinear than anything ever conceived, unless one of these things is true Bell's theorem guarantees it will fail to replicate non-classical correlations.
? Did you miss that I specifically said that the entire scheme can consistently be made non-local using spin network theory or (the mathematics of) twistor theory?

Manasson's theory only explains quantisation; it isn't a theory of everything. Just adding spin networks to Manasson's preliminary model alone already seems to solve all the problems regarding being able to reproduce QM entirely.
 
  • #24
Auto-Didact said:
If that is your point of view then this doesn't follow:
'Regular' strange attractors are already infinitely complicated due to topological mixing. Supercycle attractors on the other hand seem to increase the degree of complexity of this topological mixing by some arbitrarily high amount, such that the space taken up by the dense orbits of the original strange attractor - after bifurcating in this particular way - 'collapses' onto a very particular set of discrete orbitals, becoming in the context of QM indistinguishable from discrete quantum orbits.

I think that you are misunderstanding my point. I don't care how complicated the dynamics are because Bell's theorem doesn't make any assumptions about complexity.
 
  • #25
As I have stated multiple times now, consistently adding something like spin networks or twistor theory to Manasson's theory immediately makes the resulting theory non-local, thereby removing the complaints you have regarding Bell's theorem. I see no reason why this cannot be done.
 
  • #26
Auto-Didact said:
? Did you miss that I specifically said that the entire scheme can consistently be made non-local using spin network theory or (the mathematics of) twistor theory?

Manasson's theory only explains quantisation; it isn't a theory of everything. Just adding spin networks to Manasson's preliminary model alone already seems to solve all the problems regarding being able to reproduce QM entirely.
I saw it, but I was confining discussion to Manasson's theory explicitly, possible modifications are hard to discuss if they are not developed.

However, to discuss that: I get that it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (any of nonlocal, Many-Worlds, retrocausal and superdeterministic models) display fine-tuning problems, as shown in the Wood-Spekkens and Pusey-Leifer theorems for example. It's not enough that something seems to solve the problem. If you try, an unnatural fine-tuning will emerge.
 
  • #27
Manasson's theory is clearly preliminary; just because it has not yet reproduced entanglement or Bell inequalities doesn't mean that it is wrong or of no value whatsoever. It is way too early to expect that from the theory.

The fact that it - in its very preliminary form - seems to be able to directly reproduce so much (quantisation, spinors, coupling constants of strong/weak/EM, resolve measurement problem) using so little, is what one should be focusing on.

No one ever said it would as is immediately reproduce QM fully, but instead that it gives an explanation for where quantization itself comes from, which implies QM is not the fundamental theory of nature.

Complaining that a preliminary model is wrong or not worth considering because it explains the origin of some phenomenon without also fully reproducing that phenomenon is making a serious category error. That would be analogous to a contemporary of Newton dismissing Newton and his work because Newton didn't invent a full theory of relativistic gravity and curved spacetime in one go.
DarMM said:
However, to discuss that: I get that it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (any of nonlocal, Many-Worlds, retrocausal and superdeterministic models) display fine-tuning problems, as shown in the Wood-Spekkens and Pusey-Leifer theorems for example. It's not enough that something seems to solve the problem. If you try, an unnatural fine-tuning will emerge.
Apart from the possible issue with fine-tuning, this part sounds thoroughly confused. QM itself can essentially be viewed as a non-local theory; this is what Bell's theorem shows. What I understood from Pusey and Leifer's paper was that QM may not just be non-local but may have an element of retrocausality as well, i.e. quantum information through entanglement can travel backwards in time while not being a form of signalling, i.e. quantum information not being information. How is this any different from what I am arguing for?
 
  • #28
Auto-Didact said:
this is a very unfortunate misnomer because quantum information is not a form of information!
Hi Auto-Didact:

I would appreciate it if you would elaborate on this concept. Wikipedia
says
In physics and computer science, quantum information is information that is held in the state of a quantum system. Quantum information is the basic entity of study in quantum information theory, and can be manipulated using engineering techniques known as quantum information processing. Much like classical information can be processed with digital computers, transmitted from place to place, manipulated with algorithms, and analyzed with the mathematics of computer science, so also analogous concepts apply to quantum information. While the fundamental unit of classical information is the bit, in quantum information it is the qubit.
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?

Regards,
Buzz
 
  • #29
DarMM said:
It's not enough that something seems to solve the problem. If you try, an unnatural fine-tuning will emerge.
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete.
Buzz Bloom said:
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?
Read this book.
Quantum information is a horrible misnomer, it is not a form of information in the Shannon information theoretic/signal processing sense i.e. the known and universally accepted definition of information from mathematics and computer science.

This fully explains why entanglement doesn't work by faster than light signalling, i.e. it isn't transmitting information in the first place, but something else. It is unfortunate this something else can be easily referred to colloquially as information as well, which is exactly what happened when someone came up with the term.

The continued usage is as bad as, if not worse than, laymen confusing the concept of velocity with that of force, especially because computer scientists/physicists actually came up with the name!
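To make the no-signalling point concrete, here is a short check (my own illustration, standard textbook quantum mechanics): whatever basis Bob measures his half of a Bell pair in, Alice's unconditioned reduced state stays maximally mixed, so no locally measurable quantity on her side carries any information about Bob's choice.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix, qubit order (Alice, Bob)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def alice_state_after_bob_measures(rho, theta):
    """Bob measures in a basis rotated by theta; return Alice's reduced state
    averaged over Bob's outcomes (i.e. without being told the result)."""
    c, s = np.cos(theta), np.sin(theta)
    basis = [np.array([c, s], dtype=complex), np.array([-s, c], dtype=complex)]
    rho_after = np.zeros_like(rho)
    for b in basis:
        P = np.kron(np.eye(2), np.outer(b, b.conj()))   # project Bob's qubit only
        rho_after += P @ rho @ P
    r = rho_after.reshape(2, 2, 2, 2)                   # indices: a, b, a', b'
    return np.trace(r, axis1=1, axis2=3)                # partial trace over Bob

for theta in (0.0, 0.7, 1.3):
    print(np.round(alice_state_after_bob_measures(rho, theta), 6))
# always [[0.5, 0], [0, 0.5]]: Bob's basis choice is invisible to Alice's local statistics
```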
 
  • #30
Auto-Didact said:
Read this book.
Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date: 01 Sep 2002
Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129​
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz
 
Last edited:
  • #31
Auto-Didact said:
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete
Well these aren't just numbers; unless fine-tuned, realist models will have their unusual features become noticeable, i.e. in retrocausal theories if you don't fine tune them then the retrocausal signals are noticeable and usable macroscopically, and similarly for nonlocal theories. This could be correct, but it's something to keep in mind. It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc) signalling for these theories outside very specific initial conditions.

Auto-Didact said:
No one ever said it would as is immediately reproduce QM fully, but instead that it gives an explanation for where quantization itself comes from, which implies QM is not the fundamental theory of nature.
There are a few models that do that.
Auto-Didact said:
Complaining that a preliminary model is wrong or not worth considering because it explains the origin of some phenomenon without also fully reproducing that phenomenon is making a serious category error.
I think this is overblown, I'm not saying it shouldn't be considered, I'm just saying that the features of QM it does solve (e.g. measurement problem, quantisation) are easily done, even in toy models. It would be the details of how it explains entanglement that would need to be seen and in advance we know it will involve fine-tuning in its initial conditions. Whether that is okay/worth it could then be judged in light of all the other features it may have. What I was discussing is that "solving" entanglement is known to take much more than this and have unpleasant features.
 
Last edited:
  • Like
Likes Auto-Didact
  • #32
Buzz Bloom said:
Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date 01 Sep 2002
Publisher Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129​
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz
It's been a while, I can't remember exactly. What I do remember however is that the book is definitely worth reading. It isn't merely some book on QM foundations, but a book on quantum information theory and a partial biography of John Bell as well. Just check the list of authors if you feel you need any convincing. In any case, check your conversations.
DarMM said:
It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc) signalling for these theories outside very specific initial conditions.
Might as well just say superluminal signalling etc.; referring to these problems as fine-tuning is another very unfortunate misnomer, especially given the far more familiar fine-tuning arguments for life/earth/the universe etc.

Btw, I am actively keeping in mind what you are calling fine-tuning problems insofar as I'm aware of them. This is my current main go-to text for trying to see what a new theory needs to both solve and take into account w.r.t. the known issues in the foundations of QM, and this is the text which in my opinion best explains how the "nonlinear reformulation of QM" programme is trying to solve the above problem, which moreover uses a specific kind of preliminary prototype model illustrating the required mathematical properties.
DarMM said:
There are a few models that do that.
Some references would be nice, pretty much every other model/theory I have ever seen beside this was obviously wrong or completely unbelievable (in the bad sense of the word).
 
  • #33
stevendaryl said:
But "regular hidden variable" theory INCLUDES "extremely non-linear" systems. Bell's notion of a hidden-variables theory allows arbitrary interactions, as long as they are local. Nonlinearity is not ruled out.

(Actually, "nonlinear" by itself doesn't mean anything. You have to say what is linear in what.)
You are right, non-linear was the wrong phrase (which I realized and changed, but too late). I was trying to give a quick answer.

Bell's theorem is about probabilities, and my view is that any P-measure, or system of P-measures, is necessarily conditional upon, or even identified with, an observer, and I of course take an observer dependent Bayesian view on P-measures (with observer here read as particle, as a generalisation of a measurement device, not the human scientist). In my view the generalized notion of observer is NOT necessarily a classical device; that is the twist. And the P-measures are hidden in the sense that no other observer can observe the naked expectations of another observer, and there is no simple renormalization scheme you can use either. This comparison is simply indistinguishable from normal physical interaction. One observer can only try to abduce the naked expectations of another system by means of its observed actions, from the perspective of the other observer.

This is loosely analogous (given that analogies are never perfect) to how geometry guides matter, and matter evolves geometry. What we have here is an evolutionary process where theory (as encoded in a particle's internal structure) guides the action of the particles, but the actions of the population of particles similarly evolve the theory. If you complain this is not precise enough mathematically, that's correct, but I am trying to save the vision here, despite the admittedly incomplete and even confusing and almost contradictory details.

It is this evolution of law - as identified with the tuning of elementary particles - that informally can be thought of as a random walk in a similarly evolving theory space, that is self-organising. The task is to find the explicit details here, and show that there are stable preferred attractors, and that these correspond to the standard model. IF this totally fails, then we can dismiss this crazy idea, but not sooner I think.

Once we are at the attractor, we have business as usual with symmetries etc. I am not suggesting we restore realism, nor do I suggest simple self-organising classical chaos to explain QM! That is not enough, agreed, but that is not what I mean.

/Fredrik
 
  • #34
stevendaryl said:
Bell's theorem doesn't make any assumptions about complexity.

I agree that what will not work is any underlying observer-invariant classical probability model, however crazy and nonlinearly chaotic, in which transitions follow some simple conditional probability. This will not work because the whole idea of an observer-independent probability space is deeply confused.

My opinion is instead that each interacting subsystem implicitly encodes its own version of the P-spaces. Such models are, to my knowledge, not excluded by Bell's theorem, because the P-measures used in the theorem are then not fixed - they are evolving - and one has to define which observer is making the Bell inferences.

So the conjecture is not to explain QM as a classical HV model (no matter how chaotic) where the experimenter is simply ignorant of the hidden variables. The conjecture would be to explain QM in terms of interacting information-processing agents (elementary particles, to refer to the paper) that self-organize their "P-spaces" to reflect maximal stability. Any interaction between two systems takes place at two levels: a regular residual interaction, in which observers have evolved to agree to disagree but both remain stable, and a more destructive level which evolves the P-measures themselves. QM as we know it should be emergent as the residual interactions, but the evolutionary mechanisms are what is needed to understand unification. I.e. the KEY is to include the "observer", the encoder of the expectations, in the actual interactions.

But with this said, the link to the original paper I connected to was that, in an approximate sense, one can probably "explain" an elementary particle as an evolved information-processing agent in a chaotic environment. Here the chaos is relevant because it reflects the particle's insufficient computational capacity to decode its environment, and this fact determines its properties - or so goes the conjecture. There is no actual model for this yet.

I feel I may be drifting a bit here, but my only point in this thread was to support a kind of "hidden variable" model, where the hidden variables are really just the observer-dependent information, so it does not have the structure of classical realism that is rejected by Bell's theorem. Such a model will then have generic traits such as being evolved, and the exact symmetries we are used to would correspond to attractors - not attractors in a simple fixed theory space, but attractors in an evolving theory space. This latter point is key, as otherwise we run into all kinds of fine-tuning problems well known to any Newtonian schema.

Sorry for the rambling; I am on my way off air for some time, so I will not interfere more over the next few days.

/Fredrik
 
  • #35
Auto-Didact said:
Might as well just say superluminal signalling etc.; referring to these problems as fine-tuning is another very unfortunate misnomer, especially given the far more familiar fine-tuning arguments for life/earth/the universe etc.
Fine-tuning has long been used for both initial condition tuning and parameter tuning; I don't think parameter tuning has any special claim on the phrase. Besides, it's standard usage in Quantum Foundations to refer to this as "fine-tuning", and I prefer to use terms as they are used in the relevant fields.

It couldn't be called "superluminal signalling", as the fine-tuning is the solution to why we don't observe superluminal (or retrocausal, etc.) signalling at macroscopic scales in realist models.

Auto-Didact said:
Some references would be nice; pretty much every other model/theory I have ever seen besides this one was obviously wrong or completely unbelievable (in the bad sense of the word).
Well, a simple toy model showing that a huge number of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model - there are much more developed ones - but you can see how easy it is to replicate a huge amount of QM, except for entanglement, which is why entanglement is the key feature one has to explain.
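For readers who want to see concretely what an "epistemic limit" buys you, here is a minimal sketch loosely in the spirit of the single-system sector of the toy model linked above (the three-observable table, the function names and the update rule are my own simplification, not code from the paper): maximal knowledge is always two ontic states out of four, and a measurement both updates that knowledge and disturbs the ontic state.

```python
# Illustrative sketch (not from the linked paper): a single "toy bit" in the
# spirit of an epistemically restricted classical theory. The ontic state is
# one of {1, 2, 3, 4}, but an observer's maximal knowledge is always a
# 2-element subset (the "epistemic state"). Measurements are 2-cell partitions
# of the ontic space; an outcome updates the epistemic state to the observed
# cell and disturbs the ontic state by re-randomizing it within that cell.
import random

MEASUREMENTS = {                      # analogues of three "complementary" observables
    "Z": ({1, 2}, {3, 4}),
    "X": ({1, 3}, {2, 4}),
    "Y": ({1, 4}, {2, 3}),
}

def prepare(epistemic):
    """Sample an ontic state compatible with a given epistemic state."""
    return random.choice(sorted(epistemic))

def measure(name, ontic):
    """Return (outcome, new_ontic, new_epistemic) for a toy measurement."""
    cells = MEASUREMENTS[name]
    outcome = 0 if ontic in cells[0] else 1
    cell = cells[outcome]
    return outcome, random.choice(sorted(cell)), cell   # disturbance + "collapse"

# Repeated Z measurements are reproducible, but an intervening X measurement
# randomizes later Z outcomes - a qualitative analogue of incompatibility.
random.seed(0)
z_zero, z_repeats = 0, 0
for _ in range(10_000):
    ontic = prepare({1, 2})                      # "Z = 0" preparation
    z1, ontic, _ = measure("Z", ontic)
    _, ontic, _ = measure("X", ontic)
    z2, ontic, _ = measure("Z", ontic)
    z_zero += (z1 == 0)
    z_repeats += (z1 == z2)
print(f"P(Z=0 right after preparation) ~ {z_zero / 10_000:.2f}")        # ~1.00
print(f"P(Z repeats after an X measurement) ~ {z_repeats / 10_000:.2f}")  # ~0.50
```

Repeated measurements of the same toy observable are reproducible, while an intervening incompatible measurement randomizes the outcome - a qualitative analogue of quantised outcomes, "collapse" and complementarity, with no entanglement anywhere in sight.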
 
  • #36
stevendaryl said:
Bell's theorem doesn't make any assumptions about whether the dynamics is self-organizing, or not.
Bell's theorem assumes the absence of superdeterminism. I wonder, could self-organization perhaps create some sort of superdeterminism? In fact, I think that 't Hooft's proposal can be understood that way.
 
  • Like
Likes eloheim and Auto-Didact
  • #37
This article's train of thought regarding 1/2 -> 1 -> 2 spin particles and their coupling leads to a prediction that graviton's coupling should be ~4 times stronger than color force. This is obviously not the case.

Just observing that families of particles seem to "bifurcate" when we look at their various properties seems to be too tenuous a reason to apply dissipative reasoning.
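For reference, the universal constant being invoked throughout this thread can be computed in a few lines; the "~4 times" figure above appears to correspond to one factor of the Feigenbaum constant δ ≈ 4.669 per level. The sketch below is my own illustration and makes no claim about the paper's actual coupling formula: it estimates δ from the logistic map by locating successive superstable parameters with Newton's method.

```python
# Numerically estimate the Feigenbaum constant delta ~ 4.669 from the logistic
# map x -> r*x*(1-x), by locating the "superstable" parameters R_k at which the
# critical point x = 0.5 lies on a cycle of period 2**k. The ratio of
# successive parameter intervals converges to delta.

def cycle_and_derivative(r, k):
    """Return f_r^(2^k)(0.5) - 0.5 and its derivative with respect to r."""
    x, dxdr = 0.5, 0.0
    for _ in range(2 ** k):
        dxdr = x * (1.0 - x) + r * (1.0 - 2.0 * x) * dxdr   # chain rule in r
        x = r * x * (1.0 - x)
    return x - 0.5, dxdr

def superstable(k, r_guess, steps=50):
    """Newton iteration for the superstable parameter of the period-2^k cycle."""
    r = r_guess
    for _ in range(steps):
        g, dg = cycle_and_derivative(r, k)
        r -= g / dg
    return r

R = [2.0, 1.0 + 5 ** 0.5]   # exact superstable parameters for periods 1 and 2
delta = 4.7                 # rough starting guess, used only to extrapolate R_2
for k in range(2, 12):
    guess = R[-1] + (R[-1] - R[-2]) / delta
    R.append(superstable(k, guess))
    delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
    print(f"period 2^{k:<2d}  R_k = {R[-1]:.10f}   delta estimate = {delta:.7f}")
# The estimates approach 4.6692016..., independent of the particular map chosen.
```

The point of Feigenbaum universality is that the same δ comes out of any one-dimensional map with a quadratic maximum, which is why the paper can invoke it without committing to a specific microscopic model.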
 
  • #38
Auto-Didact said:
QM itself can essentially be viewed as a non-local theory; this is what Bell's theorem shows.

Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.
 
  • #39
DarMM said:
Fine-tuning has long been used for both initial condition tuning and parameter tuning; I don't think parameter tuning has any special claim on the phrase. Besides, it's standard usage in Quantum Foundations to refer to this as "fine-tuning", and I prefer to use terms as they are used in the relevant fields.
I don't doubt that, but I think you are missing the point that the other usage of fine tuning is old, centuries old. Newton himself even used the same fine tuning argument to argue that the three body problem was insoluble due to infinite complexity and that therefore the mechanistic universe must be the work of God. The same arguments have been used in biology from Darwin to this very day.

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
DarMM said:
Well, a simple toy model showing that a huge number of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model - there are much more developed ones - but you can see how easy it is to replicate a huge amount of QM, except for entanglement, which is why entanglement is the key feature one has to explain.
I understand that this toy model is, or may just be, some random example, but I seriously think a few key points are in order. I will start by making clear that my following comments regard mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between these kind of theoretical models and a theoretical model like Manasson's. This can be seen at multiple levels:
- First, the easiest way to spot this difference is to compare the underlying mathematics of the old and new models: the mathematics of this new model (causal discovery analysis, a variant of root cause analysis) is very close to the underlying mathematics of QM, while the mathematics underlying Manasson's model is almost diametrically opposed to the mathematics underlying QM.
- The second point is the focus of a new model - due to its underlying mathematics - on either accuracy or precision: similar underlying mathematics between models tends to lead quickly to good precision without necessarily being accurate, while a novel model based in completely different mathematics - yet still capable of reproducing the results of an older model - initially has to focus on accuracy before focusing on precision.
- The third - and perhaps most important - point is the conceptual shift required to go from the old to the new model; if, apart from the mathematics, the conceptual departure from old to new isn't radical, then the new model isn't likely to be able to go beyond the old. This is actually a consequence of the first and second points: a small conceptual difference with high precision is easily constructed but tends to have low accuracy and is therefore easily falsified. On the other hand, it is almost impossible for hugely different models to lead to similar consequences by accident, meaning both models are accurate, with the older typically being more precise than the newer, at least until the newer matures and either replaces the old or gets falsified.

To illustrate these points further we can again use the historical example of going from Newtonian gravity to Einsteinian gravity; all three points apply there quite obviously. I won't go into that example any further, seeing as there are tonnes of threads and books on this topic, e.g. MTW's Gravitation.

What I do need to say is that the above mentioned differences are important for any new mathematical model of some empirical phenomenon based in scientific reasoning, not just QM; I say this because there is another way to create a new mathematical model of an empirical phenomenon, namely by making an analogy based on similar mathematics. A (partially) successful new model using an analogy based on similar mathematics usually tends to be only incrementally different or evolutionary, while a successful new model based on scientific reasoning tends to be revolutionary.

Evolution of a model merely requires successful steps of cleverness, while revolution requires nothing short of genius and probably a large dose of luck, i.e. being in the right place at the right time. This is the problem with all psi-epistemic models; they are practically all incrementally different or a small evolution in terms of being mathematically cleaner than the old model - which is of course why they are available a dime a dozen. It takes hardly any mathematical insight or scientific creativity at all to make one. For new QM models, this is because such models tend to be based in probability theory, information theory, classical graph theory and/or linear algebra. These topics in mathematics are in comparison with say geometry or analysis relatively "sterile" (not quantitatively in applications but qualitatively in mathematical structure).

All of these critique points w.r.t. theorisation of empirically based scientific models do not merely apply to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so much of such models and practically none of the other; making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.

When I said earlier, that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models because they tend to be nowhere near even interesting enough to consider seriously as a candidate that will possibly supersede QM. Sure, such a model may even almost directly have way more applications; that however is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.

Make no mistake; the foundational goal of reformulating QM based on another model is not to find new applications but to go beyond QM; based on all psi-ontic attempts so far this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training; one can almost literally blindly open any textbook on statistics, decision theory, operations research and/or data science and find some existing method which one could easily strip down to its mathematical core and try to construct an incrementally different model of QM.

So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
 
  • #40
nikkkom said:
This article's train of thought regarding 1/2 -> 1 -> 2 spin particles and their coupling leads to a prediction that graviton's coupling should be ~4 times stronger than color force. This is obviously not the case.
It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.
nikkkom said:
Just observing that families of particles seem to "bifurcate" when we look at their various properties seems to be a too tenuous reason to apply dissipative reasoning.
Bifurcating particle taxonomy isn't the reason to apply dissipative reasoning; the reason is rather virtual particles, grounded in the Heisenberg uncertainty principle.

The very concept of virtual particles implies an open, i.e. dissipative, system, and therefore perhaps the necessity of a non-equilibrium thermodynamics approach à la [URL='https://www.physicsforums.com/insights/author/john-baez/']John Baez.[/URL]
Lord Jestocost said:
Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.
Your conclusion is incorrect. If local hidden variables cannot reproduce QM predictions, non-local hidden variables might still be able to, i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying again that QM - or a completion of QM - is itself in some sense inherently non-local. This was indeed Bell's very own point of view.

None of this is new; it is well known in the literature that entanglement is, or can be viewed as, a fully non-local phenomenon. Moreover, as you probably already know, there is actually a very well-known explicitly non-local hidden variable theory, namely Bohmian mechanics (BM), which fully reproduces the predictions of standard QM; in terms of QM interpretation, this makes BM a psi-ontic model which actually goes beyond QM.
 
  • #41
Auto-Didact said:
I don't doubt that, but I think you are missing the point that the other usage of fine tuning is old, centuries old...

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.

Auto-Didact said:
I understand that this toy model is, or may just be, some random example, but I seriously think a few key points are in order. I will start by making clear that my following comments regard mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between these kind of theoretical models and a theoretical model like Manasson's. This can be seen at multiple levels:
You're treating this like a serious proposal; remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial, and yet it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc can be done quite easily and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.

Auto-Didact said:
All of these critique points w.r.t. theorisation of empirically based scientific models do not merely apply to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so much of such models and practically none of the other; making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.
There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.

Auto-Didact said:
When I said earlier, that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models because they tend to be nowhere near even interesting enough to consider seriously as a candidate that will possibly supersede QM. Sure, such a model may even almost directly have way more applications; that however is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.

Auto-Didact said:
Make no mistake; the foundational goal of reformulating QM based on another model is not to find new applications but to go beyond QM; based on all psi-ontic attempts so far this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training
Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.

Auto-Didact said:
one can almost literally blindly open any textbook on statistics, decision theory, operations research and/or data science and find some existing method which one could easily strip down to its mathematical core and try to construct an incrementally different model of QM.
I don't think so, again not in light of the PBR theorem.

Auto-Didact said:
So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
This is what I am saying:
  1. Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy
  3. However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
One of the points in my previous posts tells you that I can't give you what you're asking for here, because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical regarding these sorts of models; we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics, however we now know that these features of Bohmian Mechanics are general to all such models.

The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.

Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
 
  • Like
Likes dextercioby and Auto-Didact
  • #42
Auto-Didact said:
> This article's train of thought regarding 1/2 -> 1 -> 2 spin particles and their coupling leads to a prediction that graviton's coupling should be ~4 times stronger than color force. This is obviously not the case.

It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.

It did this for color force, here:

[attached image: w.png - the relevant excerpt from the paper]


Why should the same not apply to "the next next level" of gravitons?
 

  • #43
nikkkom said:
It did this for color force, here:

View attachment 233220

Why should the same not apply to "the next next level" of gravitons?
The question is 'why should it?' You seem to be reading this particular bit without controlling for your cognitive expectation bias, i.e. you are assuming, because quantization of gravity is a standard hypothesis in many models, that it is also a hypothesis of this model.

It is pretty clear that this model is compatible with either hypothesis w.r.t. gravitation. That is to say, the model is completely independent of whether or not gravity is quantized in the same manner as the rest of the forces in physics, i.e. following the standard form of quantization in particle physics.

This is bolstered by the fact that this is a phenomenological model, i.e. it is constructed only upon empirically observed phenomena. The form of quantization this model is attempting to explain is precisely the form known from experimental particle physics; no experiment has ever suggested that gravity is also quantized in this manner.

In contrast to common perception, the mathematical physics and quantum gravity phenomenology literatures give, respectively, very good mathematical and empirical arguments for believing that this hypothesis is false to begin with; this wouldn't necessarily mean that gravitation is not quantized at all, but that if it is, it is probably not quantized in exactly the same manner as the other forces, making any conclusion that it probably is at worst completely misguided and at best highly premature, because it is non-empirical.
 
  • #44
Auto-Didact said:
Your conclusion is incorrect. If local hidden variables cannot reproduce QM predictions, non-local hidden variables might still be able to, i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying again that QM - or a completion of QM - is itself in some sense inherently non-local.

Bell's theorem might imply that a “non-local realistic theory” could predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
 
  • #45
Lord Jestocost said:
Bell's theorem might imply that a “non-local realistic theory” could predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
Non-local hidden variable theories are a subset of non-local realistic theories, i.e. this discussion is moot.

The non-locality of QM - i.e. the non-local nature of entanglement - has been in the literature since Schrodinger himself.
Aspect concluded in 2000 that there is experimental support for the non-locality of entanglement, saying
Alain Aspect said:
It may be concluded that quantum mechanics has some nonlocality in it, and that this nonlocal character is vindicated by experiments [45]. It is very important, however, to note that such a nonlocality has a very subtle nature, and in particular that it cannot be used for faster-than-light telegraphy. It is indeed simple to show [46] that, in a scheme where one tries to use EPR correlations to send a message, it is necessary to send complementary information (about the orientation of a polarizer) via a normal channel, which of course does not violate causality. This is similar to the teleportation schemes [47] where a quantum state can be teleported via a nonlocal process provided that one also transmits classical information via a classical channel. In fact, there is certainly a lot to understand about the exact nature of nonlocality, by a careful analysis of such schemes [48].

When it is realized that this quantum nonlocality does not allow one to send any useful information, one might be tempted to conclude that in fact there is no real problem and that all these discussions and experimental efforts are pointless. Before rushing to this conclusion, I would suggest an ideal experiment done in the following way is considered (Fig. 9.17): On each side of the experiment of Fig. 9.1, using variable analysers, there is a monitoring system that registers the detection events in channels + or - with their exact dates. We also suppose that the orientation of each polarizer is changed at random times, also monitored by the system of the corresponding side. It is only when the experiment is completed that the two sets of data, separately collected on each side, are brought together in order to extract the correlations. Then, looking into the data that were collected previously and that correspond to paired events that were space-like separated when they happened, one can see that indeed the correlation did change at the very moment when the relative orientation of the polarizers changed.

So when one takes the point of view of a delocalized observer, which is certainly not inconsistent when looking into the past, it must be acknowledged that there is nonlocal behaviour in the EPR correlations. Entanglement is definitely a feature going beyond any space time description a la Einstein: a pair of entangled photons must be considered to be a single global object that we cannot consider to be made of individual objects separated in spacetime with well-defined properties.
Referenced sources are:
[45] J.S. Bell, Atomic cascade photons and quantum-mechanical nonlocality, Comm. Atom. Mol. Phys. 9, 121 (1981)

[46] A. Aspect, Expériences basées sur les inégalités de Bell, J. Phys. Colloq. C2, 940 (1981)

[47] C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W.K. Wootters, Phys. Rev. Lett. 70, 1895 (1993)
D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Experimental quantum teleportation, Nature 390, 575 (1997)
D. Boschi, S. Branca, F. De Martini, L. Hardy, S. Popescu, Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels, submitted to Phys. Rev. Lett. (1997)
A. Furusawa, J.L. Sørensen, S.L. Braunstein, C.A. Fuchs, H.J. Kimble, E.S. Polzik, Unconditional quantum teleportation, Science 282, 706 (1998)

[48] S. Popescu, Bell’s inequalities versus teleportation: what is non-locality?, Phys. Rev. Lett. 72, 797 (1994)
 
  • #46
Every theory can be reproduced by a non-local model, but that doesn't mean every theory is non-local. Say you have a computer which measures the temperature once a second and outputs the difference from the previous measurement. You can build a non-local model for this phenomenon by storing the previous measurement at a remote location, which must be accessed on each iteration.

Does that make this a non-local phenomenon? Clearly not - since you can also model it by storing the previous measurement locally. To show that QM is non-local, you need to show that it can't be reproduced by any local model, even one with multiple outcomes. Bell's theorem doesn't do that; it requires additional assumptions.
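A minimal sketch of the point (my own illustration; the "remote store" is deliberately artificial): both models below reproduce the same observed outputs, so the outputs alone cannot establish non-locality.

```python
# Two models of the same observed process: a sensor that reports the difference
# between the current and previous temperature reading. One keeps the previous
# reading locally; the other fetches it from a "remote" store on every step.
# The outputs are identical, so the phenomenon itself is not thereby non-local.

readings = [20.0, 20.5, 20.4, 21.0, 20.8]     # made-up temperature samples

def local_model(samples):
    previous = None
    out = []
    for t in samples:
        out.append(0.0 if previous is None else t - previous)
        previous = t                           # state stored locally
    return out

REMOTE = {}                                    # stands in for storage "far away"

def nonlocal_model(samples):
    REMOTE.clear()
    out = []
    for t in samples:
        previous = REMOTE.get("last")          # must be fetched remotely each step
        out.append(0.0 if previous is None else t - previous)
        REMOTE["last"] = t
    return out

assert local_model(readings) == nonlocal_model(readings)
print([round(d, 2) for d in local_model(readings)])   # [0.0, 0.5, -0.1, 0.6, -0.2]
```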

There is a very confusing thing some physicists do which is to use the phrase "non-locality" to mean something called "Bell non-locality", which isn't the same thing at all.
 
  • Like
Likes Auto-Didact
  • #47
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
 
  • #48
Lord Jestocost said:
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
As I mentioned upthread, it's not really a choice between locality and realism alone; the assumptions are:
  1. Single Outcomes
  2. Lack of super-determinism
  3. Lack of retrocausality
  4. Presence of common causes
  5. Decorrelating Explanations (combination of 4. and 5. normally called Reichenbach's Common Cause principle)
  6. Relativistic causation (no interactions beyond light cone)
You have to drop one, but locality (i.e. Relativistic Causation) and realism (Decorrelating Explanations) are only two of the possibilities.
 
  • #49
DarMM said:
The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (1800s until early 1900s). But let's not turn this into a measuring contest any further than it already is, lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. The paper that you linked, however, defines fine-tuning on page 9 again exactly as parameter fine-tuning, i.e. the same definition that I am using...
DarMM said:
You're treating this like a serious proposal; remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial, and yet it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc can be done quite easily and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.
Yes, you are correct: I'm approaching the matter somewhat seriously. It is a topic I am truly passionate about and one I really want to see an answer found for. This is for multiple reasons, most importantly:

I) following the psi-ontic literature for the last few years, I have come across a few mathematical schemes which seem to be 'sectioned off' parts of full theories. These mathematical schemes (among others, twistor theory and spin network theory) themselves aren't actually full physical theories - exactly like AdS/CFT isn't a full theory - but simply possibly useful mathematical models of particular aspects of nature based in experimental phenomenology, i.e. these schemes are typically models based in phenomenology through the use of very particular not-necessarily-traditional mathematics for physicists.

II) these schemes all have in common that they are - taken at face value - incomplete frameworks of full physical theories. Being based mostly in phenomenology, they therefore tend to be consistent with the range of experiments performed so far at least and yet - because of their formulation using some particular nonstandard mathematics - they seem to be capable of making predictions which agree with what is already known but might disagree with what is still unknown.

III) to complete these theories - i.e. what needs to be added to these mathematical schemes in order to transform them into full physical theories - what tends to be required is the addition of a dynamical model which can ultimately explain some phenomenon using dynamics. QM in the psi-ontic view is precisely such a mathematical scheme which requires completion; this is incidentally what Einstein, Dirac et al. meant by saying QM - despite its empirical success - cannot be anything more than an incomplete theory and therefore ultimately provisional instead of fundamental.

IV) there actually aren't that many psi-ontic schemes which have been combined with dynamic models transforming them into completed physical theories. Searching for the correct dynamical model - which isn't obviously incorrect (NB: much easier said than done) - given some particular scheme therefore should be a productive Bayesian strategy for identifying new promising dynamical theories and hopefully ultimately finding a more complete novel physical theory.

I cannot stress the importance of the above points - especially points III and IV - enough; incidentally, Feynman vehemently argued for practicing theory (or at least that he himself practiced theory) in this way. This is essentially the core business of physicists looking for psi-ontic foundations of QM.
DarMM said:
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.
I recently made this very argument in another thread, so I'll just repost it here: there is a larger theme in the practice of theoretical science, where theoretical calculations done using highly preliminary models of some hypothesis, prior to any experiment being done or even possible, lead to very strong claims against that hypothesis.

These strong claims against the hypothesis then often turn out to be incorrect, because they rest on - mathematically speaking - seemingly trivial assumptions which are actually, conceptually - i.e. if understood correctly in physical terms - clearly unjustifiable. The problem is then that a hypothesis can be discarded prematurely by taking the predictions of toy models of said hypothesis at face value; a false positive falsification, if you will.

This seems to frequently occur when a toy model of some hypothesis is a particular kind of idealization which is actually a completely inaccurate representation of the actual hypothesis, purely due to the nature of the particular idealization itself.
DarMM said:
There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.
W.r.t. the large number of psi-epistemic models, scroll down and see point 1).
DarMM said:
Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.
It is only difficult if you want to include entanglement, i.e. non-locality. Almost all psi-epistemic models don't do this, making them trivially easy to construct. I agree that psi-ontic models, definitely after they have passed the preliminary stage, need to include entanglement.

In either case, a general remark on these no-go theorems is in order: Remember that these "proofs" should always be approached with caution - recall how von Neumann's 'proof' literally held back progress in this very field for decades until Bohm and Bell showed that his proof was based on (wildly) unrealistic assumptions.

The fact of the matter is that the assumptions behind the proofs of said theorems may actually be unjustified when given the correct conceptual model, invalidating their applicability as in the case of von Neumann. (NB: I have nothing against von Neumann, I might possibly even be his biggest fan on this very forum!)
DarMM said:
I don't think so, again not in light of the PBR theorem.
Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? This implies that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
DarMM said:
This is what I am saying:
  1. Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy
  3. However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
1) The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
2) Fully agreed if we are talking about psi-epistemic models. Disagree or do not necessarily agree for psi-ontic models, especially not in the case of Manasson's model which lacks a non-local scheme.
3) Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
Even worse; even if they did apply to realistic models (i.e. psi-ontic models) they would only apply to a certain subset of all possible realist models, not all possible realist models. To then assume based on this that all realist models are therefore unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
DarMM said:
One of the points in my previous posts tells you that I can't give you what you're asking for here, because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical regarding these sorts of models; we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics, however we now know that these features of Bohmian Mechanics are general to all such models.
I understand your reservations and that it may seem strange that I appear to be arguing against what seems most likely. The thing is that I am - in contrast to how most physicists usually judge the likelihood of a theory being correct - arguing and judging using a very different interpretative methodology to the one popular in the practice of physics, one in which (priorly-assumed-to-be) low probability events can actually become more likely, given conditional adherence to certain criteria. In other words, I am consciously using Bayesian reasoning - instead of frequentist reasoning - to evaluate how likely it is that particular theories are correct, because I have realized that these probabilities are actually degrees of belief, not statistical frequencies.

I suspect that approaching the likelihood of the correctness of a theory w.r.t. open problems with very little empirics using frequentist reasoning is highly misleading and possibly itself a problematic phenomenon - literally fueling the bandwagon effect among theoretical physicists. This characterization seems to apply to most big problems in the foundations of physics; among others, the problem of combining QM with GR, the measurement problem and the foundational problems of QM.

While foundational problems seem able to benefit strongly from adopting a Bayesian strategy for theory construction, open problems in non-foundational physics on the other hand do tend to be easily solvable using frequentist reasoning. I suspect that this is precisely where the high confidence in frequentist reasoning w.r.t. theory evaluation among most physicists stems from: it is the only method of practical probabilistic inference they have learned in school.
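Purely to illustrate the kind of bookkeeping I mean (a toy calculation with invented numbers, not an actual assessment of any model): a proposal with a low prior can end up with a substantial posterior once it survives several independent checks that a generic competitor would likely fail.

```python
# Toy Bayesian bookkeeping for theory evaluation (illustrative numbers only).
# prior: initial degree of belief that a "radical" psi-ontic model is on the
# right track; each "check" is evidence the model could easily have failed.
prior = 0.02

# (P(passes check | model correct), P(passes check | model incorrect))
checks = [
    (0.95, 0.30),   # e.g. reproduces charge/action quantization qualitatively
    (0.90, 0.20),   # e.g. recovers the observed coupling hierarchy
    (0.80, 0.10),   # e.g. survives a known no-go argument under scrutiny
]

posterior = prior
for p_pass_if_true, p_pass_if_false in checks:
    numerator = p_pass_if_true * posterior
    posterior = numerator / (numerator + p_pass_if_false * (1.0 - posterior))
    print(f"after a passed check: degree of belief = {posterior:.3f}")
# A flat "almost all such models fail" intuition never gets updated;
# the Bayesian bookkeeping does, which is the point being made above.
```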

That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature, but perhaps it is I who is mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long complicated texts; it is as if subtlety is somehow shunned or discarded at every turn in favor of explicitness. I suspect that this might be due to the fact that most physicists today do not have any training in philosophy or argumentative reasoning at all (in stark contrast to the biggest names such as Newton, Einstein and the founders of QM).

In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM: that it goes against accepted wisdom in contemporary physics in no way invalidates it, since it is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) as effects of fine-tuning on the basis of arguments of the form in c).

This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using/learning mathematics. This should become clearer through the following example: superdeterminism, superluminality and retrocausation would only necessarily be effects of fine-tuning given that causal discovery analysis is sufficient to explain the violation of Bell inequalities; Wood and Spekkens clearly state that this is false, i.e. that causal discovery analysis is insufficient to understand QM! (NB: see the abstract and conclusion of this paper.) Most important to understand is that they aren't actually effects of fine-tuning in principle!

Furthermore, Wood and Spekkens are through the same paper (page 27) clearly trying to establish a (toy model) definition of causality independent of temporal ordering - just like what spin network theory or causal dynamical triangulations already offer; this is known as background independence, something which I'm sure you are aware Smolin has argued for for years.

And as I argued before, Hossenfelder convincingly argues that fine-tuning isn't a real problem, especially w.r.t. foundational issues. The only way one can interpret the Wood-Spekkens paper as arguing against psi-ontic models is to argue against parameter fine-tuning and take the accepted wisdom of contemporary physics at face value - which can be interpreted as using Occam's razor. I will argue every time again that using Einstein's razor is the superior strategy.
DarMM said:
The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.
I'm pretty well aware of these ideas themselves being problematic taken at face value, which is exactly why I selectively exclude such ideas during preliminary theoretical modelling/evaluating existing models using Bayesian reasoning. I should say again though that retrocausality is only problematic if we are referring to matter or information, not correlation; else entanglement itself wouldn't be allowed either.
DarMM said:
Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
All theories derived from the Wheeler-DeWitt equation are acausal in this sense, as are models based on spin networks or twistors. I suspect some - or perhaps many - models which seem retrocausal may actually be reformulated as acausal, or worse, were actually acausal to begin with and were just misinterpreted as retrocausal due to some cognitive bias (a premature deferral to accepted wisdom in contemporary physics).

Btw, I'm really glad you're taking the time to answer me so thoroughly; this discussion has truly been a pleasure. My apologies if I come off as rude/offensive - I have heard that I tend to argue in a somewhat brash fashion the more passionate I get; to quote Bohr: "Every sentence I utter must be understood not as an affirmation, but as a question."
 
  • #50
Auto-Didact said:
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (1800s until early 1900s). But let's not turn this into a measuring contest any further than it already is, lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. The paper that you linked, however, defines fine-tuning on page 9 again exactly as parameter fine-tuning, i.e. the same definition that I am using...
Genuinely, I really don't get this line of discussion at all. I am not saying initial condition fine-tuning is an older concept (I know when Newton or Boltzmann lived) or that in Quantum Foundations they exclusively use fine-tuning to mean initial condition fine-tuning.

I am saying that fine-tuning has long been used to mean both in theoretical physics, and that Quantum Foundations, like many other areas, uses fine-tuning to mean both.

In the paper I linked they explicitly mean both, since "causal parameters" includes both initial conditions and other parameters, if you look at how they define it.

I really don't understand this at all; I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if using it for a subset of its current referents extends back further?

Auto-Didact said:
Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? This implies that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
No, it says that any Psi-Epistemic model obeying the ontological framework axioms and the principle of Preparation Independence for two systems cannot model QM.

Auto-Didact said:
The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
That's explicitly not true; coming up with Psi-Ontic models that reproduce the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact Psi-Ontic models end up naturally replicating all of QM; you don't even have the blockade with modelling entanglement that you have with Psi-Epistemic models.

Auto-Didact said:
Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
That's not what the theorem demonstrates; it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.

Auto-Didact said:
Even worse; even if they did apply to realistic models (i.e. psi-ontic models) they would only apply to a certain subset of all possible realist models, not all possible realist models. To then assume based on this that all realist models are therefore unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
Bohmian Mechanics needs to be fine-tuned (the Quantum Equilibrium hypothesis); it is known that out-of-equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see if that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning.

Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.

Auto-Didact said:
That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature, but perhaps it is I who is mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long complicated texts; it is as if subtlety is somehow shunned or discarded at every turn in favor of explicitness. I suspect that this might be due to the fact that most physicists today do not have any training in philosophy or argumentative reasoning at all
I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

See point nine of this slide:
http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

It's very easy to start talking about me and my comprehension; have you read the papers in depth yourself?
 
