# A Quantization isn't fundamental

#### DarMM

Science Advisor
Gold Member
Again chaotic dynamics doesn't have to get around any of that. I will try to answer each point, by stating what I am assuming/hypothesizing:

1) Analyticity/holomorphicity making single outcomes a necessity per the conventional uniqueness and existence arguments.
2) No assumption of superdeterminism.
3) No assumption of retrocausality of matter or information.
4 & 5) The physical common cause is the specific spacetime pathway connecting some EPR pair
6) All matter and information can only follow timelike and lightlike curves....

Notice that I am not saying that the above assumptions are correct per se, merely that they form a logically valid possibility which is, mathematically speaking, completely conceivable and possibly even directly constructible using existing mathematics.
This will not simulate non-classical correlations then. You either need superdeterminism, retrocausality (i.e. an actual information-carrying physical interaction moving back in time), nonlocality (actual interaction at a distance), rejection of Reichenbach's principle (specifically the no-decorrelating-explanation part), rejection of single outcomes, or rejection of the concept of the fundamental dynamics being mathematical (i.e. anti scientific realism).

It doesn't matter if the dynamics is chaotic, dissipative and more nonlinear than anything ever conceived; unless one of these things is true, Bell's theorem guarantees it will fail to replicate non-classical correlations.
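For concreteness, here is a minimal numerical illustration of that guarantee (an editorial sketch of a CHSH test, not any specific model from this thread): no matter how convoluted and chaotic the local response functions are made, a shared hidden variable with deterministic local outcomes can never push the CHSH quantity |S| above 2, while QM reaches 2√2.

```python
import math
import random

def response(setting, lam):
    # Deterministic *local* outcome: depends only on the local detector
    # setting and the shared hidden variable lam, fed through 50
    # iterations of the chaotic logistic map (arbitrarily nonlinear).
    x = (lam + setting) % 1.0
    for _ in range(50):
        x = 3.99 * x * (1 - x)
    return 1 if x > 0.5 else -1

def E(a, b, lams):
    # Correlator <A(a) * B(b)> averaged over the shared hidden variables.
    return sum(response(a, l) * response(b, l) for l in lams) / len(lams)

rng = random.Random(42)
lams = [rng.random() for _ in range(20000)]

a1, a2, b1, b2 = 0.0, 0.25, 0.125, 0.375  # arbitrary detector settings
S = E(a1, b1, lams) + E(a1, b2, lams) + E(a2, b1, lams) - E(a2, b2, lams)
# Per hidden variable the CHSH combination of four +/-1 outcomes is
# always exactly +/-2, so the average can never exceed 2 in magnitude.
print(abs(S), "<= 2; quantum bound:", 2 * math.sqrt(2))
```

Swapping in any other deterministic `response` function, however chaotic, cannot change the conclusion, which is exactly the point being made above.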

#### Auto-Didact

If that is your point of view then this doesn't follow:
it seems to me that self-organizing systems IS a regular hidden variable theory
'Regular' strange attractors are already infinitely complicated due to topological mixing. Supercycle attractors on the other hand, seem to increase the degree of complexity of this topological mixing by some arbitrarily high amount such that the entire space taken up by the dense orbits of the entire original strange attractor - after bifurcating in this particular way - 'collapses' onto a very particular set of discrete orbitals - becoming in the context of QM indistinguishable from discrete quantum orbits.
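The collapse being described can be illustrated with the simplest dissipative toy system, the logistic map (an editorial sketch, not from Manasson's paper): at a superstable ("supercycle") parameter value the long-run orbit collapses onto a small discrete set of points, whereas in the chaotic regime it stays spread out.

```python
import math

def orbit_points(r, x0=0.4, burn_in=2000, n=64):
    # Iterate the logistic map x -> r*x*(1-x), discard the transient,
    # then collect the next n points (rounded, so a stable cycle shows
    # up as a small finite set while a chaotic orbit stays spread out).
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    pts = set()
    for _ in range(n):
        x = r * x * (1 - x)
        pts.add(round(x, 6))
    return pts

chaotic = orbit_points(3.9)                  # chaotic regime: dense orbit
supercycle = orbit_points(1 + math.sqrt(5))  # superstable 2-cycle
print(len(chaotic), len(supercycle))  # many distinct points vs. exactly 2
```

At r = 1 + √5 the critical point x = 1/2 lies on the cycle, so the orbit lands on just two discrete values: a crude analogue of the "collapse onto discrete orbitals" described above.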

Last edited:

#### Auto-Didact

This will not simulate non-classical correlations then. You either need superdeterminism, retrocausality (i.e. an actual information-carrying physical interaction moving back in time), nonlocality (actual interaction at a distance), rejection of Reichenbach's principle (specifically the no-decorrelating-explanation part), rejection of single outcomes, or rejection of the concept of the fundamental dynamics being mathematical (i.e. anti scientific realism).

It doesn't matter if the dynamics is chaotic, dissipative and more nonlinear than anything ever conceived; unless one of these things is true, Bell's theorem guarantees it will fail to replicate non-classical correlations.
??? Did you miss that I specifically said that the entire scheme can consistently be made non-local using spin network theory or (the mathematics of) twistor theory?

Manasson's theory only explains quantisation; it isn't a theory of everything. Just adding spin networks to Manasson's preliminary model alone already seems to solve all the problems regarding being able to reproduce QM entirely.

#### stevendaryl

Staff Emeritus
Science Advisor
If that is your point of view then this doesn't follow:
'Regular' strange attractors are already infinitely complicated due to topological mixing. Supercycle attractors on the other hand, seem to increase the degree of complexity of this topological mixing by an arbitrarily high amount such that the space taken up by the dense orbits of an entire attractor - after bifurcating through self-organisation - 'collapses' onto a very particular set of discrete orbitals - in the context of QM becoming indistinguishable from discrete quantum orbits.

I think that you are misunderstanding my point. I don't care how complicated the dynamics are because Bell's theorem doesn't make any assumptions about complexity.

#### Auto-Didact

As I have stated multiple times now, consistently adding something like spin networks or twistor theory to Manasson's theory immediately makes the resulting theory non-local, thereby removing the complaints you have regarding Bell's theorem. I see no reason why this cannot be done.

#### DarMM

Science Advisor
Gold Member
??? Did you miss that I specifically said that the entire scheme can consistently be made non-local using spin network theory or (the mathematics of) twistor theory?

Manasson's theory only explains quantisation; it isn't a theory of everything. Just adding spin networks to Manasson's preliminary model alone already seems to solve all the problems regarding being able to reproduce QM entirely.
I saw it, but I was confining discussion to Manasson's theory explicitly, possible modifications are hard to discuss if they are not developed.

However, to discuss that: I get that it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (nonlocal, Many-Worlds, retrocausal and superdeterministic models) display fine-tuning problems, as shown in the Wood-Spekkens and Pusey-Leifer theorems for example. It's not enough that something seems to solve the problem; if you try it, an unnatural fine-tuning will emerge.

#### Auto-Didact

Manasson's theory is clearly preliminary; just because it has not yet reproduced entanglement or Bell inequalities doesn't mean that it is wrong or of no value whatsoever. It is way too early to expect that from the theory.

The fact that it - in its very preliminary form - seems to be able to directly reproduce so much (quantisation, spinors, coupling constants of strong/weak/EM, resolve measurement problem) using so little, is what one should be focusing on.

No one ever said it would, as is, immediately reproduce QM fully, but instead that it gives an explanation for where quantization itself comes from, which implies QM is not the fundamental theory of nature.

Complaining that a preliminary model - one which explains the origin of some phenomenon without also fully reproducing it - is wrong or not worth considering because it doesn't immediately reproduce the entire phenomenon is a serious categorical error. That would be analogous to a contemporary of Newton dismissing Newton and his work because Newton didn't invent a full theory of relativistic gravity and curved spacetime in one go.
However, to discuss that: I get that it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (nonlocal, Many-Worlds, retrocausal and superdeterministic models) display fine-tuning problems, as shown in the Wood-Spekkens and Pusey-Leifer theorems for example. It's not enough that something seems to solve the problem; if you try it, an unnatural fine-tuning will emerge.
Apart from the possible issue with fine-tuning, this part sounds thoroughly confused. QM itself can essentially be viewed as a non-local theory; this is what Bell's theorem shows. What I understood from Pusey and Leifer's paper was that QM may not just be non-local but may have an element of retrocausality as well, i.e. quantum information through entanglement can travel backwards in time while not being a form of signalling, i.e. quantum information not being information. How is this any different from what I am arguing for?

#### Buzz Bloom

Gold Member
this is a very unfortunate misnomer because quantum information is not a form of information!
Hi Auto-Didact:

I would appreciate it if you would elaborate on this concept. Wikipedia
says
In physics and computer science, quantum information is information that is held in the state of a quantum system. Quantum information is the basic entity of study in quantum information theory, and can be manipulated using engineering techniques known as quantum information processing. Much like classical information can be processed with digital computers, transmitted from place to place, manipulated with algorithms, and analyzed with the mathematics of computer science, so also analogous concepts apply to quantum information. While the fundamental unit of classical information is the bit, in quantum information it is the qubit.
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?

Regards,
Buzz

#### Auto-Didact

It's not enough that something seems to solve the problem. If you try an unnatural fine tuning will emerge.
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete.
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?
Read this book.
Quantum information is a horrible misnomer; it is not a form of information in the Shannon information-theoretic/signal-processing sense, i.e. the known and universally accepted definition of information from mathematics and computer science.

This fully explains why entanglement doesn't work by faster-than-light signalling: it isn't transmitting information in the first place, but something else. It is unfortunate that this something else can easily be referred to colloquially as information as well, which is exactly what happened when someone came up with the term.

The continued usage is as bad as, if not worse than, laymen confusing the concept of velocity with that of force, especially because computer scientists/physicists actually came up with the name!

#### Buzz Bloom

Gold Member
Read this book.
Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date: 01 Sep 2002
Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz

Last edited:

#### DarMM

Science Advisor
Gold Member
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete.
Well, these aren't just numbers: unless fine-tuned, realistic models will have their unusual features become noticeable. I.e. in retrocausal theories, if you don't fine-tune them, the retrocausal signals are noticeable and usable macroscopically; similarly for nonlocal theories. This could be correct, but it's something to keep in mind. It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc.) signalling for these theories outside very specific initial conditions.

No one ever said it would, as is, immediately reproduce QM fully, but instead that it gives an explanation for where quantization itself comes from, which implies QM is not the fundamental theory of nature.
There are a few models that do that.

Complaining that a preliminary model which explains the origin of some phenomenon without fully reproducing the phenomenon as well is wrong/not worth considering because it doesn't immediately reproduce the entire phenomenon is making a serious categorical error.
I think this is overblown. I'm not saying it shouldn't be considered; I'm just saying that the features of QM it does solve (e.g. the measurement problem, quantisation) are easily done, even in toy models. It would be the details of how it explains entanglement that would need to be seen, and in advance we know it will involve fine-tuning in its initial conditions. Whether that is okay/worth it could then be judged in light of all the other features it may have. What I was discussing is that "solving" entanglement is known to take much more than this and to have unpleasant features.

Last edited:

#### Auto-Didact

Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date: 01 Sep 2002
Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz
It's been a while; I can't remember exactly. What I do remember, however, is that the book is definitely worth reading. It isn't merely a book on QM foundations, but a book on quantum information theory and a partial biography of John Bell as well. Just check the list of authors if you feel you need any convincing. In any case, check your conversations.
It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc) signalling for these theories outside very specific initial conditions.
Might as well just say superluminal signalling etc.; referring to these problems as fine-tuning is another very unfortunate misnomer, especially given the far more familiar fine-tuning arguments for life/the Earth/the universe etc.

Btw, I am actively keeping in mind what you are calling fine-tuning problems, insofar as I'm aware of them. This is my current main go-to text for trying to see what a new theory needs to both solve and take into account w.r.t. the known issues in the foundations of QM, and this is the text which in my opinion best explains how the "nonlinear reformulation of QM" programme is trying to solve the above problem, which moreover uses a specific kind of preliminary prototype model illustrating the required mathematical properties.
There are a few models that do that.
Some references would be nice; pretty much every other model/theory I have ever seen besides this one was obviously wrong or completely unbelievable (in the bad sense of the word).

#### Fra

But "regular hidden variable" theory INCLUDES "extremely non-linear" systems. Bell's notion of a hidden-variables theory allows arbitrary interactions, as long as they are local. Nonlinearity is not ruled out.

(Actually, "nonlinear" by itself doesn't mean anything. You have to say what is linear in what.)
You are right, non-linear was the wrong phrase (which I realized and changed, but too late). I was trying to give a quick answer.

Bell's theorem is about probabilities, and my view is that any P-measure, or system of P-measures, is necessarily conditional upon, or even identified with, an observer; I of course take an observer-dependent Bayesian view on P-measures (with "observer" here, read particle, as a generalisation of a measurement device, not the human scientist). In my view the generalised notion of observer is NOT necessarily a classical device; that is the twist. And the P-measures are hidden in the sense that no other observer can observe the naked expectations of another observer, and there is no simple renormalization scheme you can use either; this comparison is simply indistinguishable from normal physical interaction. One observer can only try to abduce the naked expectations of another system by means of its observer actions, from the perspective of the other observer.

This is loosely analogous (given that analogies are never perfect) to how geometry guides matter, and matter evolves geometry. What we have here is an evolutionary process where theory (as encoded in a particle's internal structure) guides the action of the particles, but the action of the population of particles similarly evolves theory. If you complain this is not precise enough mathematically, that's correct; I am trying to save the vision here, in spite of the admittedly incomplete, even confusing and almost contradictory, details.

It's this evolution of law - identified with the tuning of elementary particles - that can informally be thought of as a random walk in a similarly evolving theory space that is self-organising. The task is to find the explicit details here, and to show that there are stable preferred attractors and that these correspond to the Standard Model. IF this totally fails, then we can dismiss this crazy idea, but not sooner, I think.

Once we are at the attractor, we have business as usual with symmetries etc. I am not suggesting restoring realism, nor do I suggest that simple self-organising classical chaos explains QM! That is not enough, agreed, but that is not what I mean.

/Fredrik

#### Fra

Bell's theorem doesn't make any assumptions about complexity.
I agree that what will not work is any underlying observer-invariant classical probability model with some crazy nonlinear chaotic dynamics, where transitions follow some simple conditional probability. This will not work because the whole idea of an observer-independent probability space is deeply confused.

This is my opinion: each interacting subsystem implicitly encodes its own version of the P-spaces. Such models are, to my knowledge, not excluded by Bell's theorem, because the P-measures used in the theorem are not fixed - they are evolving - and one has to define which observer is making the Bell inferences.

So the conjecture is not to explain QM as a classical HV model (no matter how chaotic) where the experimenter is simply ignorant of the hidden variables. The conjecture would be to explain QM as interacting information-processing agents (elementary particles, to refer to the paper) that self-organize their "P-spaces" to reflect maximal stability. Any interaction between two systems takes place at two levels: a regular residual interaction, where observers have evolved to agree to disagree but which leaves them both stable; and a more destructive level, which evolves the P-measures. QM as we know it should be emergent as residual interactions, but the evolutionary mechanisms are what is needed to understand unification. I.e. the KEY is to include the "observer", the encoder of the expectations, in the actual interactions.

But with this said, the link to the original paper I connected to was that, in an approximate sense, one can probably "explain" an elementary particle as an evolved information-processing agent in a chaotic environment. Here the chaos is relevant, as it demonstrates the particle's insufficient computational complexity to decode the environment. And this fact determines its properties; or so goes the conjecture. There is still no actual model for this yet.

I feel I may be drifting a bit here, but my only point in this thread was to support a kind of "hidden variable" model, one which is really just the observer-dependent information, so it does not have the structure of classical realism that is rejected by Bell's theorem. Such a model will then have generic traits such as being evolved, and the exact symmetries we are used to would correspond to attractors - not attractors in a simple fixed theory space, but attractors in an evolving theory space. This latter point is key, as otherwise we run into all the fine-tuning problems well known to any Newtonian schema.

Sorry for the ramblings; I'm on my way off air for some time, so I will not interfere more in the coming days.

/Fredrik

#### DarMM

Science Advisor
Gold Member
Might as well just say superluminal signalling etc; referring to these problems as finetuning is another very unfortunate misnomer, especially given the way more familiar fine tuning arguments for life/earth/the universe etc.
Fine-tuning has long been used for both initial-condition tuning and parameter tuning; I don't think parameter tuning has any special claim on the phrase. Besides, it's standard usage in Quantum Foundations to refer to this as "fine-tuning", and I prefer to use terms as they are used in the relevant fields.

It couldn't be called "superluminal signalling", as the fine-tuning is the solution to why we don't observe superluminal (or retrocausal, etc.) signalling at macroscopic scales in realist models.

Some references would be nice; pretty much every other model/theory I have ever seen besides this one was obviously wrong or completely unbelievable (in the bad sense of the word).
Well a simple toy model that shows a huge amount of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model - there are much more developed ones - but you can see from it how easy it is to replicate a huge amount of QM, except for entanglement. Which is why entanglement is the key feature one has to explain.
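For readers who don't follow the link, here is a minimal editorial sketch of the elementary system in that paper (Spekkens' toy theory): four ontic states plus a "knowledge balance" restriction already reproduce six qubit-like pure states and the complementarity of three measurement bases.

```python
from itertools import combinations

# Spekkens' elementary toy system: four ontic states, and the
# "knowledge balance" principle says an observer can at best narrow
# the actual state down to a pair of them.
ontic = [1, 2, 3, 4]
pure_epistemic = [set(p) for p in combinations(ontic, 2)]
print(len(pure_epistemic))  # 6, mirroring the six spin states +/-x, +/-y, +/-z

# A toy measurement is a partition of the ontic states into two pairs;
# the three such partitions play the role of the x, y, z spin bases.
bases = [
    [{1, 2}, {3, 4}],
    [{1, 3}, {2, 4}],
    [{1, 4}, {2, 3}],
]
state = {1, 2}  # answers the first question with certainty
for basis in bases:
    # Outcome probabilities: certain in one basis, 50/50 in the others,
    # just like a qubit prepared in a basis state.
    print([len(state & cell) / len(state) for cell in basis])
```

As the post says, this single-system behaviour is cheap to reproduce; the toy theory's entangled states only mimic some features of quantum entanglement and cannot violate a Bell inequality.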

#### Demystifier

Science Advisor
2018 Award
Bell's theorem doesn't make any assumptions about whether the dynamics is self-organizing, or not.
Bell's theorem assumes the absence of superdeterminism. I wonder, could perhaps self-organization create some sort of superdeterminism? In fact, I think that the 't Hooft's proposal can be understood that way.

#### nikkkom

This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.

Just observing that families of particles seem to "bifurcate" when we look at their various properties seems too tenuous a reason to apply dissipative reasoning.

#### Lord Jestocost

Gold Member
2018 Award
QM itself can essentially be viewed as a non-local theory, this is what Bell's theorem shows.
Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.

#### Auto-Didact

Fine tuning has long been used for both initial condition tuning and parameter tuning, I don't think parameter tuning has any special claim on the phrase. Besides it's standard usage in Quantum Foundations to refer to this as "Fine-Tuning" and I prefer to use terms as they are used in the relevant fields.
I don't doubt that, but I think you are missing the point that the other usage of fine-tuning is old - centuries old. Newton himself used the same fine-tuning argument to argue that the three-body problem was insoluble due to infinite complexity, and that the mechanistic universe must therefore be the work of God. The same arguments were, and are, still being used in biology, from Darwin to this very day.

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
Well a simple toy model that shows a huge amount of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model, there are much more developed ones, but you can see the basic fact of how easy it is to replicate a huge amount of QM, except for entanglement. Which is why entanglement is the key feature one has to explain.
I understand that this toy model is, or may just be, a random example, but I seriously think a few key points are in order. I will start by making clear that my following comments concern mathematical models in scientific theories of empirical phenomena.

I do hope you realize that there is an enormous qualitative difference between these kind of theoretical models and a theoretical model like Manasson's. This can be seen at multiple levels:
- First, the easiest way to spot this difference is to compare the underlying mathematics of the old and new models: the mathematics of this new model (causal discovery analysis, a variant of root cause analysis) is very close to the underlying mathematics of QM, while the mathematics underlying Manasson's model is almost diametrically opposite to the mathematics underlying QM.
- The second point is the focus of a new model - due to the underlying mathematics - on either accuracy or precision: similar underlying mathematics between models tends to lead quickly to good precision without necessarily being accurate, while a novel model based in completely different mathematics - and still being capable of reproducing things of an older model - initially has to focus on accuracy before focusing on precision.
- The third - and perhaps most important - point is the conceptual shift required to go between the old and the new model; if apart from the mathematics, the conceptual departure from old to new isn't radical, then the new model isn't likely to be able to go beyond the old. This is actually a consequence of the first and second point, because a small difference with high precision is easily fully constructed, implying low accuracy and therefore easily falsified. On the other hand, it is almost impossible that huge differences will lead to similar consequences, meaning both models are accurate with the older being typically more precise than the newer, at least until the newer matures and either replaces the old or gets falsified.

To illustrate these points even further we can again use the historical example of going from Newtonian gravity to Einsteinian gravity; all three points apply there quite obviously; I won't go into that example any further seeing there are tonnes of threads and books on this topic, i.e. MTW Gravitation.

What I do need to say is that the above-mentioned differences are important for any new mathematical model of some empirical phenomenon based in scientific reasoning, not just QM; I say this because there is another way to create a new mathematical model of an empirical phenomenon, namely by making an analogy based on similar mathematics. A (partially) successful new model using an analogy based on similar mathematics usually tends to be only incrementally different, i.e. evolutionary, while a successful new model based on scientific reasoning tends to be revolutionary.

Evolution of a model merely requires successful steps of cleverness, while revolution requires nothing short of genius and probably a large dose of luck, i.e. being in the right place at the right time. This is the problem with all psi-epistemic models; they are practically all incrementally different or a small evolution in terms of being mathematically cleaner than the old model - which is of course why they are available a dime a dozen. It takes hardly any mathematical insight or scientific creativity at all to make one. For new QM models, this is because such models tend to be based in probability theory, information theory, classical graph theory and/or linear algebra. These topics in mathematics are in comparison with say geometry or analysis relatively "sterile" (not quantitatively in applications but qualitatively in mathematical structure).

All of these critique points w.r.t. theorisation of empirically based scientific models do not merely apply to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so much of such models and practically none of the other; making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.

When I said earlier that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models, because they tend to be nowhere near interesting enough to consider seriously as a candidate that could possibly supersede QM. Sure, such a model may even have more direct applications almost immediately; that, however, is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us searching for psi-ontic models.

Make no mistake; the foundational goal of reformulating QM based on another model is not to find new applications but to go beyond QM; based on all psi-ontic attempts so far this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training; one can almost literally blindly open any textbook on statistics, decision theory, operation research and/or data science and find some existing method which one could easily strip down to its mathematical core and try to construct an incrementally different model of QM.

So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.

#### Auto-Didact

This article's train of thought regarding 1/2 -> 1 -> 2 spin particles and their coupling leads to a prediction that graviton's coupling should be ~4 times stronger than color force. This is obviously not the case.
It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.
Just observing that families of particles seem to "bifurcate" when we look at their various properties seems to be a too tenuous reason to apply dissipative reasoning.
Bifurcating particle taxonomy isn't the reason to apply dissipative reasoning; rather, virtual particles, grounded in the Heisenberg uncertainty principle, are.

The very concept of virtual particles implies an open i.e. dissipative system, and therefore perhaps the necessity of a non-equilibrium thermodynamics approach a la John Baez.
Bell’s theorem states that in a situation which involves the correlation of measurements on two spatially separated, entangled systems, no “local realistic theory” can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory.
Your conclusion is incorrect. If local hidden variables cannot reproduce QM's predictions, non-local hidden variables might still be able to; i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying in turn that QM - or a completion of QM - is itself in some sense inherently non-local. This was indeed Bell's own point of view.

None of this is new; it is well known in the literature that entanglement is, or can be viewed as, a fully non-local phenomenon. Moreover, as you probably already know, there is actually a very well-known explicitly non-local hidden variable theory, namely Bohmian mechanics (BM), which fully reproduces the predictions of standard QM; in terms of QM interpretation, this makes BM a psi-ontic model which actually goes beyond QM.
