Quantization isn't fundamental

  • #26
DarMM
Science Advisor
Gold Member
2,370
1,397
??? Did you miss that I specifically said that the entire scheme can consistently be made non-local using spin network theory or (the mathematics of) twistor theory?

Manasson's theory only explains quantisation; it isn't a theory of everything. Just adding spin networks to Manasson's preliminary model alone already seems to solve all the problems regarding being able to reproduce QM entirely.
I saw it, but I was confining the discussion to Manasson's theory as it stands; possible modifications are hard to discuss if they have not been developed.

However, to address that point: I understand it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (nonlocal, Many-Worlds, retrocausal and superdeterministic alike) display fine-tuning problems, as shown for example by the Wood-Spekkens and Pusey-Leifer theorems. It's not enough that something seems to solve the problem; if you try it, an unnatural fine-tuning will emerge.
 
  • #27
721
507
Manasson's theory is clearly preliminary; just because it has not yet reproduced entanglement or Bell inequalities doesn't mean that it is wrong or of no value whatsoever. It is way too early to expect that from the theory.

The fact that it - in its very preliminary form - seems able to directly reproduce so much (quantisation, spinors, the strong/weak/EM coupling constants, a resolution of the measurement problem) using so little is what one should be focusing on.
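To make the "so little" concrete with a quick back-of-the-envelope check (my own paraphrase from memory, so treat it as an assumption rather than a quote from Manasson's paper): as I recall, the electromagnetic coupling is estimated there from the Feigenbaum constant δ of period-doubling bifurcations as roughly α ≈ 1/(2πδ²).

```python
import math

# Rough numerical check of the claim as I remember it (not a quote from the
# paper): alpha ~ 1/(2*pi*delta^2), with delta the first Feigenbaum constant.
delta = 4.669201609102990            # Feigenbaum's first constant
alpha_estimate = 1.0 / (2.0 * math.pi * delta ** 2)
alpha_measured = 1.0 / 137.035999    # measured fine-structure constant

print(f"estimate  : {alpha_estimate:.6e}  (~1/{1 / alpha_estimate:.1f})")
print(f"measured  : {alpha_measured:.6e}  (= 1/137.036)")
print(f"rel. error: {abs(alpha_estimate - alpha_measured) / alpha_measured:.2%}")
```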

No one ever said it would, as it stands, immediately reproduce QM in full; the claim is rather that it gives an explanation for where quantization itself comes from, which would imply that QM is not the fundamental theory of nature.

Dismissing a preliminary model that explains the origin of some phenomenon as wrong or not worth considering merely because it doesn't immediately reproduce the entire phenomenon is a serious category error. It would be analogous to a contemporary of Newton dismissing his work because he didn't invent a full theory of relativistic gravity and curved spacetime in one go.
However, to address that point: I understand it might seem that spin network theory will solve the problem, but I would suggest reading up on current work in Quantum Foundations. All realist models (nonlocal, Many-Worlds, retrocausal and superdeterministic alike) display fine-tuning problems, as shown for example by the Wood-Spekkens and Pusey-Leifer theorems. It's not enough that something seems to solve the problem; if you try it, an unnatural fine-tuning will emerge.
Apart from the possible issue of fine-tuning, this part sounds thoroughly confused. QM itself can essentially be viewed as a non-local theory; this is what Bell's theorem shows. What I understood from Pusey and Leifer's paper is that QM may not just be non-local but may have an element of retrocausality as well, i.e. quantum information can, through entanglement, travel backwards in time without being a form of signalling, i.e. quantum information is not information. How is this any different from what I am arguing for?
 
  • #28
Buzz Bloom
Gold Member
2,215
366
this is a very unfortunate misnomer because quantum information is not a form of information!
Hi Auto-Didact:

I would appreciate it if you would elaborate on this concept. Wikipedia
says
In physics and computer science, quantum information is information that is held in the state of a quantum system. Quantum information is the basic entity of study in quantum information theory, and can be manipulated using engineering techniques known as quantum information processing. Much like classical information can be processed with digital computers, transmitted from place to place, manipulated with algorithms, and analyzed with the mathematics of computer science, so also analogous concepts apply to quantum information. While the fundamental unit of classical information is the bit, in quantum information it is the qubit.
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?

Regards,
Buzz
 
  • #29
721
507
It's not enough that something seems to solve the problem; if you try it, an unnatural fine-tuning will emerge.
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete.
Is your difference with Wikipedia simply a vocabulary matter, or is there some deeper meaning?
Read this book.
Quantum information is a horrible misnomer: it is not a form of information in the Shannon information-theoretic/signal-processing sense, i.e. the known and universally accepted definition of information in mathematics and computer science.

This explains why entanglement does not work by faster-than-light signalling: it isn't transmitting information in the first place, but something else. It is unfortunate that this something else can easily be referred to colloquially as information as well, which is exactly what happened when someone coined the term.

The continued usage is as bad as, if not worse than, laymen confusing the concept of velocity with that of force, especially because computer scientists/physicists actually came up with the name!
 
  • #30
Buzz Bloom
Gold Member
2,215
366
Read this book.
Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date: 01 Sep 2002
Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129​
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz
 
Last edited:
  • #31
DarMM
Science Advisor
Gold Member
2,370
1,397
I'm pretty sure you are aware that Sabine Hossenfelder wrote an entire book about the complete irrelevance of numbers seeming unnatural, i.e. that naturalness arguments have no proper scientific basis and that holding to them blindly is actively counter-productive for the progress of theoretical physics.

Moreover, I'm not entirely convinced by it, but I recently read a paper by Strumia et al. (yes, that Strumia) which argues quite convincingly that demonstrating near-criticality can make anthropic arguments and arguments based on naturalness practically obsolete
Well, these aren't just numbers: unless they are fine-tuned, realist models have their unusual features become noticeable, e.g. in retrocausal theories, if you don't fine-tune them, the retrocausal signals become noticeable and usable macroscopically, and similarly for nonlocal theories. Such a theory could still be correct, but it's something to keep in mind. It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc.) signalling in these theories outside very specific initial conditions.

No one ever said it would, as it stands, immediately reproduce QM in full; the claim is rather that it gives an explanation for where quantization itself comes from, which would imply that QM is not the fundamental theory of nature.
There are a few models that do that.


Dismissing a preliminary model that explains the origin of some phenomenon as wrong or not worth considering merely because it doesn't immediately reproduce the entire phenomenon is a serious category error.
I think this is overblown. I'm not saying it shouldn't be considered; I'm just saying that the features of QM it does address (e.g. the measurement problem, quantisation) are easily obtained, even in toy models. It is the details of how it explains entanglement that would need to be seen, and we know in advance that this will involve fine-tuning of its initial conditions. Whether that is acceptable or worth it could then be judged in light of all the other features the model may have. My point is that "solving" entanglement is known to take much more than this and to bring unpleasant features.
 
Last edited:
  • Like
Likes Auto-Didact
  • #32
721
507
Hi Auto-Didact:

Thanks for the citation.
Quantum (Un)speakables
Editors: Bertlmann, R.A., Zeilinger, A.
Publication date 01 Sep 2002
Publisher Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
List Price: US $129​
Neither my local library, nor the network of libraries it belongs to, has the book.
I did download the Table of Contents, 10 pages. Can you cite a particular part (or parts) of the book that deals with the question I asked about "quantum information vs. information"? The local reference librarian may be able to get me a copy of just the part(s) I need.

Regards,
Buzz
It's been a while; I can't remember exactly. What I do remember, however, is that the book is definitely worth reading. It isn't merely a book on QM foundations, but a book on quantum information theory and a partial biography of John Bell as well. Just check the list of authors if you need any convincing. In any case, check your conversations.
It isn't fine-tuning in the sense you are thinking of (special parameter values), but the presence of superluminal (etc.) signalling in these theories outside very specific initial conditions.
One might as well just say superluminal signalling, etc.; referring to these problems as fine-tuning is another very unfortunate misnomer, especially given the far more familiar fine-tuning arguments for life/Earth/the universe, etc.

Btw, I am actively keeping in mind what you are calling fine-tuning problems, insofar as I'm aware of them. This is my current go-to text for seeing what a new theory needs to solve and take into account w.r.t. the known issues in the foundations of QM, and this is the text which in my opinion best explains how the "nonlinear reformulation of QM" programme is trying to solve the above problem; it moreover uses a specific kind of preliminary prototype model to illustrate the required mathematical properties.
There are a few models that do that.
Some references would be nice; pretty much every other model/theory I have ever seen besides this one was obviously wrong or completely unbelievable (in the bad sense of the word).
 
  • #33
Fra
3,097
144
But "regular hidden variable" theory INCLUDES "extremely non-linear" systems. Bell's notion of a hidden-variables theory allows arbitrary interactions, as long as they are local. Nonlinearity is not ruled out.

(Actually, "nonlinear" by itself doesn't mean anything. You have to say what is linear in what.)
You are right, non-linear was the wrong phrase (which I realized and changed, but too late). I was trying to give a quick answer.

Bell's theorem is about probabilities, and my view is that any P-measure, or system of P-measures, is necessarily conditional upon, or even identified with, an observer; I of course take an observer-dependent Bayesian view on P-measures. (With "observer" here, read "particle", as a generalisation of a measurement device, not the human scientist. In my view the generalized notion of observer is NOT necessarily a classical device; that is the twist.) The P-measures are hidden in the sense that no other observer can observe the naked expectations of another observer, and there is no simple renormalization scheme you can use either; this comparison is simply indistinguishable from normal physical interaction. One observer can only try to abduce the naked expectations of another system by observing that system's actions, from its own perspective.

This is loosely analogous (given that analogies are never perfect) to how geometry guides matter and matter evolves geometry. What we have here is an evolutionary process where theory (as encoded in a particle's internal structure) guides the action of the particles, but the action of the population of particles similarly evolves the theory. If you complain that this is not precise enough mathematically, that's correct, but I am trying to convey the vision here, despite the admittedly incomplete, even confusing and almost contradictory, details.

It's this evolution of law - identified with the tuning of elementary particles - that can informally be thought of as a random walk in a similarly evolving theory space that is self-organising. The task is to make this explicit, and to show that there are stable preferred attractors and that these correspond to the Standard Model. IF this totally fails, then we can dismiss this crazy idea, but not sooner, I think.

Once we are at the attractor, we have business as usual with symmetries etc. I am not suggesting we restore realism, nor do I suggest that simple self-organising classical chaos explains QM! That is not enough, agreed, but it is not what I mean.

/Fredrik
 
  • #34
Fra
3,097
144
Bell's theorem doesn't make any assumptions about complexity.
I agree that what will not work is any underlying observer-invariant classical probability model with some crazy nonlinear chaotic dynamics, where transitions follow some simple conditional probability. This will not work because the whole idea of an observer-independent probability space is deeply confused.

My opinion is rather that each interacting subsystem implicitly encodes its own version of the P-spaces. Such models are, to my knowledge, not excluded by Bell's theorem, because the P-measures used in the theorem are not fixed, they are evolving, and one has to define which observer is making the Bell inferences.

So the conjecture is not to explain QM as a classical HV model (no matter how chaotic) where the experimenter is simply ignorant of the hidden variables. The conjecture would be to explain QM in terms of interacting information-processing agents (elementary particles, to refer to the paper) that self-organize their "P-spaces" to reflect maximal stability. Any interaction between two systems takes place at two levels: a regular residual interaction, in which the observers have evolved an agreement to disagree but which leaves them both stable, and a more destructive level which evolves the P-measures. QM as we know it should emerge from the residual interactions, but the evolutionary mechanisms are what is needed to understand unification. I.e. the KEY is to include the "observer", the encoder of the expectations, in the actual interactions.

With this said, the link to the original paper is that, in an approximate sense, one can probably "explain" an elementary particle as an evolved information-processing agent in a chaotic environment. Here the chaos is relevant because it demonstrates the particle's insufficient computational capacity to decode its environment, and this fact determines its properties - or so goes the conjecture. There is no actual model for this yet.

I feel I may be drifting a bit here, but my only point in this thread was to support a kind of "hidden variable" model in which the hidden variables are really just the observer-dependent information, so that it does not have the structure of classical realism rejected by Bell's theorem. Such a model will have generic traits such as being evolved, and the exact symmetries we are used to would correspond to attractors - not attractors in a simple fixed theory space, but attractors in an evolving theory space. This last point is key, as otherwise we run into all kinds of fine-tuning problems well known to any Newtonian schema.

Sorry for the rambling; I am going off air for some time, so I will not interfere further over the next few days.

/Fredrik
 
  • #35
DarMM
Science Advisor
Gold Member
2,370
1,397
One might as well just say superluminal signalling, etc.; referring to these problems as fine-tuning is another very unfortunate misnomer, especially given the far more familiar fine-tuning arguments for life/Earth/the universe, etc.
Fine-tuning has long been used for both initial-condition tuning and parameter tuning; I don't think parameter tuning has any special claim on the phrase. Besides, it's standard usage in Quantum Foundations to refer to this as "fine-tuning", and I prefer to use terms as they are used in the relevant fields.

It couldn't be called "superluminal signalling", as the fine-tuning is the explanation for why we don't observe superluminal (or retrocausal, etc.) signalling at macroscopic scales in realist models.

Some references would be nice; pretty much every other model/theory I have ever seen besides this one was obviously wrong or completely unbelievable (in the bad sense of the word).
Well, a simple toy model showing that a huge number of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model, and there are much more developed ones, but you can see from it how easy it is to replicate a huge amount of QM, except for entanglement; that is why entanglement is the key feature one has to explain.
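For anyone who doesn't want to open the paper, here is a minimal sketch (my own, in Python; the names and structure are mine) of the single "toy bit" from the linked Spekkens paper as I understand it: four ontic states, a knowledge-balance limit on what can be known, and a disturbance rule for measurements, which together mimic collapse and incompatible observables but not entanglement.

```python
import random

# One system has 4 ontic states; the knowledge-balance principle says an
# observer may know the answer to at most one of two binary questions, so the
# maximal-knowledge (epistemic) states are the six 2-element subsets below,
# mirroring the six Pauli eigenstates of a qubit.
EPISTEMIC_STATES = [{1, 2}, {3, 4}, {1, 3}, {2, 4}, {1, 4}, {2, 3}]

# The three reproducible measurements are the three ways of partitioning the
# 4 ontic states into two pairs (rough analogues of measuring Z, X, Y).
MEASUREMENTS = {
    "Z": [{1, 2}, {3, 4}],
    "X": [{1, 3}, {2, 4}],
    "Y": [{1, 4}, {2, 3}],
}

def prepare(epistemic_state):
    """Sample the hidden ontic state compatible with what we know."""
    return random.choice(sorted(epistemic_state))

def measure(ontic_state, basis):
    """Return the observed cell and the (disturbed) new ontic state.

    The outcome is the cell of the partition containing the ontic state; the
    disturbance rule then re-randomizes the ontic state within that cell,
    which is what makes incompatible measurements unpredictable.
    """
    cell = next(c for c in MEASUREMENTS[basis] if ontic_state in c)
    return cell, random.choice(sorted(cell))

# Prepare the analogue of |0>: we know only the answer to the Z-like question.
ontic = prepare({1, 2})
outcome1, ontic = measure(ontic, "Z")   # reproducible: always {1, 2}
outcome2, ontic = measure(ontic, "X")   # random: {1, 3} or {2, 4}
outcome3, ontic = measure(ontic, "Z")   # random again: X disturbed Z
print(outcome1, outcome2, outcome3)
```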
 
  • #36
Demystifier
Science Advisor
Insights Author
Gold Member
10,908
3,600
Bell's theorem doesn't make any assumptions about whether the dynamics is self-organizing, or not.
Bell's theorem assumes the absence of superdeterminism. I wonder: could self-organization perhaps create some sort of superdeterminism? In fact, I think that 't Hooft's proposal can be understood that way.
 
  • Like
Likes eloheim and Auto-Didact
  • #37
This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.

Just observing that families of particles seem to "bifurcate" when we look at their various properties seems too tenuous a reason to apply dissipative reasoning.
 
  • #38
Lord Jestocost
Gold Member
596
404
QM itself can essentially be viewed as a non-local theory; this is what Bell's theorem shows.
Bell's theorem states that, in a situation involving correlated measurements on two spatially separated, entangled systems, no "local realistic theory" can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory itself.
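For readers following along, here is what "no local realistic theory can predict the same results" means in the CHSH form of Bell's theorem (my own illustration, not taken from the thread): quantum mechanics predicts a correlation sum of magnitude 2√2 for suitable settings, while any local hidden-variable account is bounded by 2.

```python
import math

# For the spin-singlet state, QM predicts the correlation E(a, b) = -cos(a - b)
# between measurements along directions a and b; any local hidden-variable
# model obeys the CHSH bound |S| <= 2.
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2               # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2) > 2, so no local model reproduces it
```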
 
  • #39
721
507
Fine-tuning has long been used for both initial-condition tuning and parameter tuning; I don't think parameter tuning has any special claim on the phrase. Besides, it's standard usage in Quantum Foundations to refer to this as "fine-tuning", and I prefer to use terms as they are used in the relevant fields.
I don't doubt that, but I think you are missing the point that the other usage of fine-tuning is old - centuries old. Newton himself used a fine-tuning argument to claim that the three-body problem was insoluble due to infinite complexity and that the mechanistic universe must therefore be the work of God. The same arguments have been used in biology since Darwin and are still used to this very day.

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
Well, a simple toy model showing that a huge number of quantum mechanical features result purely from a fundamental epistemic limit is here:
https://arxiv.org/abs/quant-ph/0401052

It's just a toy model, and there are much more developed ones, but you can see from it how easy it is to replicate a huge amount of QM, except for entanglement; that is why entanglement is the key feature one has to explain.
I understand that this toy model is, or may just be, a random example, but I seriously think a few key points are in order. I will start by making clear that the following comments concern mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between this kind of theoretical model and a theoretical model like Manasson's. This can be seen at multiple levels:
- First, the easiest way to spot this difference is to compare the underlying mathematics of the old and new models: the mathematics of this new model (causal discovery analysis, a variant of root cause analysis) is very close to the underlying mathematics of QM, while the mathematics underlying Manasson's model is almost diametrically opposed to the mathematics underlying QM.
- The second point is the focus of a new model - due to its underlying mathematics - on either accuracy or precision: similar underlying mathematics between models tends to lead quickly to good precision without necessarily being accurate, while a novel model based on completely different mathematics - yet still capable of reproducing the results of an older model - initially has to focus on accuracy before focusing on precision.
- The third - and perhaps most important - point is the conceptual shift required to go from the old to the new model; if, apart from the mathematics, the conceptual departure from old to new isn't radical, then the new model isn't likely to go beyond the old. This is actually a consequence of the first and second points: a small difference with high precision is easily constructed in full, implying low accuracy and therefore easy falsification. On the other hand, it is almost impossible for huge differences to lead to similar consequences; when they do, both models are accurate, with the older typically more precise than the newer, at least until the newer matures and either replaces the old or is falsified.

To illustrate these points further we can again use the historical example of going from Newtonian gravity to Einsteinian gravity; all three points apply there quite obviously. I won't go into that example any further, seeing that there are tonnes of threads and books on the topic, e.g. MTW's Gravitation.

What I do need to say is that the above-mentioned differences are important for any new mathematical model of some empirical phenomenon based on scientific reasoning, not just QM; I say this because there is another way to create a new mathematical model of an empirical phenomenon, namely by making an analogy based on similar mathematics. A (partially) successful new model using an analogy based on similar mathematics usually tends to be only incrementally different, or evolutionary, while a successful new model based on scientific reasoning tends to be revolutionary.

Evolution of a model merely requires successful steps of cleverness, while revolution requires nothing short of genius and probably a large dose of luck, i.e. being in the right place at the right time. This is the problem with all psi-epistemic models: they are practically all incrementally different, or a small evolution in terms of being mathematically cleaner than the old model - which is of course why they are available a dime a dozen. It takes hardly any mathematical insight or scientific creativity to make one. For new QM models this is because such models tend to be based on probability theory, information theory, classical graph theory and/or linear algebra. These topics are, in comparison with, say, geometry or analysis, relatively "sterile" (not quantitatively, in applications, but qualitatively, in mathematical structure).

All of these critique points w.r.t. the theorisation of empirically based scientific models apply not merely to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so many such models and practically none of the other kind: making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.

When I said earlier that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models, because they tend to be nowhere near interesting enough to consider seriously as candidates to supersede QM. Sure, such a model may almost directly have far more applications; that, however, is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.

Make no mistake: the foundational goal of reformulating QM in terms of another model is not to find new applications but to go beyond QM; judging by all psi-ontic attempts so far, this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training; one can almost literally open any textbook on statistics, decision theory, operations research and/or data science at random and find some existing method which one could strip down to its mathematical core and use to construct an incrementally different model of QM.

So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
 
  • #40
721
507
This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.
It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.
Just observing that families of particles seem to "bifurcate" when we look at their various properties seems too tenuous a reason to apply dissipative reasoning.
Bifurcating particle taxonomy isn't the reason to apply dissipative reasoning; virtual particles, rooted in the Heisenberg uncertainty principle, are.

The very concept of virtual particles implies an open, i.e. dissipative, system, and therefore perhaps the necessity of a non-equilibrium thermodynamics approach a la John Baez.
Bell's theorem states that, in a situation involving correlated measurements on two spatially separated, entangled systems, no "local realistic theory" can predict experimental results identical to those predicted by quantum mechanics. The theorem says nothing about the character of quantum theory itself.
Your conclusion is incorrect. If local hidden variables cannot reproduce QM's predictions, non-local hidden variables might still be able to; i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying in turn that QM - or a completion of QM - is itself in some sense inherently non-local. This was indeed Bell's own point of view.

None of this is new; it is well known in the literature that entanglement is, or can be viewed as, a fully non-local phenomenon. Moreover, as you probably already know, there is a very well-known, explicitly non-local hidden variable theory, namely Bohmian mechanics (BM), which fully reproduces the predictions of standard QM; in terms of QM interpretation, this makes BM a psi-ontic model which actually goes beyond QM.
 
  • #41
DarMM
Science Advisor
Gold Member
2,370
1,397
I don't doubt that, but I think you are missing the point that the other usage of fine-tuning is old - centuries old. ...

In any case, I will grant your usage of this unfortunate standard terminology in the novel and relatively secluded area of research that is the foundations of QM.
The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology too. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.

I understand that this toy model is, or may just be, a random example, but I seriously think a few key points are in order. I will start by making clear that the following comments concern mathematical models in scientific theories of empirical phenomena, but I digress.

I do hope you realize that there is an enormous qualitative difference between this kind of theoretical model and a theoretical model like Manasson's. This can be seen at multiple levels:
You're treating this like a serious proposal; remember the context in which I brought it up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial, and it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc. can be had quite easily, and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating the violation of the Bell inequalities, and any model that really attempts to explain QM should focus primarily on that, as the toy model (and others) show that the other features are easy.

All of these critique points w.r.t. the theorisation of empirically based scientific models apply not merely to the toy model you posted, but to all psi-epistemic models of QM. This is also why we see so many such models and practically none of the other kind: making psi-epistemic models is a low-risk/low-payout strategy, while making psi-ontic models is a high-risk/high-payout strategy.
There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this claim.

When I said earlier that I've never seen a new model which wasn't obviously wrong or completely unbelievable, I wasn't even counting such incrementally different models, because they tend to be nowhere near interesting enough to consider seriously as candidates to supersede QM. Sure, such a model may almost directly have far more applications; that, however, is frankly speaking completely irrelevant w.r.t. foundational issues. W.r.t. the foundations of QM, this leaves us with searching for psi-ontic models.
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.

Make no mistake: the foundational goal of reformulating QM in terms of another model is not to find new applications but to go beyond QM; judging by all psi-ontic attempts so far, this goal is extremely difficult. On the other hand, as I have illustrated, finding a reformulation of QM based on a psi-epistemic model tends to be neither mathematically challenging nor scientifically interesting for any (under)grad student with sufficient training
Again, this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.

one can almost literally open any textbook on statistics, decision theory, operations research and/or data science at random and find some existing method which one could strip down to its mathematical core and use to construct an incrementally different model of QM.
I don't think so, again not in light of the PBR theorem.

So again, if you do know of some large collection of new psi-ontic (toy) models which do not quickly fall to fine-tuning and aren't obviously wrong, please, some references would be nice.
This is what I am saying:
  1. Replicating the non-entanglement features of Quantum Mechanics is very simple, as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence anything that replicates QM should explain how it replicates entanglement first, as the other aspects are easy.
  3. However, we already know from the Wood-Spekkens and Pusey-Leifer theorems that realist models will encounter fine-tuning.
One of the points in my previous posts tells you that I can't give you what you're asking for here, because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical regarding this sort of model; we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics; however, we now know that these features of Bohmian Mechanics are general to all such models.

The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine-tunings as well.

Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis of them as of now.
 
Last edited:
  • Like
Likes dextercioby and Auto-Didact
  • #42
> This article's train of thought regarding spin-1/2 -> 1 -> 2 particles and their couplings leads to a prediction that the graviton's coupling should be ~4 times stronger than the color force. This is obviously not the case.

It actually need not imply such a thing at all. The article doesn't assume that gravity needs to be quantized.
It did this for the color force, here:

[attached image: excerpt from the article's coupling-constant argument]


Why should the same not apply to "the next next level", gravitons?
 


  • #43
721
507
It did this for the color force, here:

[attached image: excerpt from the article's coupling-constant argument]

Why should the same not apply to "the next next level", gravitons?
The question is 'why should it?'. You seem to be reading this particular bit without controlling for your own expectation bias, i.e. you are assuming, based on the fact that quantization of gravity is a standard hypothesis in many models, that it is also a hypothesis of this model.

It is pretty clear that this model is compatible with either hypothesis w.r.t. gravitation. That is to say, the model is completely independent of whether or not gravity should be quantized in the same manner as the rest of the forces in physics, i.e. following the standard form of quantization in particle physics.

This is bolstered by the fact that this is a phenomenological model, i.e. it is constructed only upon empirically observed phenomena. The form of quantization the model is attempting to explain is precisely the form known from experimental particle physics; no experiment has ever suggested that gravity is also quantized in this manner.

Contrary to common perception, the mathematical physics and quantum gravity phenomenology literatures actually give, respectively, very good mathematical and empirical arguments for believing that this hypothesis is false to begin with. This wouldn't necessarily mean that gravitation is not quantized at all, but that if it is, it is probably not quantized in exactly the same manner as the other forces, making any conclusion that it probably is at worst completely misguided and at best highly premature, because it is non-empirical.
 
  • #44
Lord Jestocost
Gold Member
596
404
Your conclusion is incorrect. If local hidden variables cannot reproduce QM's predictions, non-local hidden variables might still be able to; i.e. Bell's theorem also clearly implies that non-locality may reproduce QM's predictions, implying in turn that QM - or a completion of QM - is itself in some sense inherently non-local.
Bell's theorem might imply that a "non-local realistic theory" could predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
 
  • #45
721
507
Bell's theorem might imply that a "non-local realistic theory" could predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
Non-local hidden variable theories are a subset of non-local realistic theories, i.e. this discussion is moot.

The non-locality of QM - i.e. the non-local nature of entanglement - has been in the literature since Schrödinger himself.
Aspect concluded in 2000 that there is experimental support for the non-locality of entanglement, saying:
Alain Aspect said:
It may be concluded that quantum mechanics has some nonlocality in it, and that this nonlocal character is vindicated by experiments [45]. It is very important, however, to note that such a nonlocality has a very subtle nature, and in particular that it cannot be used for faster-than-light telegraphy. It is indeed simple to show [46] that, in a scheme where one tries to use EPR correlations to send a message, it is necessary to send complementary information (about the orientation of a polarizer) via a normal channel, which of course does not violate causality. This is similar to the teleportation schemes [47] where a quantum state can be teleported via a nonlocal process provided that one also transmits classical information via a classical channel. In fact, there is certainly a lot to understand about the exact nature of nonlocality, by a careful analysis of such schemes [48].

When it is realized that this quantum nonlocality does not allow one to send any useful information, one might be tempted to conclude that in fact there is no real problem and that all these discussions and experimental efforts are pointless. Before rushing to this conclusion, I would suggest an ideal experiment done in the following way is considered (Fig. 9.17): On each side of the experiment of Fig. 9.1, using variable analysers, there is a monitoring system that registers the detection events in channels + or - with their exact dates. We also suppose that the orientation of each polarizer is changed at random times, also monitored by the system of the corresponding side. It is only when the experiment is completed that the two sets of data, separately collected on each side, are brought together in order to extract the correlations. Then, looking into the data that were collected previously and that correspond to paired events that were space-like separated when they happened, one can see that indeed the correlation did change at the very moment when the relative orientation of the polarizers changed.

So when one takes the point of view of a delocalized observer, which is certainly not inconsistent when looking into the past, it must be acknowledged that there is nonlocal behaviour in the EPR correlations. Entanglement is definitely a feature going beyond any space time description a la Einstein: a pair of entangled photons must be considered to be a single global object that we cannot consider to be made of individual objects separated in spacetime with well-defined properties.
Referenced sources are:
[45] J.S. Bell, Atomic cascade photons and quantum-mechanical nonlocality, Comm. Atom. Mol. Phys. 9, 121 (1981)

[46] A. Aspect, Expériences basées sur les inégalités de Bell, J. Phys. Coll. C 2, 940 (1981)

[47] C.H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W.K. Wootters, Phys. Rev. Lett. 70, 1895 (1993)
D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Experimental quantum teleportation, Nature 390, 575 (1997)
D. Boschi, S. Branca, F. De Martini, L. Hardy, S. Popescu, Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels, submitted to Phys. Rev. Lett. (1997)
A. Furusawa, J.L. Sorensen, S.L. Braunstein, C.A. Fuchs, H.J. Kimble, E.S. Polzik, Unconditional quantum teleportation, Science 282, 706 (1998)

[48] S. Popescu, Bell's inequalities versus teleportation: what is non-locality? Phys. Rev. Lett. 72, 797 (1994)
 
Last edited:
  • #46
244
81
Every theory can be reproduced by a non-local model, but that doesn't mean every theory is non-local. Say you have a computer which measures the temperature once a second and outputs the difference from the previous measurement. You can build a non-local model for this phenomenon by storing the previous measurement at a remote location, which must be accessed on each iteration.

Does that make this a non-local phenomenon? Clearly not, since you can also model it by storing the previous measurement locally. To show that QM is non-local, you need to show that it can't be reproduced by any local model, even one with multiple outcomes. Bell's theorem doesn't do that; it requires additional assumptions.
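To make the thermometer analogy concrete, here is a minimal sketch (my own, purely illustrative) of the two models: one stores the previous reading locally, the other in a "remote" store, and their outputs are indistinguishable, so the existence of a non-local model tells us nothing about the phenomenon itself.

```python
import random

# Simulated temperature readings, one per second.
readings = [20.0 + random.uniform(-0.5, 0.5) for _ in range(5)]

class LocalModel:
    """Keeps the previous reading at the device itself."""
    def __init__(self):
        self.prev = None
    def step(self, t):
        d = 0.0 if self.prev is None else t - self.prev
        self.prev = t
        return d

class RemoteStore:
    """Stands in for a faraway memory that must be consulted each step."""
    def __init__(self):
        self.value = None

class NonLocalModel:
    """Same observable behaviour, but the state lives 'elsewhere'."""
    def __init__(self, store):
        self.store = store
    def step(self, t):
        d = 0.0 if self.store.value is None else t - self.store.value
        self.store.value = t
        return d

local = LocalModel()
remote = NonLocalModel(RemoteStore())
out_local = [local.step(t) for t in readings]
out_remote = [remote.step(t) for t in readings]
assert out_local == out_remote   # indistinguishable at the output level
```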

There is a very confusing thing some physicists do which is to use the phrase "non-locality" to mean something called "Bell non-locality", which isn't the same thing at all.
 
  • Like
Likes Auto-Didact
  • #47
Lord Jestocost
Gold Member
596
404
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
 
  • #48
DarMM
Science Advisor
Gold Member
2,370
1,397
As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

"The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
As I mentioned upthread, the choice isn't really between locality and realism; the assumptions are:
  1. Single Outcomes
  2. Lack of super-determinism
  3. Lack of retrocausality
  4. Presence of common causes
  5. Decorrelating Explanations (the combination of 4. and 5. is normally called Reichenbach's Common Cause principle)
  6. Relativistic causation (no interactions beyond light cone)
You have to drop one of these, but locality (i.e. Relativistic Causation) and realism (Decorrelating Explanations) are only two of the possibilities.
 
  • #49
721
507
The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology too. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (the 1800s to early 1900s). But let's not turn this into more of a measuring contest than it already is, lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. The paper that you linked, however, defines fine-tuning on page 9 as exactly parameter fine-tuning, i.e. the same definition that I am using...
You're treating this like a serious proposal; remember the context in which I brought it up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial, and it can still replicate them.

The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc. can be had quite easily, and this model is just the simplest such model demonstrating that (more complex ones exist).

What isn't easy is replicating the violation of the Bell inequalities, and any model that really attempts to explain QM should focus primarily on that, as the toy model (and others) show that the other features are easy.
Yes, you are correct: I'm approaching the matter somewhat seriously; it is a topic I am truly passionate about and one I really want to see answered. This is for multiple reasons, most importantly:

I) Following the psi-ontic literature over the last few years, I have come across a few mathematical schemes which seem to be 'sectioned off' parts of full theories. These schemes (among others, twistor theory and spin network theory) aren't themselves full physical theories - exactly as AdS/CFT isn't a full theory - but possibly useful mathematical models of particular aspects of nature rooted in experimental phenomenology, i.e. they are typically models based on phenomenology through the use of very particular, not-necessarily-traditional (for physicists) mathematics.

II) These schemes all have in common that they are - taken at face value - incomplete frameworks for full physical theories. Being based mostly on phenomenology, they tend to be consistent with at least the range of experiments performed so far, and yet - because of their formulation in particular nonstandard mathematics - they seem capable of making predictions which agree with what is already known but might disagree with what is still unknown.

III) To complete these schemes - i.e. to transform them into full physical theories - what tends to be required is the addition of a dynamical model which can ultimately explain some phenomenon through dynamics. In the psi-ontic view, QM is precisely such a mathematical scheme requiring completion; this is incidentally what Einstein, Dirac et al. meant when they said that QM - despite its empirical success - cannot be anything more than an incomplete theory and is therefore ultimately provisional rather than fundamental.

IV) There actually aren't that many psi-ontic schemes which have been combined with dynamical models that turn them into completed physical theories. Searching for a correct dynamical model - one which isn't obviously incorrect (NB: much easier said than done) - given some particular scheme should therefore be a productive Bayesian strategy for identifying promising new dynamical theories and hopefully, ultimately, finding a more complete novel physical theory.

I cannot stress the importance of the above points - especially points III and IV - enough; incidentally, Feynman vehemently argued for practicing theory in this way (or at least said that he himself practiced theory this way). This is essentially the core business of physicists looking for psi-ontic foundations of QM.
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.
I recently made this very argument in another thread, so I'll just repost it here: there is a larger theme in the practice of theoretical science, where theoretical calculations done using highly preliminary models of some hypothesis, prior to any experiment being done or even possible, lead to very strong claims against that hypothesis.

These strong claims often later turn out to be incorrect because they rest on - mathematically speaking - seemingly trivial assumptions which are conceptually - i.e. when understood correctly in physical terms - clearly unjustifiable. The problem is that a hypothesis can then be discarded prematurely by taking the predictions of toy models of said hypothesis at face value; a false-positive falsification, if you will.

This seems to occur frequently when a toy model of some hypothesis is a particular kind of idealization which is actually a completely inaccurate representation of the hypothesis, purely due to the nature of the idealization itself.
There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this claim.
W.r.t. the large number of psi-epistemic models, scroll down and see point 1).
Again, this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.
It is only difficult if you want to include entanglement, i.e. non-locality. Almost all psi-epistemic models don't do this, making them trivially easy to construct. I agree that psi-ontic models, definitely after they have passed the preliminary stage, need to include entanglement.

In either case, a general remark on these no-go theorems is in order: Remember that these "proofs" should always be approached with caution - recall how von Neumann's 'proof' literally held back progress in this very field for decades until Bohm and Bell showed that his proof was based on (wildly) unrealistic assumptions.

The fact of the matter is that the assumptions behind the proofs of said theorems may actually be unjustified given the correct conceptual model, invalidating their applicability, as in the case of von Neumann. (NB: I have nothing against von Neumann; I might even be his biggest fan on this very forum!)
I don't think so, again not in light of the PBR theorem.
Doesn't the PBR theorem state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? That would imply that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
This is what I am saying:
  1. Replicating the non-entanglement features of Quantum Mechanics is very simple, as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
  2. Hence anything that replicates QM should explain how it replicates entanglement first, as the other aspects are easy.
  3. However, we already know from the Wood-Spekkens and Pusey-Leifer theorems that realist models will encounter fine-tuning.
1) The ease of replicating QM without entanglement seems to hold only for psi-epistemic models, not for psi-ontic models.
2) Fully agreed if we are talking about psi-epistemic models. I disagree, or at least do not necessarily agree, for psi-ontic models, especially in the case of Manasson's model, which lacks a non-local scheme.
3) Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
Even worse: even if they did apply to realist models (i.e. psi-ontic models), they would apply only to a certain subset of all possible realist models, not to all of them. To then conclude that all realist models are unlikely is to commit the base-rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
One of the points in my previous posts tells you that I can't give you what you're asking for here, because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical regarding this sort of model; we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics; however, we now know that these features of Bohmian Mechanics are general to all such models.
I understand your reservations, and I see that it may seem strange that I am arguing against what seems most likely. The thing is, I am - in contrast to how most physicists usually judge the likelihood that a theory is correct - arguing and judging using a very different interpretative methodology from the one popular in the practice of physics, one in which events assumed a priori to have low probability can become more likely, conditional on adherence to certain criteria. In other words, I am consciously using Bayesian reasoning - instead of frequentist reasoning - to evaluate how likely it is that particular theories are (more) correct, because I have realized that these probabilities are degrees of belief, not statistical frequencies.

I suspect that using frequentist reasoning to judge the likely correctness of a theory aimed at open problems with very little empirical input is highly misleading, and possibly itself a problematic phenomenon - one that fuels the bandwagon effect among theoretical physicists. This characterization seems to apply to most big problems in the foundations of physics: among others, the problem of combining QM with GR, the measurement problem and the foundational problems of QM.

While foundational problems seem to benefit strongly from adopting a Bayesian strategy for theory construction, open problems in non-foundational physics do tend to be easily solvable using frequentist reasoning. I suspect that this is precisely where the high confidence among most physicists in frequentist reasoning for theory evaluation stems from: it is the only method of practical probabilistic inference they have learned in school.

That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature - but perhaps it is I who am mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long, complicated texts; it is as if subtlety is shunned or discarded at every turn in favor of explicitness. I suspect this may be because most physicists today have no training in philosophy or argumentative reasoning at all (in stark contrast to the biggest names, such as Newton, Einstein and the founders of QM).

In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) taking certain extremely subtle arguments at face value and therefore interpreting them incorrectly (e.g. Valentini's argument for BM: that it goes against the accepted wisdom of contemporary physics on its face in no way invalidates it; it is a logically valid argument), c) treating no-go theorems possibly based on shaky assumptions as definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) to fine-tuning on the basis of arguments of type c).

This is made clearer when you automatically generalize the validity of proofs that use concepts defined in a certain context, as if the proof covered all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using and learning mathematics. This should become clearer through the following example: superdeterminism, superluminality and retrocausation would necessarily be effects of fine-tuning only if causal discovery analysis were sufficient to explain the violation of the Bell inequalities; Wood and Spekkens clearly state that this is false, i.e. that causal discovery analysis is insufficient to understand QM! (NB: see the abstract and conclusion of this paper.) Most importantly, they aren't actually effects of fine-tuning in principle!

Furthermore, Wood and Spekkens, in the same paper (page 27), are clearly trying to establish a (toy-model) definition of causality independent of temporal ordering - just like the one spin network theory or causal dynamical triangulations already offer; this is known as background independence, something which, as I'm sure you are aware, Smolin has argued for for years.

And as I argued before, Hossenfelder convincingly argues that fine-tuning isn't a real problem, especially w.r.t. foundational issues. The only way one can read the Wood-Spekkens paper as arguing against psi-ontic models is to argue against parameter fine-tuning and take the accepted wisdom of contemporary physics at face value - which can be seen as using Occam's razor. I will argue every time that using Einstein's razor is the superior strategy.
The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine-tunings as well.
I'm well aware that these ideas are themselves problematic taken at face value, which is exactly why I selectively exclude them during preliminary theoretical modelling and when evaluating existing models using Bayesian reasoning. I should say again, though, that retrocausality is only problematic if we are referring to matter or information, not correlation; otherwise entanglement itself wouldn't be allowed either.
Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis of them as of now.
All theories derived from the Wheeler-DeWitt equation are acausal in this sense, as are models based on spin networks or twistors. I suspect that some - or perhaps many - models which seem retrocausal may actually be reformulable as acausal, or worse, were actually acausal to begin with and were merely misinterpreted as retrocausal due to some cognitive bias (a premature deferral to the accepted wisdom of contemporary physics).

Btw, I'm really glad you're taking the time to answer me so thoroughly; this discussion has truly been a pleasure. My apologies if I come off as rude or offensive - I have heard that I tend to argue in a somewhat brash fashion the more passionate I get. To quote Bohr: "Every sentence I utter must be understood not as an affirmation, but as a question."
 
  • #50
DarMM
Science Advisor
Gold Member
2,370
1,397
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (the 1800s to early 1900s). But let's not turn this into more of a measuring contest than it already is, lol.

In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. The paper that you linked, however, defines fine-tuning on page 9 as exactly parameter fine-tuning, i.e. the same definition that I am using...
Genuinely, I don't get this line of discussion at all. I am not saying that initial-condition fine-tuning is the older concept (I know when Newton and Boltzmann lived), or that Quantum Foundations uses fine-tuning exclusively to mean initial-condition fine-tuning.

I am saying that fine-tuning has long been used to mean both in theoretical physics, and that Quantum Foundations, like many other areas, uses fine-tuning to mean both.

In the paper I linked they explicitly mean both, since "causal parameters", as they define the term, includes both initial conditions and other parameters.

I really don't understand this at all; I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if its use for a subset of its current referents extends back further?

Doesn't the PBR theorem state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? That would imply that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
No, it says that any Psi-Epistemic model obeying the ontological framework axioms and the principle of Preparation Independence for two systems cannot model QM.

The ease of replicating QM without entanglement seems to hold only for psi-epistemic models, not for psi-ontic models.
That's explicitly not true: coming up with Psi-Ontic models of the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact, Psi-Ontic models end up naturally replicating all of QM; you don't even have the blockade around modelling entanglement that you have with Psi-Epistemic models.

Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
That's not what the theorem demonstrates; it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.

Even worse: even if they did apply to realist models (i.e. psi-ontic models), they would apply only to a certain subset of all possible realist models, not to all of them. To then conclude that all realist models are unlikely is to commit the base-rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
Bohmian Mechanics needs to be fine-tuned (the Quantum Equilibrium hypothesis); it is known that out-of-equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see whether that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However, the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning as well.

Hence all Realist models have fine-tunings.

What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
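For reference, the fine-tuning in question can be written out explicitly (standard textbook form, my wording, not a quote from the Wood-Spekkens paper): the Bohmian guidance equation plus the quantum equilibrium condition on the initial configuration distribution.

```latex
% Bohmian guidance equation for particle k with configuration Q = (Q_1, ..., Q_N):
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
  \operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q, t)

% Quantum equilibrium hypothesis: the initial configurations are distributed as
\rho(q, t_0) \;=\; |\psi(q, t_0)|^2

% Equivariance then guarantees \rho(q, t) = |\psi(q, t)|^2 at all later times;
% only under this special initial distribution is superluminal signalling
% hidden at the statistical level.
```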

That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature - but perhaps it is I who am mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long, complicated texts; it is as if subtlety is shunned or discarded at every turn in favor of explicitness. I suspect this may be because most physicists today have no training in philosophy or argumentative reasoning at all
I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

See point nine of this slide:
http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

It's very easy to start talking about me and my comprehension; have you read the papers in depth yourself?
 
