DarMM said:
The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.
Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (the 1800s to the early 1900s). But let's not turn this into a measuring contest any further than it already is lol.
In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology.
This paper that you linked, however, again defines fine-tuning on page 9 exactly as parameter fine-tuning, i.e. the same definition that I am using...
DarMM said:
You're treating this like a serious proposal, remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial and it can still replicate them.
The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc can be done quite easily and this model is just the simplest such model demonstrating that (more complex ones exist).
What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.
Yes, you are correct: I'm approaching the matter somewhat seriously. It is a topic I am truly passionate about and one I really want to see answered, for multiple reasons, most importantly:
I) Following the psi-ontic literature over the last few years, I have come across a few mathematical schemes which seem to be 'sectioned off' parts of full theories. These schemes (among others, twistor theory and spin network theory) aren't actually full physical theories themselves - exactly like AdS/CFT isn't a full theory - but merely possibly useful mathematical models of particular aspects of nature, rooted in experimental phenomenology and built using rather particular, not-necessarily-traditional mathematics for physicists.
II) These schemes all have in common that - taken at face value - they are incomplete frameworks for full physical theories. Being based mostly in phenomenology, they tend to be consistent with the range of experiments performed so far, and yet - because of their formulation in particular nonstandard mathematics - they seem capable of making predictions which agree with what is already known but may disagree with what is still unknown.
III) To complete these theories - i.e. to transform these mathematical schemes into full physical theories - what tends to be required is the addition of a dynamical model which can ultimately explain some phenomenon dynamically. QM in the psi-ontic view is precisely such a mathematical scheme requiring completion; this is incidentally what Einstein, Dirac et al. meant by saying that QM - despite its empirical success - cannot be anything more than an incomplete theory and is therefore ultimately provisional rather than fundamental.
IV) There actually aren't that many psi-ontic schemes which have been combined with dynamical models transforming them into completed physical theories. Searching for the correct dynamical model - one which isn't obviously incorrect (NB: much easier said than done) - given some particular scheme should therefore be a productive Bayesian strategy for identifying promising new dynamical theories and, hopefully, ultimately finding a more complete novel physical theory.
I cannot stress the importance of the above points - especially points III and IV - enough; incidentally, Feynman vehemently argued for practicing theory (or at least said that he himself practiced theory) in this way. This is essentially the core business of physicists looking for psi-ontic foundations of QM.
DarMM said:
I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.
I recently made this very argument in another thread, so I'll just repost it here: there is a larger theme in the practice of theoretical science, where theoretical calculations done using highly preliminary models of some hypothesis, prior to any experiment being done or even possible, lead to very strong claims against that hypothesis.
These strong claims then often turn out to be incorrect because they rest on assumptions which seem trivial mathematically but which are - if understood correctly in physical terms - conceptually clearly unjustifiable. The problem is that a hypothesis can then be prematurely discarded because the predictions of its toy models were taken at face value; a false-positive falsification, if you will.
This seems to occur most frequently when the toy model of some hypothesis is a particular kind of idealization which, purely by the nature of that idealization, ends up being a completely inaccurate representation of the actual hypothesis.
DarMM said:
There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.
W.r.t. the large number of psi-epistemic models, scroll down and see point 1).
DarMM said:
Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.
It is only difficult if you want to include entanglement, i.e. non-locality. Almost all psi-epistemic models don't do this, which makes them trivially easy to construct. I agree that psi-ontic models, certainly once they have passed the preliminary stage, need to include entanglement.
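To make concrete what 'including entanglement' actually demands, here is the standard CHSH statement of the obstacle (nothing original, just for reference): any local hidden variable model must satisfy

$$ |S| = |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \leq 2, $$

while QM predicts (and experiment confirms) values up to the Tsirelson bound $2\sqrt{2}$. This gap is exactly the feature the toy models do not reproduce, and the one any serious psi-ontic completion has to account for.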
In any case, a general remark on these no-go theorems is in order: remember that such 'proofs' should always be approached with caution - recall how von Neumann's 'proof' literally held back progress in this very field for decades, until Bohm and Bell showed that it was based on (wildly) unrealistic assumptions.
The fact of the matter is that the assumptions behind the proofs of these theorems may actually be unjustified given the correct conceptual model, invalidating their applicability just as in the case of von Neumann. (NB: I have nothing against von Neumann; I might well be his biggest fan on this very forum!)
DarMM said:
I don't think so, again not in light of the PBR theorem.
Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? This would imply that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
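For reference, my own reading of the theorem - with its preparation-independence assumption made explicit - is roughly this: in any ontological model that reproduces the Born rule and in which independently prepared systems have product distributions over the ontic state space, distinct pure states $\psi \neq \phi$ must correspond to non-overlapping epistemic distributions,

$$ \mathrm{supp}(\mu_\psi) \cap \mathrm{supp}(\mu_\phi) = \emptyset, $$

i.e. the quantum state is fixed by the ontic state, which is exactly the psi-ontic conclusion I am drawing. If you read it differently, please say where.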
DarMM said:
This is what I am saying:
- Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
- Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy
- However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
1) The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
2) Fully agreed if we are talking about psi-epistemic models; I disagree, or at least do not necessarily agree, for psi-ontic models - especially in the case of Manasson's model, which lacks a non-local scheme.
3) Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
Even worse: even if they did apply to realist (i.e. psi-ontic) models, they would only apply to a certain subset of all possible realist models, not to all of them. To then conclude that all realist models are unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
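To spell out the probabilistic point (the labels are mine: R = 'some realist model is correct', S = 'that model lies in the subset actually covered by the theorems'):

$$ P(R) = P(R \cap S) + P(R \cap \bar{S}). $$

The theorems can at most push down the first term; they say nothing about $P(R \cap \bar{S})$. Concluding that $P(R)$ is small therefore smuggles in the additional assumption that essentially all of the prior weight on realist models sat inside S to begin with - which is precisely the base rate being neglected here.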
DarMM said:
One of the points in my previous posts tells you that I can't give you what you're asking for here, because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical of these sorts of models - we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics, yet we now know that those features of Bohmian Mechanics are general to all such models.
I understand your reservations, and I know it may seem strange that I appear to be arguing against what seems most likely. The thing is that - in contrast to how most physicists seem to judge the likelihood that a theory is correct - I am arguing and judging using a very different interpretative methodology from the one popular in the practice of physics, one in which events assumed beforehand to have low probability can actually become more likely, conditional on adherence to certain criteria. In other words, I am consciously using Bayesian rather than frequentist reasoning to evaluate how likely particular theories are to be (more) correct, because I have realized that these probabilities are degrees of belief, not statistical frequencies.
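Concretely, the update rule I have in mind is just Bayes' theorem applied to a candidate theory T and a body of criteria/evidence C:

$$ P(T \mid C) = \frac{P(C \mid T)\, P(T)}{P(C)}, $$

where the prior $P(T)$ is a degree of belief, not a frequency of 'similar theories having turned out right'. A theory that looks improbable a priori can still end up with a substantial posterior if it accounts for the criteria far better than its rivals do, i.e. if $P(C \mid T)$ dominates the alternatives.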
I suspect that using frequentist reasoning to assess the likely correctness of a theory aimed at open problems with very little empirical input is highly misleading, and possibly a problematic phenomenon in itself - literally fueling the bandwagon effect among theoretical physicists. This characterization seems to apply to most big problems in the foundations of physics: among others, the problem of combining QM with GR, the measurement problem and the foundational problems of QM.
While foundational problems seem to benefit strongly from adopting a Bayesian strategy for theory construction, open problems in non-foundational physics do tend to be readily solvable using frequentist reasoning. I suspect this is precisely where most physicists' high confidence in frequentist reasoning for theory evaluation stems from: it is the only method of practical probabilistic inference they have learned in school.
That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature - though perhaps it is I who am mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long, complicated texts; it is as if subtlety is shunned or discarded at every turn in favor of explicitness. I suspect this might be because most physicists today have no training in philosophy or argumentative reasoning at all (in stark contrast to the biggest names, such as Newton, Einstein and the founders of QM).
In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM: that it goes against accepted wisdom in contemporary physics on the face of it in no way invalidates it, since it is a logically valid argument), c) treating no-go theorems possibly based on shaky assumptions as definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) to fine-tuning, on the basis of arguments of the form in c).
This is made clearer by the way you automatically generalize the validity of proofs which use concepts defined in a certain context as if they covered all contexts - seemingly purely because they are theorems - something you have no doubt learned to trust from your experience of learning and using mathematics. The following example should make this clearer: superdeterminism, superluminality and retrocausation would only necessarily be effects of fine-tuning given that causal discovery analysis is sufficient to explain the violation of Bell inequalities; Wood and Spekkens clearly state that this is false, i.e. that causal discovery analysis is insufficient to understand QM (NB: see the abstract and conclusion of this paper). Most important to understand: they aren't actually effects of fine-tuning in principle!
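For readers following along: the 'fine-tuning' in Wood-Spekkens is, as I understand it, a violation of the faithfulness (no-fine-tuning) condition used in causal discovery, which - together with the causal Markov condition - demands that the conditional independences of the observed distribution P are exactly those implied by d-separation in the causal structure G:

$$ X \perp\!\!\!\perp Y \mid Z \ \text{ in } P \iff X \text{ and } Y \text{ are d-separated by } Z \text{ in } G. $$

Their result is that no candidate causal structure reproduces the no-signalling independences of QM without violating this condition; my point is that this indicts causal discovery as a tool for understanding QM, not realist models as such.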
Furthermore, in the same paper (page 27) Wood and Spekkens are clearly trying to establish a (toy-model) definition of causality independent of temporal ordering - just like what spin network theory or causal dynamical triangulations already offer; this is known as background independence, something which, as I'm sure you are aware, Smolin has argued for for years.
And as I argued before, Hossenfelder convincingly argues that fine-tuning isn't a real problem, especially w.r.t. foundational issues. The only way to interpret the Wood-Spekkens paper as arguing against psi-ontic models is to argue against parameter fine-tuning and take the accepted wisdom of contemporary physics at face value - which can be read as applying Occam's razor. I will argue every time that using Einstein's razor is the superior strategy.
DarMM said:
The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.
I'm well aware that these ideas are problematic taken at face value, which is exactly why I selectively exclude them during preliminary theoretical modelling and when evaluating existing models using Bayesian reasoning. I should say again, though, that retrocausality is only problematic if we are referring to matter or information going backwards, not correlations; otherwise entanglement itself wouldn't be allowed either.
DarMM said:
Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
All theories derived from the Wheeler-DeWitt equation are acausal in this sense, as are models based on spin networks or twistors. I suspect that some - perhaps many - models which seem retrocausal may actually be reformulable as acausal, or worse, were acausal to begin with and simply misinterpreted as retrocausal due to some cognitive bias (a premature deferral to the accepted wisdom of contemporary physics).
Btw I'm really glad you're taking the time to answer me so thoroughly; this discussion has truly been a pleasure. My apologies if I come off as rude or offensive - I have heard that I tend to argue in a somewhat brash fashion the more passionate I get. To quote Bohr: "Every sentence I utter must be understood not as an affirmation, but as a question."