Auto-Didact
DarMM said:
As an illustration that it means initial-condition fine-tuning, the quantum equilibrium hypothesis in Bohmian Mechanics is an assumption about initial conditions. This is included in the type of fine-tuning discussed in the paper.

Ahh, now I see: you were referring to initial-condition fine-tuning all along! We are in far more agreement than the earlier discussion suggested. How controversial initial-condition fine-tuning is depends, again, on the formulation of the theory; the question is - just as with parameter fine-tuning - whether the initial conditions are determined by a dynamical process or are simply a matter of randomness, which raises issues of (un)naturalness again; this is a genuine open question at the moment.
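For concreteness, the quantum equilibrium hypothesis mentioned in the quote is the assumption that the initial configuration of a Bohmian system is distributed according to the Born rule (a standard statement of the condition, included here only for readers unfamiliar with it):

$$\rho(q, t_0) = |\psi(q, t_0)|^2,$$

which, by the equivariance of the Bohmian dynamics, guarantees ##\rho(q, t) = |\psi(q, t)|^2## at all later times and hence agreement with the predictions of standard QM. Whether this particular initial distribution is itself fine-tuned or dynamically explained (e.g. by Valentini-style relaxation towards equilibrium) is exactly the kind of question at issue here.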
Having said that, the initial conditions in question, i.e. the initial conditions of our universe, are precisely the regime where QM is expected to break down and where some deeper theory like quantum gravity seems necessary before more definitive statements can be made. The number of degrees of freedom allowed by standard QM - standard QM being time-symmetric - is far, far larger than what we actually seem to see. In particular, from CMB measurements - the CMB being a blackbody radiation curve - we can conclude that the matter and radiation were in a state of maximum entropy, i.e. effectively random, but more important to note is that there seem to have been no active gravitational degrees of freedom!
We can infer this from the entropy content of the CMB. We can therefore conclude that the initial conditions of our own universe were in fact extremely fine-tuned compared to what standard QM (due to its time-symmetry) would allow us to ascribe to maximum entropy, i.e. to randomness; this huge difference is due to there being no active gravitational degrees of freedom, i.e. a vanishing Weyl curvature. The question then is: what caused the gravitational degrees of freedom to be inactive during the Big Bang?
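To give a sense of the numbers involved (these are the rough, order-of-magnitude estimates popularized by Penrose, quoted here as an illustration rather than taken from the discussion above): the entropy of the CMB photons within the observable universe is of order ##10^{88}## to ##10^{89}## in units of ##k_B##, whereas the maximum entropy available if the same matter content were collapsed into black holes, estimated with the Bekenstein-Hawking formula

$$S_{BH} = \frac{k_B c^3 A}{4 G \hbar},$$

is of order ##10^{123}\,k_B##. The corresponding phase-space volume ratio leads to Penrose's famous fine-tuning estimate of roughly one part in ##10^{10^{123}}## for the initial state, and his Weyl curvature hypothesis posits ##C_{abcd} \to 0## at the initial singularity as the expression of the missing gravitational degrees of freedom.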
DarMM said:
The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}##, with ##\mathcal{H}## the quantum Hilbert space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.

I'm very glad to announce that this is the source of our disagreement. Spekkens has conceptually misunderstood what psi-ontological means and has therefore constructed a simplified technical model of it; his state-space formulation does not nearly exhaust all possible psi-ontological models, but only a small subset of them.
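For readers who want the standard technical statement being disputed here (this is the usual Harrigan-Spekkens formulation, included purely for reference and not as an endorsement of either side): an ontological model assigns to every preparation of a quantum state ##|\psi\rangle## a probability distribution ##\mu_\psi(\lambda)## over an ontic state space ##\Lambda##. The model is called psi-ontic if the ontic state fixes the quantum state, i.e. if distinct pure states always have non-overlapping distributions,

$$\psi \neq \phi \;\Rightarrow\; \int_\Lambda \min\big(\mu_\psi(\lambda), \mu_\phi(\lambda)\big)\, d\lambda = 0,$$

and psi-epistemic otherwise. The product form ##\Lambda = \mathcal{H} \times \mathcal{A}## quoted above corresponds to the case where the quantum state can be read off directly as one component of the ontic state.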
DarMM said:
This doesn't affect your argument, but just to let you know: it isn't Valentini's hypothesis, it goes back to Bohm; without it Bohmian Mechanics doesn't replicate QM.

Thanks for the notice!
DarMM said:
Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.

Okay, fair enough.
DarMM said:
Well it's not so much that they result from fine-tuning, but proving that they require fine-tuning. Also this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.

I know that this isn't hep-th; I'm just presuming that the anti-fine-tuning stance probably originated there and then spilled over, via physicists who began working in hep-th (or were influenced by it during their training) and later ended up working in quantum foundations.
DarMM said:
My apologies, you clearly are conducting this in good faith, my fault there.

:)
DarMM said:
What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?
...
If by the "general case" you mean "all possible physical theories", neither I nor the quantum foundations community is doing that. Most of these proofs take place in the ontological models framework or an extension thereof. So something can evade the no-go results by moving outside that framework. However, if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the ontological models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.

To avoid misunderstanding, let me restate: the premises and assumptions that go into proving this theorem (and most such no-go theorems) are not general enough to yield a theorem that holds in physics regardless of context; an example of a theorem which does hold regardless of context is the work-energy theorem. "The general case" does not refer to all possible physical theories (since that would also include blatantly false theories), but rather to all physical theories that can be consistent with experiment. A sketch of the framework in question is given below for reference.
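For reference, the reproduction condition defining the ontological models framework DarMM mentions can be sketched as follows (a standard textbook-style statement, included only so the reader knows what the no-go theorems quantify over): a preparation ##P## is represented by a distribution ##\mu_P(\lambda)## over ##\Lambda##, a measurement ##M## by response functions ##\xi(k \mid \lambda, M)##, and the model reproduces quantum theory when

$$\mathrm{Pr}(k \mid P, M) = \int_\Lambda \xi(k \mid \lambda, M)\, \mu_P(\lambda)\, d\lambda = \langle\psi_P| E_k |\psi_P\rangle$$

for every preparation, measurement and outcome, with ##E_k## the POVM element associated with outcome ##k##. The fine-tuning theorems under discussion then concern which additional constraints such models must satisfy in order to reproduce the quantum statistics.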
But as I have said above, Spekkens' definition of psi-ontology is an incorrect technical simplification. I can see where his definition comes from, but it seems to me to be a clear case of operationalizing a difficult concept into a technical definition which captures only a small subset of the instantiations of that concept, and then prematurely concluding that it captures all of them. All of this is done just in order to make concrete statements; the problem - a premature operationalization - arises when the operationalization is assumed to be comprehensive and therefore definitive, instead of tentative, i.e. a hypothesis.
These kinds of premature operationalizations of difficult concepts are rife throughout the sciences; recall the conceptual viewpoint of what was necessarily, absolutely true in geometry prior to Gauss and Lobachevsky. Von Neumann's proof against hidden-variable theories is another such example of a premature operationalization which turned out to be false in practice, as shown by Bell. Another example, by Colbeck and Renner, is empirically blatantly false, because there actually are theories which are extensions of QM with different predictions, standard QM being recovered as a limiting case, e.g. in the limit ##m \ll m_{\mathrm{Planck}}##; such theories can still be vindicated by experiment, and the issue is therefore an open question.
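One concrete example of the kind of extension presumably being alluded to here (my illustration, not necessarily the specific proposal the author has in mind) is gravitationally induced collapse of the Diosi-Penrose type, in which a superposition of two mass distributions decays on a timescale

$$\tau \approx \frac{\hbar}{E_G},$$

where ##E_G## is the gravitational self-energy of the difference between the two mass distributions. For microscopic masses ##E_G## is tiny and ##\tau## is astronomically long, so standard unitary QM is recovered in practice, while for masses approaching the Planck scale ##m_{\mathrm{Planck}}## the predicted deviation becomes, in principle, experimentally accessible.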
I do understand why physicists would (prematurely) operationalize a concept into a technical definition; hell, I do it myself all the time - this is, after all, how progress in science is made. However, here physics has much to learn from the other sciences, namely that such operationalizations are almost always insufficient or inadequate for characterizing a phenomenon or concept in full generality; this is why most sciences couch such statements in doubt and say, almost like clockwork, that more research is needed to settle the matter.
With physics, however, we often see instead an offering of a kind of (false) certainty. We saw this with Newton w.r.t. absolute space and time, with von Neumann w.r.t. hidden variables, and we see it with Colbeck and Renner above. I suspect this is due to the nature of operationalization in physics, i.e. its reliance on (advanced) mathematics. Here again physicists could learn from philosophy: mathematics - exactly like logic, which philosophers of course absolutely adore - can, precisely because of its extremely broad applicability and assumed trustworthiness, be a potent source of deception; this occurs through idealization, through simplification and, worst of all, by hiding subjectivities behind the mathematics, within the very axioms. All of these need to be controlled for as sources of cognitive bias on the part of the theorist.
I should also state that these concerns do not apply generally to the regular mathematics of physics - i.e. analysis, differential equations, geometry and so on - because the normal practice of physics, i.e. making predictions and doing experiments, does not involve constructing formal mathematical arguments by proof and axiomatic reasoning; almost all physicists working in the field can attest to this. This is also why most physicists and applied mathematicians tend to be relatively weak at axiomatic reasoning, while formal mathematicians, logicians and philosophers excel at that type of reasoning and are, conversely, often relatively weak at ordinary 'physical' reasoning.