
A Quantization isn't fundamental

  1. Nov 1, 2018 #41

    DarMM

    User Avatar
    Science Advisor

    The other usage is centuries old as well, going back to at least Gibbs and Boltzmann, and it's used in Statistical Mechanics and Cosmology as well. So both usages are prevalent in modern physics and centuries old. I don't know which is older, but I also don't see why this point matters if both are in common usage now and have been for centuries.

    You're treating this like a serious proposal; remember the context in which I brought this up. This toy model isn't intended to be a scientific advance. It's intended to show how simple it is to replicate all the features of QM except for entanglement, i.e. post-classical correlations. The model isn't even remotely realistic and is mathematically trivial, and it can still replicate them.

    The reason I brought up such toy models was to focus on the fact that things like quantised values, superposition, solving the measurement problem, etc. can be done quite easily, and this model is just the simplest such model demonstrating that (more complex ones exist).

    What isn't easy is replicating breaking of the Bell inequalities and any model that really attempts to explain QM should focus on that primarily, as the toy model (and others) show that the other features are easy.
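    As a concrete illustration of how cheap these features are, here is a little sketch in the spirit of Spekkens-style epistemically restricted models (purely illustrative, and not necessarily the exact toy model I had in mind): one classical bit's worth of hidden state plus a restriction on what can be known about it already gives you discrete outcomes, repeatability, state update and complementarity.

Python:
# Illustrative Spekkens-style "toy bit": four ontic states, but an epistemic
# restriction means any preparation or measurement only ever pins the state
# down to a pair of ontic states.
import random

EPISTEMIC_STATES = {
    "z+": {1, 2}, "z-": {3, 4},   # analogous to the two Z eigenstates
    "x+": {1, 3}, "x-": {2, 4},   # analogous to the two X eigenstates
    "y+": {1, 4}, "y-": {2, 3},   # analogous to the two Y eigenstates
}

MEASUREMENTS = {
    "Z": [{1, 2}, {3, 4}],
    "X": [{1, 3}, {2, 4}],
    "Y": [{1, 4}, {2, 3}],
}

def prepare(label):
    """Preparation: sample an ontic state uniformly from the allowed pair."""
    return random.choice(sorted(EPISTEMIC_STATES[label]))

def measure(ontic, basis):
    """Measurement: report which cell of the partition the ontic state lies in
    (a discrete, 'quantised' outcome), then disturb the ontic state by
    re-randomising it within that cell ('collapse'-like update)."""
    for outcome, cell in enumerate(MEASUREMENTS[basis]):
        if ontic in cell:
            return outcome, random.choice(sorted(cell))
    raise ValueError("invalid ontic state")

ontic = prepare("z+")               # epistemic state {1, 2}
out1, ontic = measure(ontic, "Z")   # always 0: repeatable in the same basis
out2, ontic = measure(ontic, "Z")   # still 0
out3, ontic = measure(ontic, "X")   # 0 or 1 with probability 1/2 each
print(out1, out2, out3)

    What a construction like this cannot give you is precisely entanglement-type correlations that violate the Bell inequalities.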

    There are fewer psi-epistemic models though; they are very hard to construct, especially now in light of the PBR theorem. I really don't understand this.

    I didn't present the toy model as a candidate to replace QM, but as a demonstration of how easily all non-entanglement features can be replicated.

    Again this is counter to virtually everything I've read in quantum foundations. Making Psi-Epistemic models is extremely difficult in light of the PBR theorem.

    I don't think so, again not in light of the PBR theorem.

    This is what I am saying:
    1. Replicating non-entanglement features of Quantum Mechanics is very simple as all one needs is a classical theory with an epistemic limit. The toy model presented is an example of how simple this is.
    2. Hence something that replicates QM should explain how it replicates entanglement first, as the other aspects are easy.
    3. However we already know that realist models will encounter fine-tuning from the Wood-Spekkens and Pusey-Leifer theorems.
    One of the points in my previous posts tells you that I can't give you what you're asking for here because it has been proven not to exist: all realist models require fine-tunings. That's actually one of my reasons for being skeptical regarding these sorts of models; we already know they will develop unpleasant features. People present these models as if they will escape what they don't like about Bohmian Mechanics; however, we now know that these features of Bohmian Mechanics are general to all such models.

    The only really different theories would be superdeterministic, retrocausal or Many-Worlds, but all of those have fine tunings as well.

    Acausal models might be different (i.e. where physics concerns multiscale 4D constraints), but they are truly different theories with little analysis on them as of now.
     
    Last edited: Nov 5, 2018
  2. Nov 1, 2018 #42
    It did this for the color force, here:

    [attached image]

    Why should the same not apply to "the next next level", i.e. to gravitons?
     
  3. Nov 1, 2018 #43
    The question is 'why should it?'. You seem to be reading this particular bit without controlling for your cognitive expectation bias, i.e. you are assuming that, because quantization of gravity is a standard hypothesis in many models, it is therefore also a hypothesis of this model.

    It is pretty clear that this model is compatible with either hypothesis w.r.t. gravitation. That is to say, this model is completely independent of whether or not gravity should be quantized in the same manner as the rest of the forces in physics, i.e. following the standard form of quantization from particle physics.

    This is bolstered by the fact that this is a phenomenological model, i.e. it is constructed only upon empirically observed phenomena. The form of quantization this model is attempting to explain is precisely the form known from experimental particle physics; no experiment has ever suggested that gravity is also quantized in this manner.

    Contrary to common perception, the mathematical physics and quantum gravity phenomenology literatures give, respectively, very good mathematical and empirical arguments to believe that this hypothesis is actually false to begin with. This wouldn't necessarily mean that gravitation is not quantized at all, but that if it is, it is probably not quantized in exactly the same manner as the other forces, making any conclusion that it probably is at worst completely misguided and at best highly premature, because it is non-empirical.
     
  4. Nov 1, 2018 #44

    Lord Jestocost

    User Avatar
    Gold Member

    Bell's theorem might imply that a “non-local realistic theory” could predict the correlations of measurements on entangled systems. Regarding QM, there are other options.
     
  5. Nov 1, 2018 #45
    Non-local hidden variable theories are a subset of non-local realistic theories, i.e. this discussion is moot.

    The non-locality of QM - i.e. the non-local nature of entanglement - has been in the literature since Schrodinger himself.
    Aspect concluded in 2000 that there is experimental support for the non-locality of entanglement.
    The referenced sources are:
    [45] J. S. Bell, Atomic cascade photons and quantum-mechanical nonlocality, Comm. Atom. Mol. Phys. 9, 121 (1981)

    [46] A. Aspect, Expériences basées sur les inégalités de Bell, J. Phys. Coll. C2, 940 (1981)

    [47] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W. K. Wootters, Phys. Rev. Lett. 70, 1895 (1993)
    D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Experimental quantum teleportation, Nature 390, 575 (1997)
    D. Boschi, S. Branca, F. De Martini, L. Hardy, S. Popescu, Experimental realization of teleporting an unknown pure quantum state via dual classical and Einstein-Podolsky-Rosen channels, submitted to Phys. Rev. Lett. (1997)
    A. Furusawa, J. L. Sorensen, S. L. Braunstein, C. A. Fuchs, H. J. Kimble, E. S. Polzik, Unconditional quantum teleportation, Science 282, 706 (1998)

    [48] S. Popescu, Bell's inequalities versus teleportation: what is non-locality? Phys. Rev. Lett. 72, 797 (1994)
     
    Last edited: Nov 1, 2018
  6. Nov 2, 2018 #46
    Every theory can be reproduced by a non-local model, but that doesn't mean every theory is non-local. Say you have a computer which measures the temperature once a second and outputs the difference from the previous measurement. You can build a non-local model for this phenomenon by storing the previous measurement at a remote location, which must be accessed on each iteration.

    Does that make this a non-local phenomenon? Clearly not, since you can also model it by storing the previous measurement locally. To show that QM is non-local, you need to show that it can't be reproduced with any local model, even one with multiple outcomes. Bell's theorem doesn't do that; it requires additional assumptions.
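    To make the analogy concrete, here is a minimal sketch (my own illustration, with made-up names): a "local" model and a "non-local" model of the device produce exactly the same input/output behaviour, so the mere existence of a non-local model tells you nothing about the phenomenon itself.

Python:
# Two models of the same phenomenon: a device that outputs the difference
# between the current temperature reading and the previous one.

class LocalModel:
    """Keeps the previous reading in local memory."""
    def __init__(self):
        self.prev = None

    def step(self, reading):
        diff = 0.0 if self.prev is None else reading - self.prev
        self.prev = reading
        return diff


REMOTE_STORE = {}  # stand-in for storage "at a remote location"

class NonLocalModel:
    """Fetches the previous reading from the remote store on every step."""
    def __init__(self, device_id):
        self.device_id = device_id

    def step(self, reading):
        prev = REMOTE_STORE.get(self.device_id)
        diff = 0.0 if prev is None else reading - prev
        REMOTE_STORE[self.device_id] = reading
        return diff


readings = [20.0, 20.5, 19.8, 21.1]
local, remote = LocalModel(), NonLocalModel("thermo-1")
assert [local.step(r) for r in readings] == [remote.step(r) for r in readings]
# Identical outputs: being reproducible by a non-local model does not make
# the device itself a non-local phenomenon.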

    There is a very confusing thing some physicists do which is to use the phrase "non-locality" to mean something called "Bell non-locality", which isn't the same thing at all.
     
  7. Nov 2, 2018 #47

    Lord Jestocost

    User Avatar
    Gold Member

    As Alain Aspect says (A. Aspect, “To be or not to be local,” Nature (London), 446, 866 (2007)):

    "The experimental violation of mathematical relations known as Bell’s inequalities sounded the death-knell of Einstein’s idea of ‘local realism’ in quantum mechanics. But which concept, locality or realism, is the problem?"
     
  8. Nov 2, 2018 #48

    DarMM

    User Avatar
    Science Advisor

    As I mentioned upthread, it's not really a choice between locality and realism, but between:
    1. Single Outcomes
    2. Lack of super-determinism
    3. Lack of retrocausality
    4. Presence of common causes
    5. Decorrelating Explanations (the combination of 4. and 5. is normally called Reichenbach's Common Cause principle)
    6. Relativistic causation (no interactions beyond light cone)
    You have to drop one, but locality (i.e. Relativistic Causation) and realism (Decorrelating Explanations) are only two of the possibilities.
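    For reference, the inequality whose observed violation forces this choice can be written in CHSH form (roughly stated): any theory satisfying all of the above assumptions obeys
    $$\left|E(a,b) + E(a,b') + E(a',b) - E(a',b')\right| \leq 2,$$
    whereas QM predicts, and experiment confirms, values up to ##2\sqrt{2}##.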
     
  9. Nov 2, 2018 #49
    Newton lived in the 1600s; he was literally the first classical theoretical physicist - as well as the first serious mathematical physicist - practically initiating the entire subject of physics as we know it today. Boltzmann and Gibbs lived much later (the 1800s until the early 1900s). But let's not turn this into a measuring contest any further than it already is lol.

    In any case, as I said before, if that is the standard terminology of the field, then you are correct to use it, no matter how unfortunate I or anyone else may find the terminology. The paper that you linked, however, defines fine-tuning on page 9 again exactly as parameter fine-tuning, i.e. the same definition that I am using...
    Yes, you are correct: I'm approaching the matter somewhat seriously; it is a topic I am truly passionate about and one I really want to see an answer found for. This is for multiple reasons, most importantly:

    I) following the psi-ontic literature for the last few years, I have come across a few mathematical schemes which seem to be 'sectioned off' parts of full theories. These mathematical schemes (among others, twistor theory and spin network theory) themselves aren't actually full physical theories - exactly like AdS/CFT isn't a full theory - but simply possibly useful mathematical models of particular aspects of nature based in experimental phenomenology; i.e. these schemes are typically phenomenology-based models built using very particular, not-necessarily-traditional (for physicists) mathematics.

    II) these schemes all have in common that they are - taken at face value - incomplete frameworks of full physical theories. Being based mostly in phenomenology, they therefore tend to be consistent with the range of experiments performed so far at least and yet - because of their formulation using some particular nonstandard mathematics - they seem to be capable of making predictions which agree with what is already known but might disagree with what is still unknown.

    III) to complete these theories - i.e. what needs to be added to these mathematical schemes in order to transform them into full physical theories - what tends to be required is the addition of a dynamical model which can ultimately explain some phenomenon using dynamics. QM in the psi-ontic view is precisely such a mathematical scheme which requires completion; this is incidentally what Einstein, Dirac et al. meant by saying QM - despite its empirical success - cannot be anything more than an incomplete theory and therefore ultimately provisional instead of fundamental.

    IV) there actually aren't that many psi-ontic schemes which have been combined with dynamic models transforming them into completed physical theories. Searching for the correct dynamical model - which isn't obviously incorrect (NB: much easier said than done) - given some particular scheme therefore should be a productive Bayesian strategy for identifying new promising dynamical theories and hopefully ultimately finding a more complete novel physical theory.

    I cannot stress the importance of the above points - especially points III and IV - enough; incidentally, Feynman vehemently argued for practicing theory (or at least that he himself practiced theory) in this way. This is essentially the core business of physicists looking for psi-ontic foundations of QM.
    I recently made this very argument in another thread, so I'll just repost it here: there is a larger theme in the practice of theoretical science where theoretical calculations, done using highly preliminary models of some hypothesis prior to any experiment being done or even possible, lead to very strong claims against that hypothesis.

    These strong claims against the hypothesis then often later turn out to be incorrect due to them resting on - mathematically speaking - seemingly trivial assumptions, which actually are conceptually - i.e. if understood correctly in physical terms - clearly unjustifiable. The problem is then that a hypothesis can incorrectly be discarded prematurely due to taking the predictions of toy models of said hypothesis at face value; i.e. a false positive falsification if you will.

    This seems to frequently occur when a toy model of some hypothesis is a particular kind of idealization which is actually a completely inaccurate representation of the actual hypothesis, purely due to the nature of the particular idealization itself.
    W.r.t. the large number of psi-epistemic models, scroll down and see point 1).
    It is only difficult if you want to include entanglement, i.e. non-locality. Almost all psi-epistemic models don't do this, making them trivially easy to construct. I agree that psi-ontic models, definitely after they have passed the preliminary stage, need to include entanglement.

    In either case, a general remark on these no-go theorems is in order: Remember that these "proofs" should always be approached with caution - recall how von Neumann's 'proof' literally held back progress in this very field for decades until Bohm and Bell showed that his proof was based on (wildly) unrealistic assumptions.

    The fact of the matter is that the assumptions behind the proofs of said theorems may actually be unjustified when given the correct conceptual model, invalidating their applicability as in the case of von Neumann. (NB: I have nothing against von Neumann, I might possibly even be his biggest fan on this very forum!)
    Doesn't the PBR theorem literally state that any strictly psi-epistemic interpretation of QM contradicts the predictions of QM? This implies that a psi-ontic interpretation of QM is actually a necessity! Can you please rephrase the PBR theorem in your own words?
    1) The ease of replicating QM without entanglement seems to only hold for psi-epistemic models, not for psi-ontic models.
    2) Fully agreed if we are talking about psi-epistemic models. Disagree or do not necessarily agree for psi-ontic models, especially not in the case of Manasson's model which lacks a non-local scheme.
    3) Based on this paper, the critiques from those theorems seem to apply not to realist models but to a psi-epistemic interpretation of QM.
    Even worse: even if they did apply to realist models (i.e. psi-ontic models), they would only apply to a certain subset of all possible realist models, not to all of them. To then assume based on this that all realist models are therefore unlikely is to commit the base rate fallacy; indeed, the very existence of Bohmian Mechanics makes such an argument extremely suspect.
    I understand your reservations and that it may seem strange that I seem to be arguing against what seems to be most likely. The thing is, in contrast to how most physicists seem to judge the likelihood of correctness of a theory, I am arguing and judging using a very different interpretative methodology from the one popular in the practice of physics - one in which events assumed beforehand to have low probability can actually become more likely, given conditional adherence to certain criteria. In other words, I am consciously using Bayesian reasoning - instead of frequentist reasoning - to evaluate the likelihood that particular theories are or aren't (more) correct, because I have realized that these probabilities are actually degrees of belief, not statistical frequencies.

    I suspect that approaching the likelihood of the correctness of a theory w.r.t. open problems with very little empirics using frequentist reasoning is highly misleading and possibly itself a problematic phenomenon - literally fueling the bandwagon effect among theoretical physicists. This characterization seems to apply to most big problems in the foundations of physics; among others, the problem of combining QM with GR, the measurement problem and the foundational problems of QM.

    While foundational problems seem to be able to benefit strongly from adopting a Bayesian strategy for theory construction, open problems in non-foundational physics on the other hand do tend to be easily solvable using frequentist reasoning. I suspect that this is precisely where the high confidence in frequentist reasoning w.r.t. theory evaluation among most physicists stems from: that is the only method of practical probabilistic inference they have learned in school.

    That said, going over your references as well as my own, it seems to me that you have seriously misunderstood what you have read in the literature, but perhaps I am the one who is mistaken. You definitely wouldn't be the first (I presume) physicist I have met who makes such interpretative errors when reading long, complicated texts; it is as if subtlety is somehow shunned or discarded at every turn in favor of explicitness. I suspect that this might be due to the fact that most physicists today do not have any training in philosophy or argumentative reasoning at all (in stark contrast to the biggest names such as Newton, Einstein and the founders of QM).

    In my view, you seem to be a) frequently confusing (psi-)ontology with (psi-)epistemology, b) interpreting certain extremely subtle arguments at face value and therefore incorrectly (e.g. Valentini's argument for BM: on the face of it, it goes against accepted wisdom in contemporary physics, but this in no way invalidates it; it is a logically valid argument), c) interpreting no-go theorems possibly based on shaky assumptions as actual definitive demonstrations, and d) attributing concepts deemed impossible within contemporary physics (e.g. superdeterminism, superluminality, retrocausation) as effects of fine-tuning, based on arguments of the form c).

    This is made clearer when you automatically generalize the validity of proofs using concepts defined in a certain context as if the proof covers all contexts - seemingly purely because it is a theorem - something you have no doubt learned to trust from your experience of using and learning mathematics. This should become clearer through the following example: superdeterminism, superluminality and retrocausation would only necessarily be effects of fine-tuning given that causal discovery analysis is sufficient to explain the violation of the Bell inequalities; Wood and Spekkens clearly state that this is false, i.e. that causal discovery analysis is insufficient to understand QM! (NB: see the abstract and conclusion of this paper.) Most important to understand is that they aren't actually effects of fine-tuning in principle!

    Furthermore, in the same paper (page 27), Wood and Spekkens are clearly trying to establish a (toy model) definition of causality independent of temporal ordering - just like what spin network theory or causal dynamical triangulations already offer; this is known as background independence, something which, I'm sure you are aware, Smolin has argued for for years.

    And as I argued before, Hossenfelder convincingly argues that fine-tuning isn't a real problem, especially w.r.t. foundational issues. The only way one can interpret the Wood-Spekkens paper as arguing against psi-ontic models is to argue against parameter fine-tuning and take the accepted wisdom of contemporary physics at face value - which can be interpreted as using Occam's razor. I will argue every time again that using Einstein's razor is the superior strategy.
    I'm pretty well aware of these ideas themselves being problematic taken at face value, which is exactly why I selectively exclude such ideas during preliminary theoretical modelling/evaluating existing models using Bayesian reasoning. I should say again though that retrocausality is only problematic if we are referring to matter or information, not correlation; else entanglement itself wouldn't be allowed either.
    All theories derived based on the Wheeler-deWitt equation are acausal in this sense, as are models based on spin networks or twistors. I suspect some - or perhaps many - models which seem retrocausal may actually be reformulated as acausal or worse, were actually acausal to begin with and just misinterpreted as retrocausal due to some cognitive bias (a premature deferral to accepted wisdom in contemporary physics).

    Btw I'm really glad you're taking the time to answer me so thoroughly, this discussion has truly been a pleasure. My apologies if I come off as rude/offensive, I have heard that I tend to argue in a somewhat brash fashion the more passionate I get; to quote Bohr: "Every sentence I utter must be understood not as an affirmation, but as a question."
     
  10. Nov 3, 2018 #50

    DarMM

    User Avatar
    Science Advisor

    Genuinely I really don't get this line of discussion at all. I am not saying initial condition fine-tuning is an older concept (I know when Newton or Boltzmann lived) or that in Quantum Foundations they exclusively use fine-tuning to mean initial condition fine-tuning.

    I am saying that fine-tuning has long been used to mean both in theoretical physics, and that Quantum Foundations, like many other areas, uses fine-tuning to mean both.

    In the paper I linked they explicitly mean both, as "causal parameters" includes both initial conditions and other parameters if you look at how they define it.

    I really don't understand this at all, I'm simply using a phrase the way it has been used for over a century and a half in theoretical physics. What does it matter if using it on a subset of its current referents extends back further?

    No, it says that any Psi-Epistemic model obeying the ontological framework axioms and the principle of Preparation Independence for two systems cannot model QM.
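    Roughly stated (my paraphrase of the result, not a quotation): in an ontological model each pure state preparation ##|\psi\rangle## gives a probability distribution ##\mu_\psi(\lambda)## over the ontic space; Preparation Independence for two systems means
    $$\mu_{\psi\otimes\phi}(\lambda_1,\lambda_2)=\mu_\psi(\lambda_1)\,\mu_\phi(\lambda_2),$$
    and PBR show that any such model reproducing the quantum predictions must have non-overlapping distributions,
    $$\mathrm{supp}(\mu_\psi)\cap\mathrm{supp}(\mu_\phi)=\emptyset \quad \text{for } \psi\neq\phi,$$
    i.e. it must be Psi-Ontic. It does not say that Psi-Ontic models contradict QM.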

    That's explicitly not true: coming up with Psi-Ontic models that model the non-entanglement part of QM is simple, even simpler than modelling it with Psi-Epistemic models. In fact Psi-Ontic models end up naturally replicating all of QM; you don't even have the blockade with modelling entanglement that you have with Psi-Epistemic models.

    That's not what the theorem demonstrates; it holds for both Psi-Ontic and Psi-Epistemic models. The class of models covered includes both.

    Bohmian Mechanics needs to be fine-tuned (the Quantum Equilibrium hypothesis); it is known that out-of-equilibrium Bohmian Mechanics has superluminal signalling. In the Wood-Spekkens paper they are trying to see if that kind of fine-tuning is unique to Bohmian Mechanics or a general feature of all such theories.
    It turns out to be a general feature of all Realist models. The only type they don't cover is Many-Worlds. However the Pusey-Leifer theorem then shows that Many-Worlds has fine-tuning.

    Hence all Realist models have fine-tunings.

    What one can now do is attempt to show the fine-tuning is dynamically generated, but you can't avoid the need for it.
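    For concreteness, the fine-tuning in question (the quantum equilibrium hypothesis mentioned above), roughly stated: the particle configurations are postulated to be distributed according to
    $$\rho(q,t)=|\psi(q,t)|^2,$$
    and it is only for this particular distribution that the nonlocal effects of the guidance equation
    $$\frac{dQ_k}{dt}=\frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k\psi}{\psi}\right)\bigg|_{q=Q(t)}$$
    wash out statistically and signalling becomes impossible.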

    I don't need a psychoanalysis or rating of what I do or do not understand. Tell me what I have misunderstood in the Pusey-Leifer or Wood-Spekkens papers. I've gone through the proofs and then rederived them myself to ensure I understood them, as well as seen the conclusion "All realist theories are fine-tuned" explicitly acknowledged in talks by Quantum Foundations experts like Matt Leifer.

    See point nine of this slide:
    http://mattleifer.info/wordpress/wp-content/uploads/2009/04/FQXi20160818.pdf

    It's very easy to start talking about me and my comprehension; have you read the papers in depth yourself?
     
  11. Nov 3, 2018 #51

    Fra

    User Avatar

    The discussions here were interesting, as they made me realise more how differently we all think about these foundational issues.
    In the extended meaning I used before, even the standard model as it stands encodes a realism of symmetries. And these symmetries are used as deductive constraints when we construct theories. This is the powerful method the theoretical framework of QFT rests on. But my perspective is that this power is deceitful, as the choice of constraints can be seen as a fine-tuning in theory space. So we do not only have the fine-tuning of initial conditions, we also have the fine-tuning of laws. This is a big problem I see, and dynamical fine-tunings could then not follow a timeless law, as that is the metalaw dilemma Smolin talks about.

    Instead some kind of evolution, one that does not obey dynamical LAW, seems needed. And this way of phrasing it naturally unifies initial states and the state of law. As I see it, neither of them should be identified with ontic states. So I think these realist ontologies already lead us into trouble, even if we do not involve HV realist models. So even those that reject Bohmian mechanics but embrace the theoretical paradigm of the standard model are still IMO in trouble.

    As has been mentioned already, these fine-tunings are already solved by nature, if physicists would only learn from biology. The state space in biology is not timelessly fixed; it's evolving, but not according to physical law. The one critique one can have about this at first is: so what, how can we get more predictive from this insight? That is the question I ask. And the reason why Smolin mentions his CNS is just to set an example, showing that one prediction of this insight is that we can use evolutionary traits such as survival, reproduction and self-organisation as soft sub-constraints to replace the HARD deductive constraints of timeless symmetries, and try to reconstruct the measurement theory as per this. Here the deductive machinery of an observer is necessarily an evolved inference system which is more abductive, NOT deductive. But compressed sensing also means that even the inference system itself is truncated, and when you truncate a soft inference it looks more like an exact deductive system, because you have discarded the insignificant doubts from reflection.

    The discussions on here made me realise exactly how much headache the entanglement and the nature of non-commutative observables cause. If we cannot find a conventional "realist model", we need to find another plausible common-sense way of understanding this. And I think that is possible.

    /Fredrik
     
  12. Nov 3, 2018 #52
    All I am saying is that having one phrase which can mean two distinct things is unnecessarily confusing, hence my calling it unfortunate. Based on a careful reading of that paper, it seems the newer secondary usage in the literature might even be correlated with, and therefore reducible to, the older primary usage.

    This is of course assuming that a) the usage in this paper is a secondary usage, and b) that it is typical and therefore representative of the secondary usage in the literature. If readers equivocate the effects (e.g. superluminality) with the causes (parameter fine-tuning), this would naturally lead to a correlation between these terms and an evolution of this secondary usage among scientists in this subfield.

    The same thing I just described tends to occur in many other academic fields and subfields. I suspect the same may be happening here, but of course I could just be mistaken.
    It is either you or I who is thoroughly confused - or worse, perhaps it is even both of us. This is nothing to be ashamed of in these complicated matters. These matters are immensely complicated and have literally stumped the best minds in science for over a century including Einstein himself. In no way would I even rank myself close to such individuals. Complicated mathematics such as QFT or GR calculations are trivially simple in comparison with what we are discussing here.
    As I said before, there is absolutely nothing wrong with having or requiring parameter fine-tuning in itself. This is reflected in the practice of bifurcation analysis of dynamical systems, wherein parameter fine-tuning is the essential strategy to identify the values of some parameter which lead to bifurcations in parameter space, i.e. to second-order phase transitions and related critical phenomena. Usually in physics this is done through some kind of stability criterion exactly analogous to Valentini's Quantum Equilibrium Hypothesis; Manasson does this through an extremum principle in his paper.
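    To illustrate what I mean by fine-tuning as a tool (a toy sketch of my own, nothing to do with Manasson's paper specifically): one can sweep the control parameter of the logistic map and simply look for the value at which its attractor first changes character.

Python:
# Toy illustration: "fine-tune" the control parameter r of the logistic map
# x -> r*x*(1-x) to locate its first period-doubling bifurcation (at r = 3).
def attractor_size(r, n_transient=20000, n_sample=64, tol=1e-6):
    x = 0.5
    for _ in range(n_transient):            # let the transient die out
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_sample):                # sample the attractor
        x = r * x * (1.0 - x)
        seen.add(round(x / tol) * tol)       # bin values to tolerance tol
    return len(seen)

r = 2.8
while attractor_size(r) == 1 and r < 4.0:    # crude sweep of the parameter
    r += 0.005
print(f"fixed point loses stability near r = {r:.3f}")   # expect ~3.000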

    W.r.t. these 'physically illegal' ideas - including many worlds - the possibility that novel schemes/models will result which display these features can actually be removed a priori by explicitly choosing particular mathematics which cannot model such a phenomenon and then constructing the scheme/model based on such mathematics. A theorist who realizes this can obviously take advantage of it when constructing or searching for a new model.

    The same thing can also be done in reverse, i.e. if one wants to construct a scheme or model which intrinsically has some particular conceptual property eg. non-computability, one can choose to construct such a model using non-computable mathematics, such as non-periodically tiling the plane using shapes (Penrose tiling). Any resulting scheme/model based on such mathematics will then be - if successfully constructed - inherently non-computable, i.e. fine-tuned with non-computability as a resulting effect.

    It is important to understand that concepts such as non-computability, superdeterminism, superluminality and retrocausality aren't themselves logically incoherent. They are instead 'illegal' w.r.t. our current conceptual understanding of physical theory based on experimental phenomenology; there is however absolutely no guarantee that our current conceptual understanding of fundamental physical theories will not be modified or replaced by some superior theories in the future, meaning it could turn out either way.

    It goes without saying that this is exactly what physicists working in the foundations are trying to figure out. The related issue of whether 'physically illegal' ideas (such as superdeterminism, retrocausality and superluminality) could result from some kind of parameter fine-tuning is therefore frankly speaking completely irrelevant. Just because identifying fine-tuning is a useful strategy in order to exclude ideas in the practice of high energy theoretical particle physics doesn't mean it is useful outside of that context; as Hossenfelder argued, it isn't.
    As I have said at the end of my other post I mean you no offense whatsoever. I'm just trying to analyze what may be the cause of these disagreements which are holding us back from coming to a resolution. If I'm actually wrong, I'd be extremely happy if you or anyone else could show me why using good arguments; optimistically it may even lead to a resolution of these devilish misunderstandings which have plagued this field for almost a century now, but I digress.

    Yes, I have read the papers in depth (which is why I tend to take so long to respond). It is not that there is a mistake in the argument or that you have made a mistake in reproducing the argument; I am instead saying that to generalize (using induction) the conclusion of the argument from the particular case wherein the proof is given - based on these particular assumptions and premises - to the general case isn't itself a logically valid step. This is why these no-go theorems aren't actually intratheoretical theorems of QM or even physical theory, but merely atheoretical logical theorems about QM.

    To actually make a theorem which speaks about the general case - which is what you and others seem to be trying to do - would require many more premises and assumptions, i.e. all the conceptual properties necessary for the mathematical construction of a theory of which QM would be a limiting case, given that such a thing exists; if you could construct such a theorem, it would essentially be an undiscovered law of physics.

    Essentially, this is exactly what I am trying to do: reasoning backwards from conceptual properties which have survived no-go theorems and then use nonstandard mathematics to construct a novel theory based on said remaining concepts. There is no guarantee such a strategy will work, but generally speaking it is a highly promising reasoning strategy which is often used to identify the correct mathematical description (usually in the form of differential equations) when dealing with black box systems.
     
  13. Nov 3, 2018 #53

    DarMM

    User Avatar
    Science Advisor

    As an illustration that it includes initial condition fine-tuning: the quantum equilibrium hypothesis in Bohmian Mechanics is a condition on initial conditions. This is included in the type of fine-tuning discussed in the paper.

    The proof in the paper takes place in a generalisation of the ontological models framework, defined by Spekkens himself, which explicitly includes both Psi-Ontic and Psi-Epistemic models. Psi-Ontic models are simply the case where the state space of the theory ##\Lambda## takes the form ##\Lambda = \mathcal{H} \times \mathcal{A}## with ##\mathcal{H}## the quantum Hilbert Space and ##\mathcal{A}## some other space. Psi-Epistemic theories are simply the case where it doesn't have this form.
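    For readers following along, the framework's basic requirement (roughly stated) is that each preparation ##P## gives a probability distribution ##\mu_P(\lambda)## over ##\Lambda##, each measurement ##M## gives response functions ##\xi(k|\lambda,M)##, and the quantum statistics are recovered as
    $$p(k\,|\,P,M)=\int_{\Lambda}\xi(k\,|\,\lambda,M)\,\mu_P(\lambda)\,d\lambda.$$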

    This doesn't affect your argument, but just to let you know: it isn't Valentini's hypothesis, it goes back to Bohm; without it Bohmian Mechanics doesn't replicate QM.

    Certainly, I am simply saying they must be fine-tuned. However it could be the case, for example, that the world involves fine-tuned retrocausal processes. I'm really only discussing what we can narrow down the explanation of QM to.

    Well, it's not so much that they result from fine-tuning; rather, the point is proving that they require fine-tuning. Also this isn't high-energy particle physics. Quantum Foundations deals with QM as a whole, not specifically its application to high-energy particle collisions.

    My apologies, you clearly are conducting this in good faith, my fault there. :smile:

    What specific assumptions of the ontological framework are in doubt, i.e. what assumption do you think is invalid here?

    If by the "general case" you mean "all possible physical theories" neither I nor the quantum foundations community are doing that. Most of these proofs take place in the ontological model framework or an extension there of. So something can evade the no-go results by moving outside that framework. However if a presented theory lies within it, we automatically know that it will be subject to various constraints from no-go theorems. If you aren't familiar with the onotolgical models framework I can sum it up quickly enough and perhaps you can say where your doubts are. I can also sum up some theorems that take place in one of its generalisations.
     
  14. Nov 3, 2018 #54

    Fra

    User Avatar

    From my inference perspective, the ontological framework with an ontic sample space for "the black box" seems like a very strange ansatz to start with, one that I do not see as a general ontological model of psi, except if you are secretly just trying to get behind the observational filters nature gives us in order to restore realism.

    After all, I assume the "ontic model of psi" does not really mean the same thing as a realist model. The former phrase refers to a general understanding of what psi is.

    My first objection is that it does not even seem to reflect the relational nature of things between the observer and the system. First, who says that the ontological model of psi is about an ontic space associated with the system (i.e. the black box)? It might as well reflect the observer's ontological state of information about the black box, irrespective of what is "right or wrong". Often this is ignored or called psi-epistemic, because it's easy to jump to the conclusion that this somehow involves a human observer. It could possibly also refer to the observer's actual physical state (following from its interaction history, and then it does not matter whether you label it measurement or preparation, it falls into the same category). This then coincides with an observer-Bayesian interpretation of probabilities. We need no ensembles then, just the retained information and how it has been processed. However, the notion of observer needs to be generalized beyond classical measurement devices to make this idea viable. For example, two observers can even be entangled with each other, truly making the information "hidden". There are some ways to hide information without sticking to the old realist models, I think. Information inside a black hole is also possibly hidden, yet it can be entangled with things outside the black hole. Susskind has been talking a lot about this when he used the headlines "ER = EPR" or even "GR = QM", where he argues that entanglement and the makeup of spacetime are related.

    Herein lies the distinction between the psi-epistemic view within what I think you refer to as the standard ontological models framework, and what I think of as the interpretation that the only justified "ontic states" are exactly the observer's physical state, which encodes expectations about its own environment. This "ontological model" does not, as far as I can see, fall into the "standard ontological models framework", because the whole ansatz of the ontic sample space is alien to its constructing principles.

    As I see it, a sound framework should make use only of things that are organised and recoded from possible observations; and I want to know how the ontic sample space got there in the first place, if it's not due to the secret dreams of old times. It seems to me the ontic space is not needed; it adds only confusion, doesn't it?

    So what I think is that the viable ontic model for psi that we need (not for removing MWI nightmares, but in order to make progress in unification and quantum gravity) may not be in that standard framework. So in this sense, I agree that the scope of the no-go theorems is limited. That's not to say they aren't important, of course! They are a tool to dismiss candidate theories.

    /Fredrik
     
  15. Nov 4, 2018 #55

    DarMM

    User Avatar
    Science Advisor

    Well the quantum state being a result of the system's state and the observer/measuring device's state is actually handled in the ontological models framework, via what are called the response functions.

    Plus if you actually think about it, the observer's state being a factor in determining the result doesn't help much in cases such as entanglement.
     
  16. Nov 4, 2018 #56

    Fra

    User Avatar

    How are the response functions and the structure of the ontic space supposed to be inferred (abduced) by the observer? It seems to me they aren't inferrable. So which observer is supposed to be doing this inference?

    As we know, assuming a random microstructure and then, say, applying an ergodic hypothesis is not innocent, as by the choice of conditioned partitioning you can bias your probabilistic conclusions. This is why I object to introducing probability spaces, such as a sample space, that are not constructable from the observer's perspective.

    I think QM needs an inference ontology, not a hidden space ontology.
    That depends on what you make of it, I think.

    You are right that we are getting nowhere if we just stop and say that it's all just expectations of the observer. Nothing interesting happens until we allow these observers to interact, or communicate. But this communication is a competitive game that is also about survival. This is like the difference between QM in curved space and QG. It's only when we try to account for the real backreaction of the environment to the observer's expectations that we get the real thing.

    First, I will admit the obvious: I do not yet have a ripe model, so perhaps I should just be silent. But the line of reasoning that I have in mind is this:

    The observer is interacting with its environment, and in the general case the environment is the black box.
    But what is the environment? By conjecture it's abstractly populated by fellow observers.

    And the conjecture here is that they are all understood as information processing agents that follow the same rules of inference.

    If we see the action of an observer as a guided random walk in its own state space, and the backreaction of the environment as a guided random walk in ITS state space, what we end up with are two coupled interacting information processing agents. An evolution will take place that evolves the theory implicitly encoded in both sides, and the random walk gets improved guidance as the theory evolves. If not, the agent will destabilise and give up its complexions to the environment (i.e. dissipate or radiate away).

    So in entanglement, I envision that the superposition is not seen as a property of the system (the entangled particle), but as the state of the environment. And note that we aren't just talking about Alice and Bob, but about the whole experimental setup, including slits or polarizers or whatever is in there. I.e. the whole environment encodes, and thus BACKREACTS on, the particle just as if it IS in superposition. And this is not challenged unless the entanglement is broken by a measurement. If we instead assume that the superposition is solely due to Alice and Bob's ignorance, this will give a different result, because that's not how it works. It's not about Alice and Bob's ignorance, it's about the entire environment's support of the superposition. This is not the same thing as a hidden variable.

    One can understand this conceptually by a game-theoretic analogy. As long as ALL other players are convinced about something, it does not matter if it's a lie, because the backreaction of the environment "supports the lie". In the extreme, there is no way for a player to tell a stabilized lie from reality. Ultimately this means that in the inference perspective, Boolean states are not objective. True or false are as deceptive as old-time realism.

    These ideas are what I am taking seriously, and I think that these constraints will guide us to predict WHICH information processing structures are likely to appear in this game, if we start from zero complexity and build from there. I.e. this reasoning starts at the highest possible energy at the big bang, and then we ask ourselves which mathematical inference systems are most likely to survive if we implement these ideas. And can we harvest the various known phenomenology as we reduce the temperature (and thus increase the complexity) of the observers?

    /Fredrik
     
  17. Nov 4, 2018 #57

    DarMM

    User Avatar
    Science Advisor

    It doesn't really matter; I mean, it's not as if the form of the ontic space affects Bell's theorem, does it? You have to be more crazy (in the sense of Bohr's "not crazy enough") than this to escape the ontological models framework.

    All of this makes sense, nothing wrong with it, but it falls within the ontological models framework, so it will have to be nonlocal, retrocausal, superdeterministic or involve Many-Worlds and in addition be fine-tuned. In fact from the rest of your post what you are talking about seems to me to be superdeterminism driven by environmental dynamics.
     
  18. Nov 4, 2018 #58

    Fra

    User Avatar

    We don't need details, but the main point is that the mere existence of the ontic space, together with the conditional probability measures that connect the ontic state to the epistemic state and preparation, and the response functions, contains non-trivial information about the matter. And this is what is used in the theorem.

    It's the fact that the ontic space exists, with the mentioned conditional probability measures, that encodes the information used in the theorem. If this information does not exist, the premise of the theorem is lost.

    What I suggested is that I do not see a clear justification for the ansatz. The ansatz is obvious if your mindset is tuned in on classical thinking. But if you release yourself from this and instead think of inferences, I am not sure how you can justify the ansatz.
    Surely the explanatory burden is all on me to explain my reasoning, sorry!

    But I don't see how you got this impression. Btw, the "rules of inference" I refer to are NOT deductions; I actually think of them as evolved random processes. Their non-random nature is self-organised, and not left to ad hoc fine-tunings. This should be as far from superdeterminism as you can get. As far as locality goes, what I suggest is explicitly local in information space; non-locality is possible only as evolved correlations. I will try to write more later, or we can drop it here. The main point was not to try to explain everything of this in detail anyway; I just do not see that this idea fits into the ontological models framework. I would insist that models competing with QM are by no means exhausted by that framework. To prove it explicitly, I suppose nothing less than completing it will do. So let's just leave my objection for the record ;)

    /Fredrik
     
  19. Nov 5, 2018 #59

    DarMM

    User Avatar
    Science Advisor

    A minor technical point: I would say "this is what is used in the framework", i.e. the ontic models framework in general and all proofs that take place in it, not just Bell's theorem.

    Indeed this is what is used, as the theorem attempts to constrain Realist theories. If you reject ##\Lambda##, then you have a non-Realist interpretation, which the framework isn't about. Indeed you could see the no-go results as suggesting a move to a non-Realist interpretation/model; the framework isn't meant to also argue against those.

    I think you might be missing the point of the framework as I discussed above. It's not to constrain all interpretations, just Realist ones, so the ansatz is justified as an encoding of the framework's targets, i.e. Realist theories.

    It would be like somebody setting out to see what constraints apply to discrete models and then objecting to their use of ##\mathbb{Z}^{d}##.

    I didn't understand then; I'd need something more concrete in order to say anything sensible, perhaps some mathematics.
     
  20. Nov 5, 2018 #60

    ftr

    User Avatar

    All indications are that nonlocality is the first reason for QM; local effects are byproducts, in a similar fashion to how the "virtual particles" paradigm works.
     