
Entanglement, Mind and Causality

  1. Dec 3, 2009 #1
    It seems to me a lot of what quantum mechanics is essentially saying is that "causality of any given event is equal and opposite to the causing event" - leading to entanglement.

    This is in some way analogous to Newton's law that "action and reaction are equal and opposite".

    What I mean is that, generally, the so-called observer that caused the experiment to happen has also been influenced in some equal and opposite way by the observed event, and is therefore inextricably "entangled" with the experiment by this causality law.

    So if you are a fan of the movie "The Matrix": the Merovingian, who laced the lady's drink with something in the Matrix world to set off a chain reaction, did not realize that perhaps he was somehow also influenced in some equal and opposite way by the event he thought he caused... who was the one really causing things to happen? Anyway, I digress a bit, pardon moi.

    This causal symmetry would then apply to any "non-conscious" systems as well that influence each other in a symmetric fashion, in equal and opposite ways.

    However, I also hypothesize that what we call "mind" or consciousness is a unique entity that is not governed by this "equal and opposite" causality law. This "mind" is able to influence other entities ("physical world" or "outside universe") without being influenced *as much* in return (asymmetric), therefore it (the mind) is uniquely ***"closed"*** from a causal perspective relative to the "universe" out there. Sort of like unidirectional or semi-unidirectional gate for the flow of causality.

    The "flow" of causality is then more from the conscious entity (or "mind" or whatever you want to call it) to the outside universe than vice versa unlike other non-conscious systems in which causality goes equally in both directions. Lack of perfect causal symmetry between the mind and outside universe then leads to lack of perfect entanglement.

    This then leads to what we call the "wave function collapse" because the mind is no longer perfectly entangled with the outside physical world and forces the physical world to behave differently than it otherwise would when it interacts with non-conscious systems.

    This causal asymmetry may somehow also lead to the perception of the flow of time...

    This concept may then allow room for the idea of "free will" since consciousness is somehow causally insulated (at least partly) from the causal chaos in the surrounding universe or multiverse whatever you want to call it.

    Mere speculation but it's something I think is worth pondering and I'm using quite general, vague ideas rather than hard specifics.
  3. Dec 4, 2009 #2
    However tempting it might be, associating various quantum physics phenomena with all kinds of mystical/metaphysical/vari-philosophical ideas and concepts is something (at least I think) we should refrain from.

    Sure, some quantum phenomena and epiphenomena have a certain "exotic" flavour to them, but still, a golden rule of thumb is: never try to pull out more than is there... this kind of associative chain works against the enrichment of knowledge, not alongside it...

    noticing some apparent similarity just isn't a good enough hint to try and mix two different things together.

    ps: I guess I'm not talking just about this particular thread but also about many other such topics I noticed on this board (although, admittedly there are some that do bring some relatively interesting points)...
  4. Dec 4, 2009 #3



    Actually there is a kernel of an important idea here. And as usual, it is about the application of dichotomies in modelling.

    What smersh is talking about is an equilibrium of interaction - a balance between events and their contexts, the local and global scale of a system.

    Consider Newton's action and reaction. The need for this idea of a world that pushes back at you becomes less mysterious once you realise that it is a book-keeping exercise to correct for the fact that the Newtonian approach breaks the world down into a tale of one-way chains of cause and effect. A force - by definition - is an action in a single direction. Yet if reality is actually woven of interactions, causality must always be mutual in some way. The supposed doer and done-to need to have the properties that allow them to engage - to create an event - in the first place.

    According to the third law, if a person pushes a door shut, the door is said to push just as hard against the person’s hand. A diagram would show two equal force vectors springing to life through the line of contact.

    However, what is really occurring is an interaction between two complex contexts. There is the door with all its material properties, such as a certain structural rigidity, a relative lightness, and a low-friction hinge. Then there is the person, with also a certain structural rigidity, a greater mass, and the ability to employ the frictional resistance of feet braced against the even greater bulk of the Earth. And so any interaction between hand and door will strike an emergent balance. In this case, the door will be seen to move a lot, the person and the Earth very little.

    Then, to simplify all this messy reality down to the kind of mathematical vectors that can easily be manipulated, Newton created the first two laws to deal with the very idea of vectors, then the third law to deal with the symmetry of interaction - the equilibrium balance that emerges even when the two interacting systems are wildly asymmetric, or local~global.
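    The door example above can be put in numbers. A minimal Python sketch (all masses and forces here are illustrative values, not from the thread): the third-law forces are exactly equal and opposite, yet the resulting motions are wildly unequal because the two "contexts" have wildly unequal inertia.

```python
# Equal and opposite forces, very unequal motion:
# the asymmetry lives in the inertia of the two interacting contexts.
F = 10.0                            # newtons: hand on door = door on hand

m_door = 5.0                        # kg, a light door
m_person_earth = 80.0 + 5.97e24     # kg, person braced against the Earth

a_door = F / m_door                 # acceleration of the door
a_person = F / m_person_earth       # acceleration of person + Earth

print(a_door)      # 2.0 m/s^2 -- the door visibly swings
print(a_person)    # ~1.7e-24 m/s^2 -- effectively nothing
```

    The symmetry (equal forces) and the asymmetry (unequal responses) coexist, which is the "emergent balance" the post describes.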

    So what about QM? Well we should arguably expect the same story to apply. Science thrives on simplifications. But the model still has to capture the messy reality.

    Now QM famously does not have the observer, the global context, as part of its formal structure. The observer has to be added in uncertain fashion as a pragmatic step in everyday use of the theory, and leads to a host of competing interpretations as to "what is really going on".

    But if the observer, the globally constraining scale, is to be added to the theory, then the Newton's third law seems a good example of the kind of way it would be done.

    This is, I believe, the way the decoherence interpretation of QM should pan out. Local QM potentials in interaction with global GR/classical contexts. It has ABSOLUTELY NOTHING to do with human consciousness or biological complexity (excuse the shouting). It has everything to do with the way classical reality is an equilibrium structure in which the bottom-up constructing push of QM is in constant dynamic balance with the top-down constraining "push" of an already decohered universe.

    At every scale of observation (by that I mean actual observation by human scientists) we will find that there is then a symmetry - a steady state balance between two quite unequal scales of action. This would in fact be a formal prediction of a dissipative structure, systems-style, approach to QM. And we do see that generally speaking, the classical world does really dominate over the quantum "weirdness" or uncertainty. Humans have to create special isolated systems - as in twin slit experiments - to get naked QM to show at all.

    Again, the mind and free will have NOTHING to do with any of this. Though you could perhaps say that human experimenters do manage to set up corners of the universe where there is LESS constraint on QM phenomena. We ease the reaction to make the action more visible.
  5. Dec 5, 2009 #4
    I think you do have a point that caution is required when dealing with the possible implications of our current knowledge of quantum physics and applying it in general to metaphysical ideas.

    That is why these ideas, as I described in the original post, are (needless to say) highly speculative, as you suggest.

    However, I think that speculation is an essential method of exploring the "phase space" or "solution space" of possible models of reality that *may* provide useful information in ruling certain ideas of this reality in or out, and completely abandoning such speculation is just as dogmatic and anti-intellectual as overvaluing it.

    There is always a middle ground of strong intellectual curiosity and the need to seek out knowledge, however strange it may be, while maintaining healthy skepticism. For example, I do not consider *anything* to have a 100% probability of being true. *Everything*, every theory, is subject to possible error.

    In my opinion, there is currently an inconsistent "mosaic" of data out there, including the current understanding of quantum physics, classical physics and mathematics, that has essentially hit a theoretical "dead end" in the last few decades, and therefore an open mind is needed to explore new ideas ***however strange they may be to current scientific dogma***. Historically, dogma has tended to be wrong and also, honestly, quite boring.

    We need more empiric data but while we wait for this data, it is harmless (and perhaps intellectually interesting to some people) to speculate on a message board about how we can fit the current "mosaic" of data into some vaguely sensible, simple but speculative model or models. The merits of any said speculation is of course highly subjective since it is not based on solid empiric data.
  6. Dec 5, 2009 #5
    I have seen your previous and current posts about this very interesting description of a sort of symmetry between local and global, top-down and bottom-up causality, and it appears to be consistent with the idea that the rules that govern this universe we are in tend to be highly if not completely symmetric.

    Intuitively, any self-consistent universe with consistent rules needs a kind of symmetry.

    However, I am not sure how this approach is able to explain the concept of "I" (considering the limitations of semantics and language, I guess "I" seems to be the most often used word for this).

    The classic quote from Descartes, which is almost a cliche: "I think, therefore I am". It sounds to many like the one statement that could be counted as fact, if anything is to be considered as fact at all. If this statement by Descartes is false, then everything else I think is either false, or everything else doesn't matter anyway, since "I" do not exist.

    By induction, one could consider that there are other "I"s or other entities that are also thinking (ie other thinking beings).

    So what is this "I" that thinks? What is this entity or process "I"? And why is this "I" seemingly "tied" to this specific physical "body" (my body) rather than for example your body?

    Let's say there is a set A (a, b, c, d, ...) of apparent entities consisting of all the "I"s (whatever these "I"s are) on this planet Earth, and a set A' (a', b', c', d', ...) of apparent entities representing the physical bodies that appear to correspond to the "I"s in the set A.

    What is the process or law that specifically says that "a" in the set A must correspond to a' in the set A'? Is this process or law deterministic or random? Is it governed by the local or the global, and why?

    Some may say the set A' does not exist (physical bodies do not exist, etc). If so, then at least the set A must exist, from Descartes's argument. What law or process distinguishes "a" from "b" or from "c" within the set A? In other words, what makes the members of the set A INTRINSICALLY DIFFERENT? Again, is this law or process deterministic or random? Is it governed by the local or the global, and why?

    Some may say that there is no distinction between members of set A, therefore there is only one member, "a". To me that is analogous to the previous dogma that the earth was the centre of the universe, an anthropocentric notion but nonetheless I suppose some may consider it valid.

    Some may say that this set A simply does not exist, and therefore by implication this "I", not even one member of the set A exists, or in some sense A is a null set. Therefore in this case, Descartes was wrong and "I" does not exist.

    So if this "I" does not exist and A is a null set, then what is the basis for determining from our logic and thinking that anything else exists, much less what the laws are that govern these things?

    I wonder what your thoughts are on these questions and how they may relate to your ideas.

    Essentially what I'm getting at is that a lot of theories do not seem to confront this idea of "I" which seems to be the STARTING POINT of any belief in anything that may exist and I think a theory is incomplete if it doesn't directly address what this "I" is.

    In my opinion any metaphysical theory has to start at this point, with what we consider most certain ("I" exist), before it addresses the entities that are less certain (empirical data, the quantum and classical physical universes, metaverses, the models and laws governing everything, etc). I think the fact that we "think" is more concrete evidence than anything else.

    I would almost propose a "level of evidence" scale for metaphysical discussion: Level 0 (Absolute truth, unknowable), Level 1 ("I think therefore I am" or similar such proof), Level 2 (Empirical data, experiments), Level 3 (Theories or models that predict experiments and empirical data, ? Qualia), etc.
    Last edited: Dec 5, 2009
  7. Dec 5, 2009 #6



    The question you have to ask yourself is whether I is the simplest thing or instead the most complex thing. Is it a fundamental aspect of reality, or is it perhaps rather the intersection of quite a variety of activities and processes?

    Well, science supports the second view, even if naive realism seems to say that I-ness is somehow the irreducible fundamental - the only thing that can't be doubted, imagined to be non-existent, etc.
  8. Dec 5, 2009 #7
    Well said. The second view seems to me too reminiscent of a return to classical reductionism: that this I-ness is a construct of smaller, less complex subunits (activities or processes), as opposed to a "thing" that exists in itself (or that at least has a strongly emergent property and exists independently).

    Also, the second view seems self-contradictory. This I-ness (the thinking process) by definition encompasses the very same logical thinking processes that we are using to come up with all our questions, our theories, our logic, our mathematics and our thoughts, etc. So if this thinking by "I" is somehow false or not real, then all the content of the thinking (all our theories and logic) is also somehow false and not real (since it cannot be proven without thought). Therefore, with the second view, ANY theory we came up with is not real, since the thinking is not real.
  9. Dec 5, 2009 #8



    Actually, it is the second view I believe in and was arguing for.

    But of course, I don't advocate classical reductionism. I advocate the appropriate approach to modelling complexity which is the systems science approach. Which would be what is now the norm in theoretical biology and theoretical neuroscience.

    Examples of classical reductionism still exist of course - evolutionary psychiatry and overly computational approaches to cognitive psychology for instance. Very popular still in US and UK especially.
  10. Dec 5, 2009 #9
    I understood you were advocating for the second view and I merely meant you stated it well, although I did not agree with it, and you have made some very interesting points.

    I think one of the problematic ideas with reductionism is that something that is complex must necessarily be less fundamental and must consist of simpler, more fundamental components. But there is no logical necessity that this is so. Something *may* be exceedingly complex yet fundamental and irreducible to any simpler components.
  11. Dec 5, 2009 #10




    Or what about reductions that go in two directions instead of just the one?

    Take water for example. An H2O molecule can be "reduced" both to its substantive components and to its global form. And I mean reduced not as "breaking something big into little pieces", but in the modelling sense of generalisation - that is, reducing the amount of information needed to model some aspect of reality.

    So we can talk about H2O as a particular example of a collection of atoms. Or we can also talk about it as a particular example of the global form we call "a liquid" (or even more generally, of course, a phase of matter, so bringing in solids, gases and plasmas).

    Does the atomic description of H2O ever tell us everything about H2O-ness? No. And equally, we need to know more about H2O than that it is a liquid and shares a form that is common to many liquids. But given both the atomic description and the global systems behaviour, we now do have a pretty good understanding of H2O molecules.

    Same with "conscious" humans. We can look to the neurons. But we should also look to the global form of brains, which for example could be modelled as an anticipatory system, an autopoietic system, a complex adaptive system - there are a few choices all saying reasonably much the same thing. These would be generalisations about the organisation of the parts, like "liquid" is a generalisation about the organisation of atoms.

    But much more complex!
  12. Dec 6, 2009 #11
    Not being a philosopher I feel unqualified to join this discussion but I did take a QM class that went through entanglement.

    My professor did not view entanglement as a reason to doubt quantum mechanics. Originally, entanglement was posed as the Achilles heel of the theory because it implied non-local signalling of events. A measurement on Earth can instantaneously inform an observer in the Andromeda galaxy. Einstein felt that signals need to travel through the intervening space and that any theory that allowed instantaneous signalling could not be correct, because it violated any rational picture of how the Universe must work. His objection was much like an application of the principle of sufficient reason.

    Not long after Einstein died, an experiment was devised to test for non-locality. It showed that non-locality was correct and that Einstein was wrong. Since then, many physicists accept entanglement as an established feature of the Universe.

    However, the real conceptual problem is not with entanglement per se but with the nature of measurement. Quantum mechanics describes the outcomes of measurements but not what a measurement is. For instance, a measurement is not a solution of the Schrödinger equation.

    The collapse of the wave function during measurement remains a mystery. There is no good fundamental theory yet of how this happens. All that we have is a phenomenological description of the outcomes of repeated measurements.
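    That "phenomenological description of the outcomes of repeated measurements" is just the Born rule. A toy Python/numpy sketch (the state and sample size are arbitrary illustrations): the theory hands you outcome probabilities |amplitude|², which match the long-run frequencies of repeated measurements, while saying nothing about the mechanism of any single collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([3, 4j]) / 5.0        # a normalized two-level state
probs = np.abs(psi)**2               # Born rule: outcome probabilities

# Simulate many repeated measurements; only the statistics are predicted.
outcomes = rng.choice([0, 1], size=100_000, p=probs)

print(probs)                         # [0.36 0.64]
print(np.mean(outcomes == 0))        # ~0.36, matching the Born-rule weight
```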

    Eugene Wigner believed that measurement involved the duality of mind and matter. Matter, QM teaches us, has the quality of non-definiteness - of a distribution of states that are simultaneously combined - whereas mind has the property of definiteness and of discrete, specific objects. These are concepts and particular perceptions. Wigner believed that when the two realities of matter and mind interact, the mind is forced by its intrinsic nature to convert the indefinite world of matter into the specific world of mind, that is, of definite, discrete ideas. This for him was a measurement. For Wigner, dualism was a theory of measurement.

    In Wigner's theory the observer doesn't really affect reality. Measurement is the interpretation of one world (matter) by another (mind) - I think.

    Naively, it seems to me that Wigner's theory is not reductionist. The wave function/matter is more complex than the observed reality, and the process of observation, which requires the mind-matter system, is actually simpler.

    Physicists today are still trying to reconcile the problem of measurement and do not entertain Wigner's theory probably because it does not take them any further.

    An important point, though, is that it is really the problem of measurement, not of entanglement, that makes QM confusing. For instance, the measurement on Earth of the momentum of a proton collapses its wave function and instantaneously provides the information to the observer in the Andromeda galaxy. So measurement contains entanglement as a special case.
  13. Dec 6, 2009 #12



    The mystery is only there if you reify (treat as a representation of reality) the wave-function.
    When you follow the orthodox Copenhagen Interpretation the wave-function is a representation of information about the system in the same sense as a classical probability distribution. That is to say it is one level deeper in abstraction than a physical representation of reality.

    Similarly, entanglement is simply a non-classically describable but purely statistical correlation between observables of two systems. The non-locality is a red herring and provably non-causal (no Bell telephones are possible). If the non-local phenomena cannot be used to send a signal, then the non-locality must be with regard to non-physical components of a model. (We don't observe wave-functions; we write wave-functions for systems which we do observe.)
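    The "no Bell telephones" point can be checked directly with elementary qubit algebra. A small numpy sketch (the singlet state and the angles are illustrative choices): whatever measurement basis Bob picks, Alice's marginal statistics are unchanged, so no signal can be read off her side.

```python
import numpy as np

def meas(theta):
    """Pair of projectors for a spin measurement along angle theta."""
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    dn = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [np.outer(v, v) for v in (up, dn)]

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def alice_marginal(theta_b):
    """P(Alice sees 'up' along angle 0), summed over Bob's outcomes at theta_b."""
    Pa = meas(0.0)[0]
    total = 0.0
    for Pb in meas(theta_b):
        total += float(singlet @ np.kron(Pa, Pb) @ singlet)
    return total

print(alice_marginal(0.0))     # 0.5
print(alice_marginal(1.234))   # 0.5 -- independent of Bob's setting
```

    The joint correlations do depend on both angles; it is only the one-sided marginals that cannot carry information.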

    Given that interpretation there is no mystery to the collapse of the wave-function as it is qualitatively no different from the collapse of a classical probability distribution upon updating assumptions about the physical system.

    The analogue I often use is the "non-local collapse" in value of lottery tickets when the drawing is made. It expresses a change in relationships subject to non-local constraints (there is only one winning ticket among all the tickets distributed over all of space).
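    The lottery analogy is simple enough to sketch in a few lines of Python (the ticket count and ticket number are arbitrary): the drawing updates the description of every ticket everywhere at once, yet nothing physical happens to any ticket.

```python
import random

n_tickets = 1_000_000
p_win_before = 1 / n_tickets          # every ticket's pre-drawing description

winner = random.randrange(n_tickets)  # the drawing is made
my_ticket = 42

# "Non-local collapse": the probability assigned to my ticket jumps to
# 0 or 1 instantly, but the ticket itself is physically untouched.
p_win_after = 1.0 if my_ticket == winner else 0.0
print(p_win_before)                   # 1e-06
print(p_win_after)                    # almost certainly 0.0
```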

    Take the example of an observation of a particle position. The assumption that you are dealing with a system of, say, exactly one proton over a large volume is the imposition of a non-local constraint. Measuring the position of that proton is a non-local measurement because it incorporates the continued constraint of, e.g., not allowing another proton to form via pair production or to enter the experimental region. When one measures the proton at position 1, one is simultaneously observing all positions, or at least observing proton fluxes across a boundary surrounding all regions in question, and hence the action is not localized. Why then should you expect your representation of your knowledge about the proton's behavior to be localized?

    As to the system not satisfying Schrödinger's equation during measurement... it is not a fundamental transgression of the SE but rather reflects the fact that a measurement process involves an interaction between system and measuring device, which must invoke an unknown Hamiltonian involving dissipative effects inherent in the measurement process. Measurement is a thermodynamically non-trivial act. The non-unitarity of the system evolution during measurement is merely a feature of the fact that during measurement the system is not isolated, but rather part of a larger entropic system-plus-measuring-device.

    Failure of unitary evolution during measurement is no different than non-conservation of energy for a system in contact with an entropy dump. One does not need to believe this non-conservation is occurring at a fundamental level, only that the usual assumption that the system is evolving in isolation is explicitly being invalidated.

    Mind you mysteries and unanswered questions remain w.r.t. quantum theory but I don't think there are any great mysteries with regard to entanglement and WF-collapse per se.
  14. Dec 6, 2009 #13
    I do not believe that there is any way to introduce a new Hamiltonian into the system to account for measurement. That is the whole point.
  15. Dec 6, 2009 #14
    It seems to me that your lottery ticket analogy begs the question of what a measurement is.

    Your arguments about the non-locality of measurement of position are not predicted by quantum mechanics, unless all you mean by this is that a particular position gives you information that all other positions are excluded. This is true of any measurement of anything. I do not see that it explains the collapse of the wave function. To do this I would think you need to have a physical theory that describes the selection of the lottery ticket. In classical physics this description exists because the theory requires that particles have positions and momenta in phase space. But in QM this is not true. The wave function evolves according to a law which is then suddenly violated in measurement.
    Last edited: Dec 6, 2009
  16. Dec 6, 2009 #15

    In classical probability, the so-called 'probability' is equal to our 'ignorance' of the values of the variables (e.g. flipping coins). This isn't the case in QM, which makes for a completely new situation with no classical counterpart: the indeterminacy of the observables is inherent. Moreover, contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables.

    Why are you asserting this? What does it explain?

    So if information cannot be sent by humans, then no causal signalling is at play? This assertion cries out for elaboration and elucidation.

    I agree with this. But in my reasoning, there is something fundamental unaccounted for, that cannot be patched up with "this is just a strong correlation", "it's just statistics", "it's just a feature of the universe we live in", etc.

    Do you mean that nonlocal signalling is not a physical process?

    The tickets always have definite properties, whether you measure them or not. You cannot claim so with quantum objects, so your example cannot be valid. As the saying goes - "Heisenberg may have been here".:smile:
    Last edited: Dec 6, 2009
  17. Dec 8, 2009 #16



    Within the classical context of ontological description. When you transition to quantum theory you transition to "praxic" probabilities.

    Remember, you can reverse this "representation of ignorance" into a "representation of limited knowledge". You are then not referring to what is potentially operationally meaningless, and are sticking to the operationally meaningful "knowledge about the system" in the form of observables and the probability of what you will observe.

    One is still making the "classicalness" assumption when one partitions degree of information about a system into "knowledge vs. ignorance" as in "knowledge vs. ignorance about the state of reality". Even in the classical paradigm this can lead to problems as with Gibbs paradox. In the quantum paradigm the two qualities gain more independence. One can still quantify ignorance in the form of entropy but without making a priori judgments about nature.

    One can make simultaneous predictions in the form of stating expectation values for all observables (including squares of the given observables and hence their variances).

    Rather one cannot simultaneously test all predictions.
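    This distinction between predicting and testing can be made concrete in numpy (the state and observables below are arbitrary illustrations): a single state assigns expectation values and variances to both X and Z at once, even though the two do not commute and no single run can measure both.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])    # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli Z

psi = np.array([np.cos(0.3), np.sin(0.3)])  # some qubit state

def expval(op):
    """Expectation value <psi|op|psi> for a real state vector."""
    return float(psi @ op @ psi)

# Simultaneous *predictions* for both observables from one state:
for name, op in [("X", X), ("Z", Z)]:
    mean = expval(op)
    var = expval(op @ op) - mean**2
    print(name, mean, var)

# ...even though the observables themselves do not commute:
print(np.allclose(X @ Z, Z @ X))   # False
```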

    The nature of entanglement. Specifically that there isn't necessarily some unobservable causal connection between actions (e.g. acts of measurement) on entangled pairs.
    So I shall elaborate. If information cannot be sent by humans, then any causal signaling you may assert is empirically meaningless, in the same sense that aether explanations of Michelson-Morley experiments are empirically meaningless. You can say "Ha! You cannot disprove the aether!" and naturally one cannot empirically verify the non-existence of the fundamentally empirically invisible. Likewise you can say "Ha! You cannot disprove causal signaling!". But such hypotheses, being inherently untestable (within the given theory), should be excised with Occam's ever-sharp scalpel.

    How many angels can dance on the head of a pin? Is there a God? There is much which is "unaccounted for" in physics. Our first task is to excise from the theories what is fundamentally unaccountable because it is empirically meaningless. There is still room for it in the language of physics, and you are welcome to speak of a particular model for what lies beneath the observable physical phenomena. E.g. it is not a priori improper to speak of modeling entangled pairs and EPR experiments with this addition of causal signaling. But the traditional use of models is to act as scaffolding for the generation of the actual theories, and once they are built this scaffolding ought to be removed.

    Observable non-local signaling is a "physical process" in that it is a testable hypothesis. Physical processes are observable processes so a "fundamentally unobservable physical process" is an oxymoron.

    Now mind you, we can play with new theories invoking more in the way of what is observable about nature. E.g. one can speak of a theory in which there are exotic means to actually perceive the aether. Likewise one could theorize about exotic means to directly observe Bohm's pilot waves. One is then stepping beyond the theory in hand. Feel free to play with alternative theories. But otherwise, while one is in the given theory, one should not make physicality assertions about what is assumed within that theory to be fundamentally unobservable.

    Yes, this isn't an example of a quantum system. Nonetheless there is still a "collapse"! This analogy is intended to show that a collapse in description needn't imply some physical seismic activity "out there". The update in our knowledge about the tickets is demonstrably non-physical (i.e. not a sudden non-local change in the nature of each ticket). Because the tickets are classical objects, we can directly assert this and it is obvious.

    Now the lotto drawing is not a measurement of the tickets. The analogy is not intended to be pushed that far. It is only a demonstration of the nature of "collapse".

    In the actual quantum case of a measurement one can first assert a specific measurement is made (a physical action) in which case one changes the system description. After a measurement is made and before one has asserted which value one has measured the proper description is a density operator diagonal in the eigen-basis of the observable in question (well diagonal assuming it is a non-degenerate observable, otherwise block diagonal...)

    The "quantum probabilities" become "classical probabilities". It is then, when you make the assertion that "the measured value was x!", that you collapse these classical probabilities to a singular certainty, and you'll note it is a classical collapse. You can then update the density operator, assert that it in fact represents a sharp description, and write the corresponding "ket" or "wave function" if you prefer that format. But if you stick to the density operator format you lose nothing physical in the description.

    In that format it is easier to break down the physical act of measurement. Clearly by interacting with our measuring device the quantum system has become entangled. Thus the joint density operator for system + device yields more entropy when we do a partial trace. This is simply decoherence of the system.
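    The decoherence step described above can be shown in a few lines of numpy (the amplitudes and the two-level "device" are illustrative): entangling the system with a device and tracing the device out kills the off-diagonal terms of the system's density operator, leaving a diagonal, classical probability distribution.

```python
import numpy as np

a, b = 0.6, 0.8
psi = np.array([a, b])                 # system state

rho_before = np.outer(psi, psi)        # pure state: coherences present
print(rho_before)
# [[0.36 0.48]
#  [0.48 0.64]]

# Entangle system with an orthogonal-pointer "device": a|0>|D0> + b|1>|D1>
joint = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])
rho_joint = np.outer(joint, joint)

# Partial trace over the device (reshape indices: system x device, twice)
rho_after = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_after)
# [[0.36 0.  ]
#  [0.   0.64]]  -- diagonal: quantum amplitudes became classical weights
```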

    Now if you are still looking for a physical collapse note that simply by changing our description we can change "where and when the collapse occurs". It is hardly reasonable to assert that a phenomenon so format dependent is actually physical. Not impossible mind you but if so, you necessarily should be able to recast exactly what is going on in a format independent way. Have at it but I suspect such an attempt with regards to collapse is futile.

    Ultimately the distinction between classical and quantum theory is not the scale of the system but the scale of the observer (measuring devices). Classically we assume the observer can be arbitrarily small so as to be affected by the system without affecting it. (The bug notices the bus without the bus noticing the bug.) Note the definition and role in classical theories of the test particle.

    We always (semantically) begin with a system description which is meaningful as our knowledge about the system. This is necessarily so given the empirical epistemology of science.

    The classical assumption is then that we can recast a representation of "our knowledge about the system" as a representation of the reality of the system. This assumption gets made implicitly and intuitively, but it gets made. Yet given the physical nature of our knowledge (it derives from a physical act of measurement, i.e. a physical interaction with the system), this assumption has physical consequences. It is a testable component of the theory, and it gets invalidated when we note that acts of measurement do not commute.
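
    The non-commutativity of measurement acts is easy to exhibit concretely. As a hedged illustration (not part of the original post), the Pauli spin operators along x and z fail to commute:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])    # spin measurement along x (Pauli X)
Z = np.array([[1, 0], [0, -1]])   # spin measurement along z (Pauli Z)

commutator = X @ Z - Z @ X
print(commutator)  # [[0, -2], [2, 0]] -- nonzero, so measurement order matters
```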

    When we "explicitize" and thence withhold this assumption and the corresponding transition to an ontological description we are still doing good science. But we must consistently stay in the realm of "system description as our knowledge about the system". Attempting to look deeper is attempting to reassert that which we so necessarily rejected.

    Thinking in terms of "the real state of the system" is as improper in this context in just the same way as is thinking in terms of "which twin is really older" in the symmetric twins paradox of SR. One can choose to define an absolute answer to the latter by asserting an absolute frame defined by the aether. But in so doing one is both rejecting the cornerstone of the theory and stepping outside the realm of science in making operationally meaningless assertions.

    QM relativizes the "absolute state of reality" in the same way that SR relativizes absolute time.
  18. Dec 8, 2009 #17
    I would love to understand your points but I find your exposition difficult. Can you simplify the language?

    here are a couple of talking points that may help direct your response.

    - In classical mechanics one can in principle follow the path of any particle although in practice this may be impossible. In QM it is in principle impossible to follow the path of a particle. So it is not really a case of ignorance or limited knowledge.

    The example of a lottery doesn't isolate this distinction. For instance, I can think of the lottery as measuring the position of a particle in continuous Brownian motion.

    Further, in Brownian motion the act of measurement does not change the stochastic process governing the path. In QM, measurement creates a jump discontinuity in the stochastic process. In Brownian motion there is no collapse of the wave function; the stochastic process of the particle is unchanged.
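
    The point that observing a Brownian path leaves its law untouched can be illustrated with a quick simulation (my own sketch; the seed and step size are arbitrary choices). Recording the position mid-path does nothing to the statistics of the later increments, which remain mean ~0 with variance ~dt:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 1000

# A discretized Brownian path: cumulative sum of Gaussian increments.
path = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))

observed_position = path[499]   # "measure" the particle mid-path

# The increments after the observation follow the same law as before it;
# looking at the path did not change the process.
later = np.diff(path[499:])
print(later.mean(), later.var())  # approximately 0 and dt = 0.01
```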

    - QM measurement can change the stochastic process of characteristics of the QM entity other than the one being measured. It can, for instance, throw a particle out of an eigenstate of another operator. Again, the lottery analogy does not cover this.

    - While I have never worked this out, I wonder what the evolution of measurement probabilities for the position operator is. In QM it seems that the real stochastic process is the evolution of amplitudes. This is a Markov-like process except that amplitudes replace probabilities. This again is not reflected in the lottery analogy.
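
    That contrast, probabilities evolving under a stochastic matrix versus amplitudes evolving under a unitary one, can be sketched in a few lines (my own toy example; the matrices are arbitrary illustrative choices):

```python
import numpy as np

# Classical Markov step: a column-stochastic matrix evolves probabilities.
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])
p = P @ np.array([1.0, 0.0])

# "Complex Markov" step: a unitary matrix evolves amplitudes instead.
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # the Hadamard gate
psi = U @ np.array([1.0, 0.0])

# Each evolution conserves its own notion of total probability:
print(p.sum())                  # ~1.0 (probabilities sum to one)
print(np.vdot(psi, psi).real)   # ~1.0 (squared amplitudes sum to one)
```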

    - One might summarize all of these points by saying that the stochastic processes of QM are different from processes where only probabilities are evolving. Another example is that interference in QM is really the superposition of two "complex Markov processes."
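
    The interference point can be made concrete with a two-path toy model (my own illustration): adding probabilities never cancels, but adding amplitudes first can.

```python
import numpy as np

# Two paths to the same detector with equal magnitude and opposite phase.
a1 = 1 / np.sqrt(2)
a2 = -1 / np.sqrt(2)

classical = abs(a1) ** 2 + abs(a2) ** 2  # probabilities add: ~1.0
quantum = abs(a1 + a2) ** 2              # amplitudes add first: 0.0 (destructive)
print(classical, quantum)
```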

    - In entanglement there is no signaling, merely collapse of the wave function. A signal would have to travel at a finite speed. One might try to mimic signaling by setting up an apparatus on Betelgeuse that detects the second electron's spin whenever it actually had a definite spin, although I am not sure that such an apparatus could in principle be constructed. Then when we measure the spin of the first electron here on Earth, the apparatus on Betelgeuse would immediately record the spin of the second electron. The problem with this seems to be that simultaneity is not an absolute concept.

    Instantaneous changes occur in classical physics even in General Relativity. The motion of a star changes the curvature of all of space continuously and instantaneously.

    - You say, 'Thinking in terms of "the real state of the system" is as improper in this context in just the same way as is thinking in terms of "which twin is really older" in the symmetric twins paradox of SR.'

    If a measurement throws a particle out of a previously measured eigenstate of another operator then you do know the state of the particle with respect to the other operator. It is a definite linear combination of eigenstates of that operator.
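
    As a concrete sketch of that last point (my own example, not from the post): prepare a spin in the +z eigenstate, then measure sigma_x. The post-measurement state is an eigenstate of sigma_x, which is a definite, known superposition of the z eigenstates.

```python
import numpy as np

up_z = np.array([1.0, 0.0])                  # +1 eigenstate of sigma_z
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)   # +1 eigenstate of sigma_x

# Projective measurement of sigma_x with outcome +1: project and renormalize.
post = x_plus * (x_plus @ up_z)
post = post / np.linalg.norm(post)

# No longer a sigma_z eigenstate, but a definite combination of them:
print(post)  # (|up_z> + |down_z>)/sqrt(2)
```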
    Last edited: Dec 8, 2009
  19. Dec 8, 2009 #18

    I don't agree with the assumption that unmeasured states have definite observables that are inaccessible to our experiments (i.e. limited knowledge of the system). That seems to be the dividing line between our viewpoints.

    If you didn't assume that particles had definite properties at all times that lie blocked from our view, you'd be hard pressed to come up with a classical-like example. In my view, a polarised photon has neither a horizontal nor a vertical polarisation until you do a measurement. I find it meaningless to talk about unmeasured events. It appears you are advocating the Bohmian approach.

    I can't decide if my confusion stems from you mixing Bohmian mechanics with QM or if I misunderstood your point.
  20. Dec 9, 2009 #19
    Why do you think this? To me this is post hoc ergo propter hoc; to assume that the UP specifies that objects are inherently undetermined until hit by photons, electrons, or some other combination of particles that we use to measure them is a logical fallacy. While I agree that the UP specifies that any observation necessitates a change in that system, and we therefore cannot know momentum and position simultaneously, I do not agree that it necessitates that reality is undetermined until interacted with, whether intentionally (observation) or not.

    Consider this thought experiment:

    Define the Universe to have one quantum system, a hydrogen atom, H. Let us assume that the observer is disregarded and that we can somehow, however implausible, know about H. We are somehow privy to the absolute, which includes that which does and does not exist.

    By your reasoning:
    - H would never emerge into existence because there are no other quantum systems other than H. (Causality is violated--reality itself, not to be confused with determinism, is somehow defined by other portions of reality? Hmm. I don't know if I can agree with this. This is like saying nothing would exist if it weren't for something else interacting with it. Please explain if you feel this is true. And what happens if a particle is utterly isolated, does it wisp out of existence? Wouldn't that violate conservation?)

    By my reasoning:
    - H is there, but could change if other quantum systems were introduced to the universe. (Causality is preserved.)

    A third reasoning (undecided):

    - H isn't there yet, but exists in some quasi-fundamental plane/state in which an interaction (observation, etc) with another quantum system will cause it to emerge into the Universe, but based deterministically on the interactions with said system(s). (Causality is preserved.)

    Looking forward to your thoughts on this.

  21. Dec 9, 2009 #20
    Following from the Stanford Encyclopedia of Philosophy entry on Bell's theorem (http://plato.stanford.edu/entries/bell-theorem/), when considering classical complementary properties like position and momentum, it is a necessary fact of QM that the two values cannot have simultaneous definite existence. The suggestion that properties actually belong to particles but are just hidden from our knowledge is a claim of local realism, which has been falsified according to experiments, QM, and the assumptions of all interpretations anywhere near the mainstream.
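
    The quantitative content of Bell's theorem can be sketched quickly. Using the standard singlet-state correlation E(a, b) = -cos(a - b) and the usual CHSH angle choices, the quantum value exceeds the bound of 2 that any local-realist model must obey:

```python
import numpy as np

# Singlet-state spin correlation between analyzer angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices that maximize the quantum violation.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2) > 2, the local-realist (CHSH) bound
```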

    This is why Bohr often referred to Heisenberg's "indeterminacy relation" rather than his "uncertainty principle." The historical use of "uncertainty" is confusing and unfortunate.
    Last edited by a moderator: May 4, 2017
