B Does the EPR experiment imply QM is incomplete?

  1. Sep 25, 2018 #1
    I know this has been discussed in so many ways on this forum, but it is hard for me to separate interpretations, facts, speculation, and inaccuracies. You can probably just skip the next two paragraphs, in which I describe the EPR/Aspect experiment, and go right to my question.

    In an EPR experiment like the one Alain Aspect performed, with entangled photons being measured by respective polarizers, quantum mechanics treats the experiment as a single system, and it predicts that the probability of the photons giving matching results at the two polarizers is the cosine squared of the difference between the polarizer angles.
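    To put that rule in symbols (just the standard textbook form as I understand it, nothing specific to Aspect's runs): with polarizer angles ##\theta_A## and ##\theta_B##, the predicted probability of matching outcomes is ##P_{match} = \cos^2(\theta_A - \theta_B)##, so a ##30^\circ## relative angle, for example, gives ##\cos^2(30^\circ) = 0.75##.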

    Say one of the polarizers, call it polarizer B, is far away relative to polarizer A and the photon source. Say it is so far away that, by the time photon A hits polarizer A, there is no possible way, even by instant (faster-than-light) communication, for photon B, heading towards polarizer B, to have any idea of what it might interact with in the future.

    QUESTION:
    So if all the components in the system have no idea what photon B will interact with in the future, how can QM math give the correct result?

    The only two logical possibilities that come to mind are:

    1) That QM happens to give the correct probability, but the reason it does is because there is an underlying mathematical algorithm that describes how photon A collapses some of its state to photon B when it interacts with polarizer A.
    2) That QM works because the system is actually everything in the entire universe, and somehow every particle in the universe knows the state of every other particle in the universe and can follow the rules of QM math.

    At least in my eyes, #1 seems the simplest, most likely explanation, and it is easy enough to simulate and get the correct result in this specific scenario with linearly polarized photons. Would #1 be called a collapse or Copenhagen interpretation of QM? Is this the most popular interpretation among QM experts? Do most QM experts who subscribe to this interpretation think there is a deeper realistic (i.e. based on rules of math) theory that gives an explanation for the collapse that is not just some random probability generator? If not, are there any strong reasons why this interpretation is a dead end?
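    For what it's worth, here is roughly the toy "collapse" simulation I have in mind for linearly polarized photons (nothing rigorous, just a sketch with made-up names; note that the line assigning photon B's polarization uses photon A's setting and outcome, which is exactly the instantaneous hand-off I am asking about):

```python
import math
import random

def measure_pair(angle_a, angle_b):
    """One entangled pair in the toy 'collapse' model (angles in radians).
    Returns (passed_a, passed_b)."""
    # Photon A passes or is blocked at polarizer A with probability 1/2 each.
    passed_a = random.random() < 0.5
    # 'Collapse': photon B now behaves as if polarized along A's outcome axis.
    b_axis = angle_a if passed_a else angle_a + math.pi / 2
    # Photon B then follows Malus's law at polarizer B.
    passed_b = random.random() < math.cos(angle_b - b_axis) ** 2
    return passed_a, passed_b

def match_rate(angle_a, angle_b, trials=200_000):
    """Fraction of pairs giving the same result at both polarizers."""
    return sum(a == b for a, b in
               (measure_pair(angle_a, angle_b) for _ in range(trials))) / trials

# QM predicts a match rate of cos^2(angle_a - angle_b), e.g. 0.75 at a 30 degree difference.
print(match_rate(0.0, math.radians(30)))  # ~0.75
print(math.cos(math.radians(30)) ** 2)    # 0.75
```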

    #2 seems unlikely to me. QM has very simple rules: you add probability amplitudes when a state can change in multiple ways, otherwise you multiply them; then you square the result for the probability. These simple rules seem to have a local basis, so I have a hard time equating them with the state of the entire universe. Is there a name for this type of interpretation? How popular is this interpretation with QM experts?
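    (Restating those rules in symbols, as I understand the textbook version: if a process can happen via two indistinguishable alternatives with amplitudes ##a_1## and ##a_2##, the probability is ##|a_1 + a_2|^2##; if it happens as two successive steps, the amplitudes multiply and the probability is ##|a_1 a_2|^2##.)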

    Thanks.
     
  3. Sep 25, 2018 #2
    You are making this way too complicated. Quantum mechanics gives the correct answer with no need to invoke anything more than that. The photons are anti-correlated, so there is no cause-and-effect relationship. Actually, I think what confuses most people is the relativistic aspect, in not understanding simultaneity. Since the photons are spacelike separated, their measurements cannot be time ordered, so if one were to interpret quantum mechanics as giving the EPR experiment some cause-and-effect mechanism, either that interpretation of quantum mechanics or relativity would be wrong. There does not need to be any deeper reality, and I think the belief that there must be is an attempt to understand the quantum world in classical terms, even if the classical terms are weird. In addition, I think it misses the point.

    Yeah, it's basically the way most physicists (at least the ones I have met) interpret quantum mechanics. It's a local theory.
     
  4. Sep 25, 2018 #3

    PeterDonis

    Staff: Mentor

    The fact that the Bell inequalities are violated in actual experiments means that no such algorithm is possible that does not use inputs from spacelike separated events. In other words, any such algorithm would have to include information propagating faster than light.
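    A rough way to see this numerically (a toy sketch, not from the thread or any particular textbook; names are made up): any algorithm that computes each outcome from its local polarizer setting plus a shared, pre-distributed value is bounded by the CHSH form of the Bell inequality at 2, while the quantum ##\cos^2## correlations reach ##2\sqrt{2}##.

```python
import itertools
import math

# CHSH combination: S = E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2)

def chsh_local_deterministic():
    """Best CHSH value over all local deterministic strategies: Alice pre-assigns
    an outcome (+1/-1) to each of her two settings, Bob does the same.
    Shared randomness is just a mixture of such strategies, so it cannot do better."""
    best = 0
    for a1, a2, b1, b2 in itertools.product((+1, -1), repeat=4):
        best = max(best, a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2)
    return best  # 2: the Bell/CHSH bound for any such local algorithm

def chsh_quantum():
    """CHSH value predicted by QM for polarization-entangled photons, using the
    correlation E(a, b) = cos(2*(a - b)) and the standard optimal angles."""
    E = lambda a, b: math.cos(2 * (a - b))
    a1, a2 = 0.0, math.pi / 4
    b1, b2 = math.pi / 8, -math.pi / 8
    return E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

print(chsh_local_deterministic())  # 2
print(chsh_quantum())              # ~2.828, i.e. 2*sqrt(2)
```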
     
  5. Sep 26, 2018 #4

    Boing3000

    Gold Member

    There is a deeper reality; that's why we use laboratories and not (negative) wishful thinking.
    ... do science, rather than just gazing at equations, being quite satisfied by them, and believing that they ARE reality instead of (just another) VIEW of reality.
    But there is a belief that there exists a proof of impossibility concerning some better theory. I have yet to see one (and then to understand why we haven't closed every physics university).
     
  6. Sep 26, 2018 #5

    DrClaude


    Staff: Mentor

    To cite from F. Laloë, Do We Really Understand Quantum Mechanics?
    Up to now, point 3 holds, so one of the two other assumptions must be let go of.
     
  7. Sep 26, 2018 #6

    Boing3000

    Gold Member

    That's very well put. I would add that, in such a form, those algorithms absolutely belong to the "classical" category. There is definitely nothing "weird" about them.

    I think you are contradicting your previous sentence. Those algorithms cannot possibly factor in any type of causality or change across a spacelike separation.
     
  8. Sep 26, 2018 #7
    It's not negative wishful thinking. I'm pointing out that if you take the uncertainty relations seriously, there is physics in that alone, without trying to circumvent it.
    No, I just don't believe that the microscopic reality can be described in terms we can have any physical experience with; trying to do so is just wrong-headed. Particles are not just little things with little masses that move along well defined (if bizarre) trajectories. The physics is in what they are, not what you want them to be. If the EPR photons represent a cause-and-effect phenomenon, then relativity is wrong. Take your pick.
     
  9. Sep 26, 2018 #8

    Boing3000

    Gold Member

    I haven't seen anybody, especially the OP, not taking it seriously. Quite the opposite. I have always thought that such fundamental principles were the most interesting ones to investigate, to gain better and better understanding, not just to dismiss with "it's just the way it is".

    I may have a vocabulary issue here. I think it's fine to try to circumvent other deep principles of "reality", like gravity, by gaining enough knowledge to build bridges, choppers or rockets (in more and more refined/efficient ways).

    No "terms" in physics connect to "physical experience". They connect to rational inference. A real number is no more physical than a complex number. A Hilbert Space is barely more abstract than a Galilean FoR. People deal in probabilities all the time.
    I find curious peoples that are trying to stop that process of "improving" their understanding, by whatever additional connection with experience they make (string, filed, particle, dice and what not).
    And yet I wouldn't call it "wrong headed". There are certainly plenty of reasons to consider physics as complete.

    And the OP never suggested that. So why bring it up?

    Bell has provided a way to actually exclude cause and effect. So it does not seem that QM is the problem here.
    Nor is relativity, which is not at all disproved by any EPR experiment (nor by Bell's proof).

    The overwhelming evidence indicates that non-locality is a definitive feature of the universe. And yet, so is locality. They are not mutually exclusive.
     
    Last edited: Sep 26, 2018
  10. Sep 26, 2018 #9

    PeterDonis

    Staff: Mentor

    No, I'm not. An algorithm that calculates what happens at event A using inputs from event B, which is spacelike separated from A, is implicitly assuming that information can travel from event B to event A, which would require information to travel faster than light.

    I'm not sure what you mean by this.

    I have no idea what you mean by this.
     
  11. Sep 27, 2018 #10

    Boing3000

    Gold Member

    That's more specific, so let's be even more specific (that is: representing what entanglement is): an algorithm can compute what happens at event A (or event B) using inputs that are NOT connected to any other event (that's, after all, what the wave function is).
    So any other type of input will do; it just has to be unique (in the case of entanglement) and non-local(isable) (as per Bell).

    No. That is an additional statement that does not follow. It would be true only if a change occurred at B (and only B).

    I mean that the space where every algorithm lives (the UMA architecture on top of the RAM) is the simplest thing there is: a one-dimensional "space" with not even a notion of distance. A basic set that everybody can wrap their head around.

    It means "there is a burden of implementation". Anyone that pretend that "signaling info" is identical to "sharing info" has to actually prove it by providing an "implementation" (provide some solvable formula).
    In this case you will need to add to the "algorithm" new data like positions(including time), some FoR, and trying to compute that signal FTL propagation (maybe there is a formula for tachyon?), and solve the causality issues (that are not due to FTL, but only the addition of a time coordinate).

    Algorithms can't do that, because they never evolve an "entire state", only atomic steps. And none of those steps can be swapped (they are not commutative).
     
  12. Sep 27, 2018 #11
    Certainly! If I did not say it in my original post, I completely meant to. Whatever algorithm is at work appears to act instantly (FTL) over any distance. It still confuses me when "collapse" is used; is this what everyone means by "collapse"?
     
  13. Sep 27, 2018 #12
    I am just using the math-based logic that all of physics, including quantum mechanics, is based on. I don't see how using math-based logic is making things more complicated. QM or QFT does not explain everything and is probabilistic in nature, and if I understand QFT perturbation theory to some extent (I don't really), it is more of a tool than a theory. If you believe the universe's rules are based on math, at least at the level we are discussing, then I would expect the natural instinct when encountering a probabilistic theory to be to understand the behavior of the instances that make up the probability. I assume that is the first thing physicists did when QM was first formalized. And there are many examples of instances that appear to be exactly deterministic, like orthogonal polarization behavior. If you think we should stop looking for a deeper theory now, I would like to understand what strong reasons you have for this. I obviously don't have the knowledge of QM that most on this forum have, so I am very interested in understanding these strong reasons (if they exist).

    I don't see how entanglement and FTL effects would invalidate relativity. As far as I know, their domains do not conflict. The only things that I can see invalidated at this point are some interpretations and non-math definitions of "time".

    I don't see how math based logic translates to classical thinking.

    Good to know, thanks.
     
  14. Sep 27, 2018 #13

    Boing3000

    Gold Member

    There is no such algorithm. Something that happens instantly excludes the possibility of something travelling (going from place to place at any speed, going smoothly from one value to another) from somewhere to elsewhere, because everybody (any FoR, for any observer) observes ZERO "distance" between those values. Happily, this is the default behaviour for computers that run algorithms: a (unique) value V is simply shared/referenced in X other values.

    The only algorithmic solution for keeping otherwise different object values synchronized (at all times, always) is to add two references (in each value, V1 and V2) to one another, then to use a locking mechanism (which is identical to a causality-violation barrier), rendering these two values indistinguishable from a unique value (so why bother?).

    Other, even more complicated schemes may be tried by adding a reference to A in B and vice versa, and trying to signal between them (at whatever speed you manage to simulate). In such a case you will immediately encounter algorithmic singularities (like infinite recursion), and even more complex causality-synchronization problems (running the algorithm in parallel will crash the simulation).

    Actually the problem is darn simple when observed in the most basic and classical logic framework (a sketch in code follows below):
    1. A is independent of B at any and all (space)times. A is not even "aware" of B.
    2. B is independent of A at any and all (space)times. B is not even "aware" of A.
    3. A and B have a reference to the same V (not a copy), whose location doesn't matter (non-locality), at any and all (space)times.
    4. Whenever A or B needs to test/update V (atomically), its reference to V is severed and replaced by a new value (call it collapse if you want).
    This guarantees a perfect correlation (again, not FTL, because there is never a speed of change involved), and a strict impossibility for A to signal/affect/act on B (and vice versa).
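    A minimal sketch of that scheme in toy Python (names are made up, and the Malus-law ##\cos^2## dice roll simply stands in for whatever rule each prober applies locally):

```python
import math
import random

class SharedV:
    """The shared value placeholder V. It has no idea who references it; being
    read-and-updated atomically, once per prober, is its only functionality."""
    def __init__(self, angle):
        self._angle = angle

    def read_and_update(self, updater):
        """One atomic step: hand the current value to `updater`, store its return."""
        old = self._angle
        self._angle = updater(old)
        return old

class Prober:
    """A or B: a polarizer with a local setting and a one-shot reference to V."""
    def __init__(self, setting, v):
        self.setting = setting
        self._v = v                                 # a reference to V, not a copy

    def measure(self):
        outcome = []

        def roll_and_replace(incoming):
            # Malus-law dice roll of the incoming value against the local setting.
            passed = random.random() < math.cos(self.setting - incoming) ** 2
            outcome.append(passed)
            # The value left behind in V is the axis corresponding to the outcome.
            return self.setting if passed else self.setting + math.pi / 2

        self._v.read_and_update(roll_and_replace)   # read V / roll / update V: one step
        self._v = None                              # sever the link ("collapse")
        return outcome[0]

def run_pair(angle_a, angle_b):
    v = SharedV(random.uniform(0.0, math.pi))       # unknown source polarization
    return Prober(angle_a, v).measure(), Prober(angle_b, v).measure()

# The match rate comes out near cos^2(angle_a - angle_b), e.g. ~0.75 at 30 degrees,
# even though neither prober ever holds a reference to the other one.
trials = 200_000
same = sum(a == b for a, b in (run_pair(0.0, math.radians(30)) for _ in range(trials)))
print(same / trials, math.cos(math.radians(30)) ** 2)
```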
     
  15. Sep 27, 2018 #14

    DrChinese

    Science Advisor
    Gold Member

    I don't follow. First, you say V is non-local (and location doesn't matter). OK, that part works. So you are suggesting:

    1. When A tests V to get a value, is A sending something to V? (I.e. A is testing polarization at 45 degrees, and is requesting V to provide the answer for that measurement basis.)
    2. And then I guess V is sending something back, the "answer"?
    3. And then I guess V updates itself to say "at 45 degrees, the answer is [+], and I must save that so I can answer B consistently at a later time; and, by the way, I cut off my connection to A."
    4. And when B tests V to get an answer, B sends its measurement setting and V returns something "consistent" with what it told A. If B also tests polarization at 45 degrees, the result must agree with what was told to A. If B tests at some other angle, then V uses a Cos^2(theta) rule (or whatever is appropriate for the entanglement) to get the correct QM prediction.
    5. And then V cuts off its connection to B as well. Now it is not connected to anything. Does it still exist? Because it is otherwise isolated from the rest of the universe.

    So if the above is what you are suggesting: It's non-local. And A and B are in partial communication with each other. V is simply the intermediary switching station for their communication. V is in FTL communication with both A and B. And obviously: A's request affects B's outcome, in contradiction to your last statement.

    On the other hand, if I describe the mechanism incorrectly, please correct me. :smile:
     
  16. Sep 27, 2018 #15

    PeterDonis

    Staff: Mentor

    No, that's not what the wave function is. The wave function contains information about the entire system, which includes subsystems that are measured at events spacelike separated from each other. The wave function is not local to a particular measurement event.

    How would you tell whether a "change" occurred at B and only B?
     
  17. Sep 27, 2018 #16

    DrChinese

    Science Advisor
    Gold Member

    1. Of course you can order the measurements of Alice and Bob in all reference frames. You measure them in the same inertial frame at the same place. However, you delay one of the measurements. Add a meter of fiber and voila, one precedes the other.

    2. That's strictly an interpretation dependent comment. Entangled state correlations lack a spacetime limitation or dependency. Obviously, distance between measurements can be made arbitrarily large. Further, entanglement can exist between particles that have never existed in a common light cone. Not sure how you get Einstein locality out of that, although certainly some interpretations do anyway.
     
  18. Sep 27, 2018 #17

    DarMM

    Science Advisor

    I mightn't explain this well! :smile:

    Firstly "collapse" need not be a physical process. Some interpretations see it as epistemic. In these interpretations the wavefunction is your set of predictions for the system. However when you make an observation you update your set of predictions in light of that observation. It's like rolling a dice. You initially have ##p(E) = 1/6## for each number, but when you look at the dice after the roll this collapses to ##p(E) = 1## for one number and ##p(E) = 0## for the rest.

    As for the realist question, this is really a question as to whether quantum probabilities reflect your ignorance of the properties of the particles or not.

    Realist Interpretations:
    In some interpretations (called "Realist" in Quantum Foundations) the probabilities come about because you don't know something about the particles or system you are studying or something about their interactions with you.

    In Bohmian Mechanics it's because you don't know the exact position of the particles.

    In Many Worlds it's because a measurement will split you into multiple copies of yourself, each copy corresponding to one of the multiple states the particle occupies, and in advance you don't know which one you will experience/which copy you will be.

    In the Transactional Interpretation (which has retrocausal signals) it's because you aren't aware of the configuration of the signals from the future. In any case, there is something mathematically modellable related to the particles that you don't have full certainty of.

    The general critique of these interpretations is that they require fine tuning. Bohmian Mechanics has faster-than-light signals, but for us to never see them or use them requires our ignorance of the particles' positions to mask the signals precisely. In Many-Worlds a symmetry called Operational Time Symmetry is violated (an observed symmetry in QM) and so one needs the initial state of the multiverse to be fine tuned to mask the breaking of this symmetry. The Transactional Interpretation explicitly has communication with the past, so you need some sort of effect on our equipment that masks this.

    Participatory Realist Interpretations:
    The other class of interpretations are called "AntiRealist" or "Participatory Realist". In these interpretations the probabilities aren't related to some ignorance of the system under study's properties. Rather the probabilities are a way for an "agent" to organise their interactions with the system. There can be several versions of this.

    In Copenhagen the ultimate constituents of reality are incomprehensible/ineffable. The best a classical agent like us can do is organise our dealings with such ineffable objects with the probability calculus known as Quantum Mechanics. The uncertainty relations tell us how valid it is to reason with a given classical concept (e.g. position or momentum). Heisenberg would have said that when a concept applies very well (low uncertainty), such as how the wave concept applies very well in atomic orbitals, then the ineffable stuff is somehow "more like" that concept in that situation. Bohr didn't even allow this.

    In QBism quantum mechanics is just a way an agent organises their probabilities (understood as beliefs) in a world where interactions "create a new fact". The world wasn't completely created in the Big Bang, but rather new elements/facts of reality emerge into the universe with each quantum interaction. They are random in the sense that they aren't determined by anything prior, being entirely new.

    So you can see the common element is "reality is weird in some way, and QM is the framework an agent must use to navigate such a world", rather than "QM's probabilities come from this or that aspect of the particles". The phrase "Participatory" comes about because the observation result cannot be understood in any way that removes the observer. In QBism the observer creates the result. In Copenhagen the result takes place in our classical mental framework; the observer defines the space of results.

    The general critique of these interpretations is that they in some sense go against the usual notion of a scientific understanding of phenomena, depending on who you ask.
     
  19. Sep 27, 2018 #18
    Agree, that is what I meant by instant :smile:

    Yes, this is the algorithm I more or less have in mind and have discovered in my own simulation attempts. As far as I know, for linear polarization it seems to hold up in every case. All of the amazing experiments like the "delayed choice quantum eraser" seem to follow this algorithm precisely. At least from a "realist" viewpoint, this is the only way the experiments make sense to me. Do you know of any experiments where it does not hold up?

    In step 4 of your algorithm, by "test/update" you mean an interaction with the polarizer? And the reference to the shared V is severed after the interaction (and thus photon B inherits the result of the interaction between A and polarizer A)?

    I was thinking about attempting to add circular polarization to the algorithm and trying to model the probabilities when going between linear and circular polarization, which introduces complex numbers into the QM math.
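    For reference, here is roughly how I understand the complex amplitudes would enter, using Jones vectors (one common optics convention; this is just my own sketch, not something from the thread):

```python
import numpy as np

def linear(theta):
    """Jones vector for linear polarization at angle theta (radians)."""
    return np.array([np.cos(theta), np.sin(theta)], dtype=complex)

# One common convention for the circular polarization states.
RIGHT_CIRCULAR = np.array([1.0, -1.0j]) / np.sqrt(2)
LEFT_CIRCULAR = np.array([1.0, 1.0j]) / np.sqrt(2)

def pass_probability(state, analyzer):
    """Probability of passing an analyzer: |<analyzer|state>|^2, a squared complex amplitude."""
    amplitude = np.vdot(analyzer, state)   # np.vdot conjugates its first argument
    return abs(amplitude) ** 2

# Linear polarization through a linear analyzer reproduces Malus's law (cos^2) ...
print(pass_probability(linear(np.radians(30)), linear(0.0)))      # 0.75
# ... while any linear polarization meets a circular analyzer with probability 1/2,
# which only comes out right because the amplitude is complex.
print(pass_probability(linear(np.radians(30)), RIGHT_CIRCULAR))   # 0.5
```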
     
  20. Sep 28, 2018 #19

    Boing3000

    Gold Member

    In the algorithm, the process is trivial (and happily non-local). A (or B) probes/reads/peeks the value V (V is a placeholder for a value). Then A decides for itself what it will do with value V (for example, compare it to a local polarization angle, then maybe change V's value; that is what severs the link). The trick is that probing/modifying can happen only once, and atomically. After that, A has a new reference to a copy of ##V_{after}##, while B is still referencing the old V (but updated, and equal to ##V_{after}##).
    Kind of, but V is something much simpler that has no decision-making to do. It is just a value placeholder (here, an angle).
    Oh no. That part is A's (or B's) business only. Again, V is totally unaware of A or B. Being read-and-modified by a "prober" is its only functionality.
    In a simpler way: the reason B has a correlated answer (whatever angle B is going to use) is that A has left a new value in V (corresponding to A's "eigenvalue", here 45 degrees) before severing its link with V.
    More the other way around. It is kind of important: V is truly unaware that any other object has a link to it.
    A very good question. Happily, when writing algorithms we don't have to bother with such menial tasks. Implementation-wise it will literally be garbage collected.

    But if I try to connect this purely classical, simple, information-based simulation of QM results to some physical quantity, I think V is definitely connected to conserved quantities. The fact that this spin is "nowhere" begs the question of what the spin value of the local volume of space at A and B actually is (between measurements).

    Likewise, I have always wondered what happens if you put a polarizer before the crystal. If the incident photon has a ##V_{in}##, the way a ##V_{out}## is created not only creates entanglement (a unique ##V_{out}##) but a random ##V_{out}## (even if ##V_{in}## is known).

    That's a way to put it. But what is really important is that there is no path between A and B (nor away from V). There is ##A{\rightarrow}V{\leftarrow}B##; the arrows are unidirectional (a pointer or reference).

    Correct. But no train ever leaves V of its own accord. There is no arrow away from V.

    I wish I could let this "FTL" TLA pass without reacting like crazy... but I must. There are two fundamental ways in which it is not FTL.

    1) There is no speed, no continuous evolution of values (like position). Algorithms are discrete; for all purposes, they are also quantized. Their implementation also happens to be quantized: the computer hops from (classical) state to state in discrete steps. There is nothing in between those steps. No distance, nothing. Some V is something, then is something else. B points to ##V##, then to ##V_{after}##. There is no intermediate step against which to compute a difference/delta.
    Concerning that simulation, the only critical step that needs to be "discrete protected" to perfectly match QM (for example, the same angle at A and B always gives a match) is: [A (or B) reads V, rolls the dice against V, updates V, clones V]. There are actual operations that allow for that.

    But I don't think "instantaneous" is the correct word either ("instant" refers too much to "time" for my taste); we are going to lack correct words ("value update" is not sexy). Values (information in an algorithm) always update at the same speed of 1 cycle per cycle. Algorithms are totally timeless (everything is instantaneous, no distance); values (at addresses) are spaceless (always here, no distance).
    Inventing (implementing/simulating/making up) a notion of time within the one-dimensional, distance-less "space" of an algorithm is a very complicated business that is never perfect (actually the values (information) themselves are quantized, not only the evolution).

    2) There is no communication, simply because there is no "wire" between V and "the not V". The "wires/arrows" are unidirectional. This is of utmost importance.

    Absolutely not. You get that "feeling" because, as an observer, you see A, B, V. You have arrows pointing to them. But those arrows don't allow updates. You are a helpless observer.
    Actually, I even contest the fact that B has been affected. What is affected is your own sequence of copies (as an outsider) of the evolution of the state: ##[B_{step1},S_{step1}], [B_{step2},S_{step2}],...## You choose to make those copies because you only have pointers to every value in the system, including the arrows.

    Implementation-wise, no value of B is affected before B "decides" to test-and-update V. This is my implementation of DrChinese's challenge. I defy anybody to equip B (or A) with a (local) way to detect any effect on its values :biggrin:
     
    Last edited: Sep 28, 2018
  21. Sep 28, 2018 #20

    Boing3000

    Gold Member

    I suppose that you are talking about the QM equations there, but QM equations can also be used by a computer. See below.
    Which is equivalent to what I said.
    The wave function can be queried/peeked at any time by any entity in the algorithm. The wave function is a non-local object with no dependency whatsoever on the actual included subsystems. The information has been encoded at the start of the simulation; the wave function does not live in real space.

    My words, nearly exactly (see above).

    You cannot (I have developed the point in my response to DrChinese).
    The simpler (trivial) version is that B doesn't know its initial value and can peek at it only once. So there is no way for it to detect any change.

    That's my understanding of Bell's proof, and the very reason he used the word "instantaneous".
     