
Local Realism After Bell

  1. Jan 11, 2005 #1



I will split this post so as to make it easier to reply, if needed.

    I. Background
The EPR Paradox (1935) left the scientific community divided. Some saw QM, even then a successful theory, as the likely victor in the debate over realism. But many remained unconvinced, and felt that hidden variables existed which could restore determinism in a classical fashion. Then Bell (1964) stunned the theoretical community with his paper on EPR. Afterwards, it became clear to the majority of physicists that QM could not be reconciled with the classical form of determinism, herein called local realism (LR). Bell's Theorem demonstrates that the following 3 things cannot all be true:

    i) The experimental predictions of quantum mechanics (QM) are correct in all particulars
    ii) Hidden variables exist (particle attributes really exist independently of observation)
    iii) Locality holds (a measurement at one place does not affect a measurement result at another)

QM predicts that certain LR scenarios, if they existed, would have negative likelihood of occurrence (in defiance of common sense). Any LR theory - in which ii) and iii) above are assumed to be true - will make predictions for these scenarios which are significantly different from the QM predicted values. QM does not acknowledge the existence of these scenarios, often called hidden variables (HV or LHV), so it does not have a problem with this consequence of Bell's Theorem.

Why would Bell mean the end of LR? The reason is that by 1964, QM had scored a long series of victories. Its formalism explained things that had been unimaginable, and with ever-increasing accuracy. It made prediction after prediction in advance of experiment. When the experimental technology eventually caught up, QM would be shown to be correct yet again. It had endured no significant predictive failures. So Bell seemed like a sure bet.

    Eventually, experimental technology caught up yet again, and Aspect soundly confirmed Bell in favor of QM. LR was ruled out by 9 standard deviations by 1982, and 30 by 1998. No surprise in that.

During recent years, a chorus of tenacious opponents has attempted to argue that Aspect is NOT, after all, evidence in favor of QM and against LR. They soundly reject Aspect with claims of "accidentals" and failure to achieve "fair sampling". Maybe. Let's take a look under the hood.

    II. After Bell, before Aspect
Let's roll things back to before Aspect's confirming experiments. As noted above, QM was already held in such high esteem (and why not?) that few expected to see its predictions rejected in favor of LR. After all, Bell showed that the assumption of independent reality of hidden variables led to the prediction of negative probabilities for certain outcomes in spin tests of photons in a correlated singlet state. This was a sure sign to most that these cases could not exist. So we already had the following:

    a. QM was a star theory that could do no wrong (perhaps a slight exaggeration :)
    b. QM was at odds with LR, and the two could not peacefully coexist.
    c. The QM predictions matched classical optics with its cos^2 function.
    d. There were no specific/consistent alternative predictions made by local realists.

    I could arguably stop here, and there is already plenty of evidence for QM and against LR.

    III. The Setup to be Discussed
    See elsewhere for more information on Bell tests. It is enough to state that we will refer to the same setup used in the thread "Bell's Theorem and Negative Probabilities".

In the Realistic view, we could imagine that the spin answers to polarizer settings A, B and C all exist at the same time - even if we could only measure 2 at a time. Therefore, there are 8 possible outcomes, cases [1..8] below, whose probabilities must total 100% (probability = 1). This is "common sense" and is a requirement of LR (but not QM). The permutations are:

    [1] A+ B+ C+ (and the likelihood of this is >=0)
    [2] A+ B+ C- (and the likelihood of this is >=0)
    [3] A+ B- C+ (and the likelihood of this is >=0)
    [4] A+ B- C- (and the likelihood of this is >=0)
    [5] A- B+ C+ (and the likelihood of this is >=0)
    [6] A- B+ C- (and the likelihood of this is >=0)
    [7] A- B- C+ (and the likelihood of this is >=0)
    [8] A- B- C- (and the likelihood of this is >=0)

    With "relative" settings for A, B and C of 0, 45 and 67.5 degrees respectively, we solve for 2 special cases [SC] - essentially pointed out by Bell:

SC = [3] + [6] = (X + Y - Z) / 2

where:

X = correlation between measurements at A and C, a relative difference of 67.5 degrees
Y = non-correlation between measurements at A and B, a relative difference of 45 degrees
Z = correlation between measurements at B and C, a relative difference of 22.5 degrees

    and leading to (where "QM." is prefixed to the Quantum Mechanical prediction, and "LR." is prefixed to the Local Realistic prediction):

    [QM.SC] = -.1036 (prediction per QM, if the cases existed)

    [LR.SC] >= 0 (prediction per LR, simply by assuming the cases existed)

    There ended up being significant discrepancies between the predicted values of X, Y and Z that led to [QM.SC] and [LR.SC].

QM predicts QM.X = .1464 while LR predicts* LR.X > .2500 (big difference)
QM predicts QM.Y = .5000 while LR predicts* LR.Y = .5000 (no difference)
QM predicts QM.Z = .8536 while LR predicts* LR.Z < .7500 (big difference)

* Though not specifically required, the assumptions used here are that LR.X + LR.Z = 1 and QM.Y = LR.Y; strictly, any values of LR.X, LR.Y and LR.Z are acceptable IF LR.SC = (LR.X + LR.Y - LR.Z) / 2 >= 0. Let's assume these values - the ones coming closest to the QM predictions, and therefore having the smallest discrepancies - are the predictions of LR.
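For concreteness, here is a minimal Python sketch (my own illustration - the names corr, X, Y, Z simply mirror the text above) that checks both the case-counting identity SC = [3] + [6] = (X + Y - Z) / 2 and the numbers quoted:

Code (Python):
import math
from itertools import product

def corr(theta_deg):
    """QM matching probability for entangled photons whose polarizer
    settings differ by theta_deg: the cos^2 rule."""
    return math.cos(math.radians(theta_deg)) ** 2

# The bookkeeping identity: treat each case [1..8] as a sign assignment
# for A, B, C and tally its contribution to (X + Y - Z) / 2.
cases = list(product([+1, -1], repeat=3))   # [1]..[8] in the order listed above
coeffs = []
for (a, b, c) in cases:
    x = 1 if a == c else 0                  # contributes to X (A-C match)
    y = 1 if a != b else 0                  # contributes to Y (A-B mismatch)
    z = 1 if b == c else 0                  # contributes to Z (B-C match)
    coeffs.append((x + y - z) / 2)
print(coeffs)   # 1.0 for cases [3] and [6] only, 0.0 for all others

# The numbers, with relative settings of 0, 45 and 67.5 degrees:
X = corr(67.5)        # A-C correlation
Y = 1 - corr(45.0)    # A-B non-correlation
Z = corr(22.5)        # B-C correlation
print(f"QM.SC = {(X + Y - Z) / 2:+.4f}")           # -0.1036, negative "probability"
print(f"LR.SC = {(0.25 + 0.50 - 0.75) / 2:+.4f}")  # +0.0000, the LR floor

Running it confirms that only cases [3] and [6] survive the X + Y - Z bookkeeping, and that the QM values force their combined "probability" below zero.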

    I note also that Caroline Thompson's LR model, the "Chaotic Ball" makes reference to a roughly linear function which matches the above values closely (see her page 3, below formula 1 and her figure 14).

    IV. Tests of X, Y and Z
    So all we need to do is measure X, Y and Z and we will solve the puzzle. Or do we? The QM predictions match a known optics formula: cos^2(angle). LR matches no otherwise known spin statistics. In other words, QM is working from a mathematical formalism while LR has no specific values to offer us. QM is an otherwise successful theory; LR has long been abandoned as a working tool. Why bother with any tests at all?

Scientists, being a thorough lot, tested away - of course. Through the years, Aspect and others performed tests of X, Y and Z in various forms and combinations. The results always showed substantial and ever-increasing deviations from the LR expectation values - even from the LR values with the smallest variance from QM.

    I could arguably stop here, and there is already plenty of evidence for QM and against LR.
  3. Jan 11, 2005 #2



    V. Loopholes in the Aspect and other tests
The supporters of LR have cried foul, claiming there are loopholes in the Aspect tests. If so, they have little to fall back on. A loophole in Aspect provides NO support for the LR predictions! A Pyrrhic victory, if the loopholes are valid. But let's ask a few questions first.

    Fair sampling loophole: The tested photons are not a fair sample of the universe, the photons that get counted are ones that just happen to be tilted towards the QM camp. Guess what, this has no positive implications for LR anyway. You would need to show first that a fairer sample leads to a result compatible with LR. There is no such evidence! The loophole is more theoretical - the sample could be biased, and if biased, the results could be compatible with LR (that is the argument, anyway). But it is difficult to believe that the photons know to be biased against LR in the first place.

Subtraction of accidentals loophole: Detections that should be included in the sample are ignored - essentially the reverse of the fair sampling loophole. This has no positive implications for LR either. You would again need to show first that a fairer sample leads to a result compatible with LR. There is no such evidence! The loophole is also theoretical - the sample could be biased, and if biased, the results could be compatible with LR (that is the argument, anyway). But it is difficult to believe that the photons know to be biased against LR in the first place.

    And this is the coup de grace for the loopholes: Why, with loopholes present that are alleged to be statistically significant, do we end up with the following relationships:

    LR.X + LR.LoopholeAdjustment.X = QM.X = .1464
    LR.Y + LR.LoopholeAdjustment.Y = QM.Y = .5000
    LR.Z + LR.LoopholeAdjustment.Z = QM.Z = .8536

    or more strictly:

    LR.SC + LR.LoopholeAdjustment.SC = QM.SC = -.1036

    which if we minimize the LR.LoopholeAdjustment.SC becomes:

    LR.LoopholeAdjustment.SC = QM.SC = -.1036

...with ever-increasing sampling precision, now up to 30 standard deviations! And the loophole is the QM value in its entirety!
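To make that arithmetic explicit (a toy restatement of the numbers above, nothing more - the LR values are the boundary values assumed in section III):

Code (Python):
QM = {"X": 0.1464, "Y": 0.5000, "Z": 0.8536}
LR = {"X": 0.2500, "Y": 0.5000, "Z": 0.7500}   # boundary LR values from section III

# The "loophole adjustment" each LR value needs in order to match observation:
adjust = {k: QM[k] - LR[k] for k in QM}
print(adjust)   # X: -0.1036, Y: 0.0000, Z: +0.1036

qm_sc = (QM["X"] + QM["Y"] - QM["Z"]) / 2   # -0.1036
lr_sc = (LR["X"] + LR["Y"] - LR["Z"]) / 2   #  0.0, the smallest LR allows
print(qm_sc - lr_sc)   # -0.1036: the loopholes must supply the QM value entirely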

The LoopholeAdjustments above create a lot more problems for LR than they are intended to solve. Rather than an argument for LR, they are actually an argument against LR. We accept that loopholes are possible, and could even be significant, but none has been demonstrated to have an actual specific value (merely a range of possible values). And it would need to have the exact value above to match LR anyway. Rejecting QM certainly does not imply LR is valid, as you can see from the above. The LR side actually requires an extremely tight correlation between the loopholes and the QM predictions.

It is as if one argued that the proton actually had less mass than an electron - but that every experiment showing otherwise had a loophole that brought the value back to exactly the observed one. That is not science.

    So I could arguably stop here, and there is still plenty of evidence for QM and against LR.

VI. Conclusion
    We all understand the idea that there may be experimental problems that prevent measured values from getting arbitrarily close to predictions. But no matter how far you choose to go in evaluating the evidence - stopping anywhere above, for example - it is easy to see that QM has a lot more in its favor than LR.

    In my own opinion, the mere existence of the Bell Theorem is probably enough to exclude LR, and Aspect is definitely icing on the cake.

  4. Jan 12, 2005 #3



    For Caroline Thompson


A. Please tell me the probability values you predict for correlations at 22.5, 45, and 67.5 degrees, where 0 = perfectly correlated. I estimate that you give these values (which I label X, Y and Z), per your figure 14 of The Chaotic Ball, as:

    X = .7500
    Y = .5000
    Z = .2500

    respectively, is that correct? If so, in my opinion, these would conform to the requirements of LR per Bell's Theorem. Do you agree?

    B. Please list all loopholes that apply in the measurement process described above a la Aspect, and the values of their error at the above angles.

    C. Please describe what currently hidden variables account for the differences between your figures 14-17 of the Chaotic Ball and how a physical experiment a la Aspect might confirm or deny the existence of such currently unknown variables - I believe you refer to them as "bands" and "imperfections".

(Presumably, B and C combined will fulfill the requirements of my LR.LoopholeAdjustment.SC in section V. of the post above.)

D. Please explain why the same loopholes and Chaotic Ball model you describe above (B. and C.) do not apply to other experimental tests of QM, if indeed that is the case.

    Thanks, and feel free to answer in pieces as your time allows,

  5. Jan 12, 2005 #4



Some reflections on EPR and its opponents

I would like to throw in a few reflections on EPR experiments and the mainstream attitude vs. the "local realist" defenders.

A point to make (I think in accordance with what DrChinese says) concerns a misunderstanding of how science works. Science doesn't work "by mathematical proof" which, once and for all, establishes undeniable truths; it works more like a court case, where evidence is provided pro and contra to make different theories stand out. In the end, personal and social judgement play a role.
In a way, the local realist defenders are right when they object to claims, made erroneously, that it has now been proved, once and for all, that local realism is absolutely impossible for the rest of the future of science - such a claim is an overstatement which is not scientific. Nothing is ever established "once and for all". But pointing out this rhetoric is by itself not very interesting, and what LR defenders forget to point out is that the case IS massively made in favor of the QM predictions.

There is no point in trying to establish particular LR models which have identical raw-data predictions for a certain class of EPR experiments IF THESE MODELS ARE MADE UP TO DO EXACTLY THAT. The only thing you establish that way is that the case is not 100% mathematically closed. But we knew that already. In order to be fruitful, you have to give a global physical theory from which you can derive these models, and from which you can also derive all the other, verified, predictions of quantum theory.
In that context, things like "stochastic electrodynamics" have a higher merit, but before using them to tackle EPR setups, they first have to survive more modest checks: atomic physics, particle physics, solid state physics.
This work can be interesting, for the following reason: if an equivalence for these matters IS established, maybe it gives a different view on certain problems and opens up different strategies to solve problems which actually turn out to be too hard. But given the overwhelming case in favor of QM, there's a really big chance that it won't work out.

Nevertheless, ongoing investigation of these EPR situations IS important. I think the main reason is that it indicates that one way to implement the measurement process - namely a small non-linearity which gives rise to collapse at the mesoscopic scale - now has a strong argument against it. Due to decoherence, it will be extremely difficult to maintain superposition in a small system beyond the mesoscopic scale, so "local quantum theory" doesn't allow you to distinguish decohered linear QM predictions from "true collapse models"; EPR is the way out. Indeed, two entangled photons, 50 km apart, cannot be considered to be part of a mesoscopic system!
As such, EPR is not so much an indication against local realism as an indication against a physical mesoscopic collapse of the wavefunction.
  6. Jan 12, 2005 #5
    Aspect in his PhD thesis did not claim this! He kept a fairly open mind back in 1983, immediately after his experiments.

Again and again we hear this kind of fact, but surely even someone with no statistical training can see that if the experiment is biased, a statement of its accuracy -- the precision of its estimates, made on the basis of the statistical spread of the observations -- is meaningless.

    My analysis of data from Aspect's PhD thesis shows that the subtraction of accidentals produces very severe bias, sufficient to account for the violation of the Bell tests in two of his experiments (the first and last). Though the data is available only from the first, the fact that there were more accidentals in the last means that I am pretty confident this was even more biased than the one I could analyse. See http://arXiv.org/abs/quant-ph/9903066 .

This data unfortunately was not recognised as important until I came on the scene, in about 1998. Marshall et al. in 1983 had realised that the subtraction was illegal but had not looked into the figures.

    I have no other quarrel with your description of the status quo until I come to
    Since when was a cos^2 function the same as a (cos^2 + constant) one? When you carry out the local realist calculation (for the perfect case), the usual classical assumption re Malus' Law leads to a curve that has a constant added.

    Why would you expect there to be? Every actual experiment is different. The detailed local realist predictions depend (as I explained elsewhere in this forum earlier today) on the actual relationship that replaces the ideal "Malus' Law" one.

    I'm afraid I have no time and/or interest in checking your algebra re negative probabilities. The original Bell argument, supplemented by Clauser and Horne's reasoning in Physical Review D, 10, 526-35 (1974), is enough for me.

    Aha! Even without looking at the details, this is probably where you are in error. In no actual experiment have we had 100% efficiency. There have always (other than in the Rowe et al experiment, but that does not count since we did not have sufficiently separated particles) been lots of non-detections. See my Chaotic Ball model for why this matters. (http://arxiv.org/abs/quant-ph/0210150 )

    Yes indeed. This is the standard LR prediction. It applies, though, only to the 100% efficiency case.

    Here we go again, with this claim that it matches known optics. The known optics formula you are thinking of applies to two polarisers placed one after the other, "in series". What the Bell test is concerned with is two polarisers "in parallel". The "coincidence rates" that are measured are not expected, under classical theory, to obey the same logic as when the polarisers are in series.

    Small variance with large bias nullifies any claim of significance! Yes, the results are reproducible, but what I claim is that the bias is being reproduced. The source of bias is not the same in every Bell test experiment, but I claim that one or more of the loopholes are operational in every one.

    I have left myself no time for responding to the other recent messages in this thread. I can answer one of your questions, though.

You ask if the loopholes apply in other areas of QM, and the answer is "Yes". Something that is effectively the fair sampling loophole applies in many experiments where coincidence rates are analysed. The subtraction of accidentals, though harmless in many contexts, might equally be important when coincidences are analysed. In many experiments it is assumed that the local realist model to be refuted involves the assumption that the detection rate after a polariser is proportional to cos^2(angle). This is not a necessary assumption. Relax it and the options for local realism are substantially increased.

  7. Jan 12, 2005 #6
    double slit

I don't understand why you make such an effort to argue that experiment hasn't really disproved local realism. These experiments that try to disprove local realism via the Bell inequality are, it seems, very complicated and difficult to do, and it might well be that mistakes have been made in some of them. I don't know. But as I understand things, QM has to be wrong for local realism to be right, and QM doesn't seem to be wrong.

Also, say we are to have local realism: how could such a theory explain the double slit experiment with single electrons? If local realism is right, then the electron always has a well-defined position, which implies it travels through only one of the two slits; and further, since we only allow local influences, it would have to be unaffected by whether the other slit was open or closed! We all know that the interference pattern disappears when one of the slits is closed. As I understand things now, the double slit experiment is enough for me to show that local realism is impossible.

The other thing I think about is the passage of particles through Stern-Gerlach machines. If the SG machine is open, not registering which way the particle went, then the wave function of the particle is unaffected. This seems to imply that the particle didn't pass through any one of the open paths. For if we knew it did, then the wavefunction would be collapsed.

If there exists one fundamental theory that everything can be derived from - one truth - then that theory can't have an explanation, for if it did, its explanation would have to be that theory.

Buddhists seem to be aware of this, since they feel that the universe must have existed before God. Thus the universe can't have an explanation, since we call the explanation God. That's why I feel that non-realism isn't that bad. :cool:
  8. Jan 12, 2005 #7



    As I understand it, this is a correct statement of Bell's theorem. And since I agree with the others here who have argued that the experiments are fairly conclusive in demonstrating violation of Bell's inequalities, I will simply assume premise (i). That leaves (ii) and (iii) as possibly to blame for the violation: either there are no hidden variables, or the Bell locality condition fails.

But this statement of Bell's result is also misleading, because it leaves out half of what Bell himself considered to be the full argument for his conclusion. Specifically, it leaves out the EPR argument, which showed that orthodox QM itself is either incomplete or nonlocal. That is, the EPR argument shows that either some hidden variable theory is true (i.e., QM is incomplete) or QM is nonlocal. This part of the argument is so straightforward, it's shocking that Bohr and others couldn't understand it. Two particles fly off back to back; when you measure some property on one of the particles, you immediately learn about some property on the other distant particle; if you didn't affect that distant particle by measuring the nearby particle, then it must have had that property all along; if you did affect it, that constitutes a nonlocal interaction. So there's the EPR dilemma.

    But notice what happens when you combine the EPR argument with Bell's theorem. EPR says either "completeness" or "locality" is false. Bell says either "incompleteness" or "locality" is false. It might be clarifying to rephrase the two claims like this: EPR says that the completeness assumption forces us to concede that QM is nonlocal: completeness --> nonlocality. Bell on the other hand shows that the assumption that QM is incomplete leads inevitably to the conclusion of nonlocality: incompleteness --> nonlocality.

    Obviously, when combined, these two arguments yield the inescapable conclusion of nonlocality. So it is really misleading to claim that what the Bell theorem/tests prove is the non-viability of something called "local realism". Whatever "realism" is supposed to mean here, it apparently has nothing to do with it. The EPR argument + Bell's theorem + experiment show the non-viability of locality, period. That is, any theory which is going to be consistent with experiment (i.e., with the predictions of quantum theory) is going to have to be nonlocal.

    People seem to know this about hidden variable theories -- e.g., the only way Bohm's theory is able to predict the right answers for these kinds of experiments is because of its nonlocality. But people seem unwilling to let themselves recognize that orthodox QM is equally non-local. Indeed, I think Dr. Chinese said elsewhere in this thread that he considers himself a "local non-realist." Well, evidently that means he doesn't agree with quantum mechanics, because that theory is non-local. It's right there in the collapse of the wave function, and especially obvious for EPR/Bell states: when you measure a property of one of the particles, the state attributed to the other distant particle changes. The probabilities for various outcomes of various possible measurements on that distant particle are different after the nearby measurement than they were before it. But that is *precisely* the kind of thing Bell's locality condition forbids.

    Put slightly differently, what I am saying is that QM itself violates "Bell locality" (the factorizability condition Bell used in deriving the inequalities). And it is simply inconsistent, therefore, to dismiss hidden variable theories on the grounds that they must violate this condition, unless one is also willing to dismiss orthodox QM on the same grounds, for it also violates the condition. Of course, dismissing both of those options leaves one in a pickle -- it would then be impossible to find *any* theory to account for the results. So the correct option is to recognize this, and to acknowledge that one's theory will have to include nonlocality because nature is nonlocal. And then it becomes far less clear whether one should prefer standard QM or something like Bohm's theory.

    Of course, I agree with DrChinese that the evidence against local hidden variable theories is overwhelming, if not 100% airtight. So what I am objecting to here is specifically the implication that we learn something about hidden variable theories more generally (or "realism" or whatever) from this whole issue. We don't. We learn that nature is non-local, and that's it.

    I hope that is clarifying or at least provocative enough to generate some discussion. =)

  9. Jan 12, 2005 #8



    That's easy. Bohm's theory for a single electron is perfectly local, and has no trouble explaining the 2 slit experiment.

  10. Jan 12, 2005 #9



    The part which is obviously non-local in quantum theory is the collapse of an entangled state because that influences space-like separated subsystems.
    But you don't need to collapse that state, if you can accept something which is close to "many-worlds".
    Imagine the entangled state to be:
    |+>|-> - |->|+>

    Now consider that Alice measures the first subsystem, and Bob the second, 2 miles away. If we say that their "measurement" is just an entanglement, we get our Alice and Bob in an entangled state:
    |A+>|B-> - |A->|B+>

But there's no way to find out yet!
    They have to travel to my place, and then I can measure their states (which correspond to their measurement results). I can only look at both of their results when they are "local" to me. The result of this measurement comes from the interference of the different |A>|B> terms, locally.

    So EPR situations can be considered local, if you accept that measurements can only occur in one single place (yours!), and that "far away done measurements" are not collapsing measurements but only entanglements of the subsystems with the measurement result carrier.
  11. Jan 12, 2005 #10



    Yes, that's true. You can avoid the conclusion that nature is nonlocal by denying that experiments (including, by the way, all of the experiments which made us believe in quantum mechanics in the first place) have definite outcomes (including the specific outcomes we erroneously thought they had which made us erroneously believe in quantum mechanics).

    MWI is not just a clever way to avoid having a nonlocal theory; it has implications which radically undermine virtually everything that humans have ever believed. I'm happy to see it brought into the discussion, though: it demonstrates the lengths one must go to to avoid the conclusion I argued for earlier. Personally, I think this strengthens my case: if the next best option (after admitting that nature is nonlocal) is to believe that nature is nothing like what we have always thought and to believe that literally everything you believe (including this??) is a delusion, it starts to make it seem pretty reasonable (which of course it is) to just admit nonlocality and get on with life.

  12. Jan 12, 2005 #11



That's too cheap a remark :smile:
I'm convinced that MWI, as it stands, is wrong. However, I'm saying that you have to apply the projection postulate at the very end of the processing chain (which is always your conscious observation). So I do not deny that experiments have definite outcomes, once I have personally observed them. I only consider the possibility that outcomes of experiments far away can still exist in QM superpositions. I agree that the solipsist part of this explanation is uncomfortable, but philosophically it is not impossible.

I don't have profound conceptual problems with non-locality. The problem I have with it is that nature seems to be only half-heartedly nonlocal, in that this non-locality can never be used to transmit information. If nature were truly non-local, then why is it impossible in principle to make a faster-than-light telephone?

If you accept EPR results, then there's one thing which is for sure: entangled superpositions of particles can exist over huge distances. So we should take the superposition principle very seriously. There's still a struggle, however, over what exactly this projection postulate is. Decoherence theory shows us that in most cases it doesn't matter where we put it, as long as we put it between the system at hand and "observation". But what IS the ultimate observation? I say that in EPR experiments, the ultimate observation is the observation of the correlations. If the price to pay is that machines, people and so on walk around in superposed states, then so be it. It is undetectable... except in EPR experiments!

Although this explanation has obvious "common sense" problems, it has one big advantage: apart from being fully compatible with everything QM has ever predicted, it keeps locality (not for locality's sake) together with EPR, and as such indicates why you will not be able to make a faster-than-light telephone.
It is nothing but a daring, macroscopic measurement-apparatus version of the 2-slit experiment: you need "nonlocality" to explain the "correlation" (the interference pattern observed by the detector) if you want the particle to go through slit 1 or slit 2 alone, but you don't have that problem if you consider the "correlation" to be the result of interference of the two partial waves coming together again.
In the EPR case, we don't let partial waves of a particle interfere; we let "measurement result signals" interfere in the device that calculates the correlations.

    As I said, this is my pet-interpretation of EPR, but 1) I like it and 2) it is in no contradiction with any result.
  13. Jan 12, 2005 #12



Patrick, as you say, this gets you into "entangled cat" paradoxes at the macroscopic level, and I am not sure decoherence can get you out. Here's a question: say we have two entangled states that are each 1 kilometer long; can they (without observation or projection) become entangled and make a state 2 kilometers long?

    I guess what I am asking is, is entanglement transitive at the quantum state level.
  14. Jan 12, 2005 #13
Is it the de Broglie-Bohm theory you refer to? If so, I got the impression when reading a little about it that it contains a quantum potential which is, in turn, all-knowing, which would imply the theory isn't local. Please correct me if I'm wrong.
  15. Jan 12, 2005 #14



    Yes, the de Broglie - Bohm (dBB) theory, aka "Bohm's theory" or "Bohmian Mechanics" or whatever.

    It's true, in the general N-particle case, that the quantum potential (if you choose to even formulate the theory that way, which you don't actually have to) "is all knowing", i.e., the force that potential exerts on a given particle depends on the instantaneous positions of all the other particles in the N-particle system. No doubt that means the theory is nonlocal.

    But in the one particle case, the force just depends on the quantum potential at the point of the particle in question, just like the classical (say) electric force on a particle depends on the electric field at the particle's current location. So it's not nonlocal in that case.
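Here is a minimal numerical sketch of that last point (my own illustration, in natural units hbar = m = 1; the function name guidance_velocity is just a label I chose). The velocity of the particle is computed from the wavefunction at the particle's own position and nothing else:

Code (Python):
import numpy as np

hbar, m = 1.0, 1.0   # natural units

def guidance_velocity(psi, dx):
    """One-particle de Broglie-Bohm guidance law, v = (hbar/m) Im(psi'/psi).
    The velocity at each point depends only on psi at that point -- which is
    why the one-particle dynamics is local."""
    dpsi = np.gradient(psi, dx)
    return (hbar / m) * np.imag(dpsi / psi)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
k = 2.0
psi = np.exp(1j * k * x)       # plane wave of momentum hbar*k
v = guidance_velocity(psi, dx)
print(v[1000])                  # ~ hbar*k/m = 2.0, uniform, as expected

In the N-particle case the analogous formula takes the gradient with respect to particle i's coordinates but evaluates it at the full configuration (x1, ..., xN), so each particle's velocity depends on where all the others are - and the locality is gone.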

    Check out S. Goldstein's article on Bohm's theory for more information:


    By the way, I don't want to make a big deal over whether or not Bohm's theory is local in the one-particle case. Who really cares, after all? There's no denying its nonlocality in the general case, so it's probably just fine to say, as you did, that "the theory isn't local" and leave it at that. The point I am more concerned to make is that this cannot be used as an argument against the theory. *Any* theory that reproduces the quantum predictions, QM itself included, has to be nonlocal. That's what the combined EPR-Bell analysis proves. And, paraphrasing Bell, it is to the credit of the Bohmian theory for helping to bring this out.

  16. Jan 12, 2005 #15
    "Decoherence is the theory of universal entanglement" in the words of Erich Joos, one of the theory's pioneers. So I guess the entire universe is the limit to what can become entangled.
  17. Jan 12, 2005 #16
    Einstein Was Right

    The Sept 04 issue of Scientific American is a special issue titled "Beyond Einstein". One of the articles is "Was Einstein Right?" which deals with that question in relation to quantum mechanics. The following is from that article:

    "Was Einstein Right? by George Musser

    Unlike nearly all his contemporaries, Albert Einstein thought quantum mechanics would give way to a classical theory. Some researchers nowadays are inclined to agree

    Einstein has become such an icon that it sounds sacrilegious to suggest he was wrong. Even his notorious "biggest blunder" merely reinforces his aura of infallibility: the supposed mistake turns out to explain astronomical observations quite nicely [see "A Cosmic Conundrum," by Lawrence M. Krauss and Michael S. Turner]. But if most laypeople are scandalized by claims that Einstein may have been wrong, most theoretical physicists would be much more startled if he had been right.

    Although no one doubts the man's greatness, physicists wonder what happened to him during the quantum revolution of the 1920s and 1930s. Textbooks and biographies depict him as the quantum's deadbeat dad. In 1905 he helped to bring the basic concepts into the world, but as quantum mechanics matured, all he seemed to do was wag his finger. He made little effort to build up the theory and much to tear it down. A reactionary mysticism--embodied in his famous pronouncement, "I shall never believe that God plays dice with the world"--appeared to eclipse his scientific rationality.

    Estranged from the quantum mainstream, Einstein spent his final decades in quixotic pursuit of a unified theory of physics. String theorists and others who later took up that pursuit vowed not to walk down the same road. Their assumption has been that when the general theory of relativity (which describes gravity) meets quantum mechanics (which handles everything else), it is relativity that must give way. Einstein's masterpiece, though not strictly "wrong," will ultimately be exposed as mere approximation.

    Collapsing Theories
In recent years, though, as physicists have redoubled their efforts to grok quantum theory, a growing number have come to admire Einstein's position. "This guy saw more deeply and more quickly into the central issues of quantum mechanics than many give him credit for," says Christopher Fuchs of Bell Labs. Some even agree with Einstein that the quantum must eventually yield to a more fundamental theory. "We shouldn't just assume quantum mechanics is going to make it through unaltered," says Raphael Bousso of the University of California at Berkeley.

Those are strong words, because quantum mechanics is the most successful theoretical framework in the history of science. It has superseded all the classical theories that preceded it, except for general relativity, and most physicists think its total victory is just a matter of time. After all, relativity is riddled with holes--black holes. It predicts that stars can collapse to infinitesimal points but fails to explain what happens then. Clearly, the theory is incomplete. A natural way to overcome its limitations would be to subsume it in a quantum theory of gravity, such as string theory.

Still, something is rotten in the state of quantumland, too. As Einstein was among the first to realize, quantum mechanics, too, is incomplete. It offers no reason for why individual quantum events happen, provides no way to get at an object's intrinsic properties and has no compelling conceptual foundation. Moreover, quantum theory turns the clock back to a pre-Einsteinian conception of space and time. It says, for example, that an eight-liter bucket can hold eight times as much as a one-liter bucket. That is true for everyday life, but relativity cautions that an eight-liter bucket can ultimately hold only four times as much--that is, the true capacity of buckets goes up in proportion to their surface area rather than their volume. This restriction is known as the holographic limit. When the contents of the buckets are dense enough, exceeding the limit triggers a collapse to a black hole. Black holes may thus signal the breakdown not only of relativity but also of quantum theory (not to mention buckets).

    The obvious response to an incomplete theory is to try to complete it. Since the 1920s, several researchers have proposed rounding out quantum mechanics with 'hidden variables.'
The idea is that quantum mechanics actually derives from classical mechanics rather than the other way around. Particles have definite positions and velocities and obey Newton's laws (or their relativistic extension). They appear to behave in funky quantum ways simply because we don't, or can't, see this underlying order. 'In these models, the randomness of quantum mechanics is like a coin toss,' says Carsten van de Bruck of the University of Sheffield in England. 'It looks random, but it is not really random. You could write down a deterministic equation.'
    Musser continues on to explain 'hidden variables' and other classical models. He then concludes with the following:

    "All that said, most physicists still regard hidden variables as a long shot. Quantum mechanics is such a rain forest of a theory, filled with indescribably weird animals and endlessly explorable backwaters, that seeking to reduce it to classical physics seems like trying to grow the Amazon from a rock garden. Instead of presuming to reconstruct the theory from scratch, why not take it apart and find out what makes it tick. That is the approach of Fuchs and others in the mainstream of studying the foundations of quantum mechanics.

    They have discovered that much of the theory is subjective: it does not describe the objective properties of a physical system but rather the state of knowledge of the observer who probes it. Einstein reached much the same conclusion when he critiqued the concept of quantum entanglement--the "spooky" connection between two far-flung particles. What looks like a physical connection is actually an intertwining of the observer's knowledge about the particles. After all, if there really were a connection, engineers should be able to use it to send faster than light signals, and they can't. Similarly, physicists had long assumed that measuring a quantum system causes it to "collapse" from a range of possibilities into a single actuality. Fuchs argues that it is just our uncertainty about the system that collapses.

    The trick is to strip away the subjective aspects of the theory to expose the objective reality. Uncertainty about a quantum system is very different from uncertainty about a classical one, and this difference is a clue to what is really going on. Consider Schroedinger's famous cat. Classically, the cat is either alive or dead; uncertainty means that you do not know until you look. Quantum-mechanically, the cat is neither alive nor dead; when you look, you force it to be one or the other, with a 50-50 chance. That struck Einstein as arbitrary. Hidden variables would eliminate that arbitrariness.

    Or would they? The classical universe is no less arbitrary than the quantum one. The difference is where the arbitrariness comes in. In classical physics, it goes back to the dawn of time; once the universe was created, it played itself out as a set piece. In quantum mechanics, the universe makes things up as it goes along, partly through the intervention of observers. Fuchs calls this idea the 'sexual interpretation of quantum mechanics.' He has written: 'There is no one way the world is because the world is still in creation, still being hammered out.' The same thing can clearly be said of our understanding of quantum reality."

    Christopher A. Fuchs Webpage ==>

    "Physics Today" Article "Quantum Theory Needs No 'Interpretation'"==>

    Comments on "Physics Today" Article "Quantum Theory Needs No 'Interpretation'"==>

    "I've said it before, I'll say it again:
    Can a dog collapse a state vector?
    Dogs don't use state vectors.
    I myself didn't collapse a state
    vector until I was 20 years old."
    - Christopher A. Fuchs

    All the best
    John B.
  18. Jan 12, 2005 #17
    The point of my Chaotic Ball model is to illustrate the principle behind the detection loophole -- to demonstrate how easy it is to produce local realist models that violate the CHSH test when the statistic is estimated in the accepted way. None of the numerical predictions have any value, since nobody seriously thinks that any real system would behave in that way, with sharp boundaries between regions of hidden variable space where all particles are detected and others where none are.

The model shows the difference between the two ways of estimating that test statistic. If we estimate each of the terms by expressions of the form:

(1) (N++ + N-- - N+- - N-+) / (N++ + N-- + N+- + N-+)

we may quite likely violate the CHSH inequality. If, however, we estimate them by expressions of the form:

(2) (N++ + N-- - N+- - N-+) / N

where N is the total number of emitted pairs, we cannot ever violate it.
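The contrast is easy to see numerically. A minimal sketch (the counts are made up purely for illustration and come from no real experiment):

Code (Python):
def chsh_term_conditional(npp, nmm, npm, nmp):
    # Form (1): normalise by the coincidences actually detected
    # (this is where the fair-sampling assumption enters).
    return (npp + nmm - npm - nmp) / (npp + nmm + npm + nmp)

def chsh_term_all_pairs(npp, nmm, npm, nmp, n_emitted):
    # Form (2): normalise by all emitted pairs, as in Bell's
    # "event-ready" formulation.
    return (npp + nmm - npm - nmp) / n_emitted

# Hypothetical counts: 1000 pairs emitted per setting, only 400 doubly
# detected, and the detected subsample strongly correlated.
counts = dict(npp=180, nmm=180, npm=20, nmp=20)
e1 = chsh_term_conditional(**counts)                # 0.80
e2 = chsh_term_all_pairs(**counts, n_emitted=1000)  # 0.32

# CHSH: S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), with |S| <= 2 for LR.
# If every term came out like e1 (the last with opposite sign), S = 3.2,
# an apparent violation. With form (2) the same counts give S = 1.28:
# no violation, because each term is damped by the detection fraction.
print(e1, e2)

(Whether such counts can actually arise from a local realist model is exactly what the Chaotic Ball is meant to illustrate; the point here is only how the choice of denominator changes the test statistic.)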

    I wonder, incidentally, if you have looked at the following paper?

R. García-Patrón Sánchez, J. Fiurášek, N. J. Cerf, J. Wenger, R. Tualle-Brouri, and Ph. Grangier, "Proposal for a Loophole-Free Bell Test Using Homodyne Detection", Phys. Rev. Lett. 93, 130409 (2004)

    In this experiment they propose to use the "event-ready detectors" that Bell himself recommended as leading to a valid version of his test, thus effectively using the second of the above two methods. The proposal looks quite feasible and I can see no potential loopholes. Unfortunately, though, I think the theory they put forward to test for the presence of "non-classical" light is seriously flawed. The experiment is doomed to be a flop, producing no Bell test violations but, equally, no sign of non-classical light. Once they have realised that they have only got classical light, I expect the quantum theorists and local realists to agree that no Bell test violation should be predicted.

    I don't know what you mean here. It is not the purpose of the model to provide error estimates, any more than it is to give numerical predictions for any real experiment.

    Now I'm definitely lost, since my figure numbers end at number 15! However, I shall try and answer what I think is your question. If we're looking at real experiments such as Aspect's, we have a slightly different basic model, one in which the 0/1 probabilities are replaced by the cos^2 curves of Malus' Law (see Appendix C of http://arXiv.org/abs/quant-ph/9903066). The equivalent of my Chaotic Ball "missing bands" is broad troughs in the functions that replace Malus' Law cos^2 curves. I expect these broad troughs to be found when detector efficiencies are very low, or, alternatively, when beam intensities are very low.

    One reason for mentioning the Sanchez et al "loophole-free" experiment is that this would, I think, present a very good opportunity to check out the local realist ideas. It would be easy to take different measurements -- use the raw detector output voltages instead of the proposed "digitised voltage differences" -- and explore what happens with different efficiencies etc..

    If you look at my expressions (1) and (2) above, you will see that they involve counts such as N++ and N+-. They assume that both '+' and '-' events are counted. Not all experiments do this. The test that is designed for use when only '+' results are counted is the CH74 test, which does not have the same risk of bias. [Some experiments in which only '+' events are counted nevertheless, by doing extra experimental runs, manage to use the CHSH test. This is most unfortunate!]

  19. Jan 12, 2005 #18
    A Bell-type inequality of the form

    E(2θ) ≤ 2E(θ)

    (where E(θ) is the probability of anti-correlation for an angle θ between the axes of the two polarizers) can be derived on the basis of the following five assumptions:

    1) probability E of anti-correlation depends only on relative angle θ between polarizers' axes ;

    2) E(0) = 0 ;

    3) free will of experimenters ;

4) counterfactual definiteness of measurement results ;

    5) locality .

    But Quantum Mechanics predicts

6) E(θ) = sin^2(θ) ,

    which is not consistent with the said inequality.

    Which of 1-6 do people wish to negate? (... Or would you rather say I did not correctly identify the assumptions?)
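A quick numerical check of the clash (a short sketch using the sin^2 prediction of 6; θ = 22.5 degrees is just one convenient choice):

Code (Python):
import math

def E_qm(theta_deg):
    # Assumption 6: QM anti-correlation probability at relative angle theta.
    return math.sin(math.radians(theta_deg)) ** 2

theta = 22.5
lhs = E_qm(2 * theta)        # E(45)    = 0.5000
rhs = 2 * E_qm(theta)        # 2E(22.5) = 0.2929
print(lhs, rhs, lhs <= rhs)  # 0.5 > 0.2929: the inequality E(2θ) <= 2E(θ) fails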
  20. Jan 12, 2005 #19



    1. I think that is my point. The Chaotic Ball is 100% speculation. So what if LR theories can disguise themselves experimentally when testing conditions are not perfect? That does not change the fact that LR is not compatible with the predictions of QM, and vice versa.

There has never been a statistically significant variance in the measured results a la Aspect - they all agree with QM to the penny, now to within an ever-increasing number of standard deviations. Yet your Chaotic Ball theory - by your own admission above - yields an entire range of results both higher and lower than the QM value, AND should vary in observed value according to the hypothetical variable(s) that affect it (the ones that have yet to be discovered). Yet there is NO VARIATION outside the QM observed range, and your model makes absolutely no provision for this.

    In other words:

    .7500 = LR maximum for correlation at 22.5 degrees
    .8536 = QM predicted value for 22.5 degrees, same as observed
    .7501, .7502, .7503, ... .9999 = set of possible observed values per Chaotic Ball which violate the CHSH tests.

    If you expect to be taken seriously as a scientist, you must acknowledge this serious deficiency and correct it. Otherwise, you could use similar logic to attack all experiments ever run previously and without exception. What would be the utility (a word you obviously despise) to that? After all, and let's face it, the Chaotic Ball is nothing more than an artifact of your imagination, even you don't pretend it is "real" (and you are the realist).

    2. Sorry, I meant figures 11-14.
  21. Jan 13, 2005 #20



This must be one of the reasons I gave up my subscription to Sci. Am. :devil:
More and more they have these watered-down smoky-bar discussions, which make those who know what it is about smile, and confuse those who don't.
"Will quantum theory remain with us forever?" No, of course not. I hope that physics, 2000 years from now, will have evolved! But does that mean that Einstein's viewpoint was right? No! Did he point out the hard parts of QM? Yes. Did he solve them? No.

    But all this is blahblah in the air.