
Bell's Theorem and Negative Probabilities

  1. Jan 9, 2005 #1


    Science Advisor
    Gold Member

    Bell's Theorem and Negative Probabilities

    Author's note: This post is based on Bell's Theorem (1). I have reformulated the presentation to make it a little easier to see that "Negative Probabilities" are a seemingly paradoxical consequence of his work. The Bell Inequalities can be presented in many forms, and most are essentially equivalent. I do not know if this particular presentation format or derivation has been used by others; I can only assume it has. I follow the conventional interpretation of both QM and Bell. For a more rigorous proof, look to Bell and others. I assume the reader already has basic familiarity with Bell test setups such as Aspect(2). Assistance is welcome in achieving the desired result :) so please don't sue me if I don't get it 100% right the first time. My goal is to make this short, sweet and as easy to follow as possible. I will likely edit the post as improvements are incorporated so that this can be referenced in the future.

    Some forum members have posed the question, "How does Bell's Theorem lead to predictions of negative probabilities?" Bell's Theorem demonstrates that the following 3 things cannot all be true:

    i) The experimental predictions of quantum mechanics (QM) are correct in all particulars
    ii) Hidden variables exist (particle attributes really exist independently of observation)
    iii) Locality holds (a measurement at one place does not affect a measurement result at another)

    QM predicts that certain classical scenarios, if they existed, would have a negative likelihood of occurrence (in defiance of common sense). Any local realistic theory - in which ii) and iii) above are assumed to be true - will make predictions for these scenarios that are significantly different from the QM predicted values. QM does not acknowledge the existence of these scenarios, often called hidden variables (HV), so it does not have a problem with this consequence of Bell's Theorem. We will ignore the iii) case here, as if you accept that locality fails anyway then there is no particular conflict between i) and ii). Again, our objective is to see the effect of the "hidden variable" or "Realistic" assumption and how that specifically leads to results that defy our intuitive common sense.

    In the entangled photon scenarios, the Realistic view - which maps to assumption ii) above - states that the photon polarity is determinate as of the point in time that the photons' existence begins. Even though the entangled photons can only be measured at 2 angles before they are disturbed, the Realistic view states that they could potentially have been measured at other angles as well. Thus, the Realistic view is that the existence of the photon spin polarity is independent of the act of measurement. On the other hand, QM (Heisenberg Uncertainty Principle) says that the photon polarity exists only in the context of a measurement, and the act of observation is somehow fundamental to the measurement results. Here is the paradox that is a partial result of Bell's Theorem:

    a. Let there be 2 single channel detectors I will call Left and Right. The Left is set at angle A=0 degrees. The Right is set at C=67.5 degrees. We will consider that there is the possibility that we could have also measured the polarity at another angle in between the settings of the Left and Right detectors, and this angle is for the sake of discussion called B=45 degrees. In each case, the angle settings are adjusted so that 0 degrees difference would mean that there is perfect correlation, as is true in both classical and quantum mechanical scenarios. A difference of 90 degrees means there is perfect anti-correlation, also identical in all scenarios. Our selection of the angles is not random; it is done specifically to highlight the desired conclusion. Let's call it + if there is a detection at that spot, and - if there is no detection. Practical detector efficiencies and actual experimental requirements are ignored.

    b. In the Realistic view, we could imagine that A, B and C all exist at the same time - even if we could only measure 2 at a time. Therefore, there are 8 possible outcomes that must total to 100% (probability=1). This is "common sense". The permutations are:

    [1] A+ B+ C+ (and the likelihood of this is >=0)
    [2] A+ B+ C- (and the likelihood of this is >=0)
    [3] A+ B- C+ (and the likelihood of this is >=0)
    [4] A+ B- C- (and the likelihood of this is >=0)
    [5] A- B+ C+ (and the likelihood of this is >=0)
    [6] A- B+ C- (and the likelihood of this is >=0)
    [7] A- B- C+ (and the likelihood of this is >=0)
    [8] A- B- C- (and the likelihood of this is >=0)

    The sum of all possible outcomes above:

    [1] + [2] + [3] + [4] + [5] + [6] + [7] + [8] = 100% = 1

    With a Realistic view, this is true regardless of the unknown hidden variable function that controls these individual outcome probabilities. So it is the requirement that each outcome have an expectation value >=0 that connects to the assumption of reality per ii) above. When measuring A and C, B existed even if we didn't measure it.
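    The bookkeeping in b. can be sketched in a few lines of Python. The uniform weights below are an arbitrary illustration of a hidden-variable assignment, not a physical prediction; any Realistic model just picks its own non-negative weights.

```python
# Sketch of the Realistic bookkeeping: each photon pair is assumed to carry
# definite +/- answers for all three angles A, B and C at once, so every
# pair falls into exactly one of 8 mutually exclusive cases.
from itertools import product

cases = list(product("+-", repeat=3))  # (A, B, C) sign triples
assert len(cases) == 8

# A hidden-variable model assigns a non-negative weight to each case;
# uniform weights are used here purely as an illustration.
weights = {c: 1 / 8 for c in cases}

assert all(w >= 0 for w in weights.values())     # realism: every case possible
assert abs(sum(weights.values()) - 1.0) < 1e-12  # the 8 cases are exhaustive
```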

    c. In the quantum world, 2 of the above outcome cases are suppressed: [3] and [6]. The reason is that they don't actually exist as possibilities - even though common sense says they should! B is the hypothesized angle between A (left) and C (right) in my example, and B must always yield the same +/- value as either A or C. In these two cases [3] and [6], as you can see from the chart above, B is opposite to both A and C. It does not matter to the argument presented here if you agree with this reasoning; you only need to accept that it is the [3] and [6] cases which QM says have a negative probability of occurrence, as will be shown in d.-h. below.

    But in the Realistic view, [3]>=0 and [6]>=0. Combining these, we get the non-negative prediction for the Realistic side:

    [3] + [6] >= 0 (per the Realistic view)

    This is the only assumption we make for the Realistic view, and it is not required in QM. We next need the QM prediction for cases [3] and [6], preferably one that can be tested via experiment.

    d. Bell brilliantly saw a way to do this. Remember, we can only actually measure two of A, B, or C at a time - we can't measure all 3 simultaneously. But we can separately measure some new combined cases called X, Y and Z:

    X = combined probability of cases [1] + [3] + [6] + [8]
    Y = combined probability of cases [3] + [4] + [5] + [6]
    Z = combined probability of cases [1] + [4] + [5] + [8]

    e. Also note that:

    X = correlations between measurements at A and C
    Y = non-correlations between measurements at A and B
    Z = correlations between measurements at B and C

    You can review the 8 cases in b. above to see that this is so.

    f. Why do we pick these particular combinations to define X, Y and Z? Because (X + Y - Z)/2 is the same as the probability of our 2 suppressed cases, [3] and [6] from c) above. You can now see that:

    (X + Y - Z) / 2

    = (([1] + [3] + [6] + [8]) + ([3] + [4] + [5] + [6]) - ([1] + [4] + [5] + [8])) / 2

    Now simplify by eliminating offsetting terms:

    = ([3] + [6] + [3] + [6]) / 2

    = [3] + [6]

    Which means that, if c. above is true, per the Realistic side:

    (X + Y - Z) / 2 >= 0
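    The cancellation in f. can be checked mechanically by treating the eight case probabilities as opaque symbols; this sketch just counts how often each case number appears in X + Y - Z:

```python
# Verify the algebra of step f.: in X + Y - Z every case cancels except
# two copies each of [3] and [6], so (X + Y - Z)/2 = [3] + [6].
from collections import Counter

X = [1, 3, 6, 8]   # A/C correlations
Y = [3, 4, 5, 6]   # A/B non-correlations
Z = [1, 4, 5, 8]   # B/C correlations

net = Counter()
for k in X:
    net[k] += 1
for k in Y:
    net[k] += 1
for k in Z:
    net[k] -= 1

# Only cases [3] and [6] survive, each counted twice:
assert {k: v for k, v in net.items() if v != 0} == {3: 2, 6: 2}
```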

    g. In QM and in classical optics, correlation of photon polarity is a function of the square of the cosine of the angle between. Non-correlation of photon polarity is a function of the square of the sine of the angle between.

    X is determined by the angle between A and C, a difference of 67.5 degrees
    X = COS^2(67.5 degrees) = .1464
    This prediction of quantum mechanics can be measured experimentally.

    Y is determined by the angle between A and B, a difference of 45 degrees
    Y = SIN^2(45 degrees) = .5000
    This prediction of quantum mechanics can be measured experimentally.

    Z is determined by the angle between B and C, a difference of 22.5 degrees
    Z = COS^2(22.5 degrees) = .8536
    This prediction of quantum mechanics can be measured experimentally.

    h. QM predicts that (X + Y - Z)/2 would then be calculated as follows:

    (X + Y - Z) / 2

    Substituting values from g. above:

    = (.1464 + .5000 - .8536)/2

    = (-.2072)/2

    = -.1036


    [3] + [6] = -.1036 (per QM)

    This predicted result is less than zero. (QED per c. above)

    QM predicts an expectation value for cases [3] and [6] of -.1036, which is less than 0 and seemingly absurd. However, this is borne out by experiment, in defiance of common sense. Note that X, Y and Z can be separately tested anywhere in the world at any time and you still end up with the same conclusion once you combine the results per h.
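    The arithmetic in g. and h. is easy to verify numerically; here is a minimal Python check using the QM cos^2 / sin^2 rules at the angles chosen above (A = 0, B = 45, C = 67.5 degrees):

```python
# Numerical check of steps g. and h.
from math import cos, sin, radians

X = cos(radians(67.5)) ** 2   # A/C correlation      -> .1464
Y = sin(radians(45.0)) ** 2   # A/B non-correlation  -> .5000
Z = cos(radians(22.5)) ** 2   # B/C correlation      -> .8536

prob_3_plus_6 = (X + Y - Z) / 2   # the QM value assigned to [3] + [6]

assert prob_3_plus_6 < 0                       # negative, as claimed
assert abs(prob_3_plus_6 + 0.1036) < 5e-4      # approximately -.1036
```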

    IMPORTANT NOTE: If you are a proponent of local realism (local hidden variable theories), the above argument is sufficient to dash your hopes as purported Bell test loopholes don't matter. You have only to convince yourself that the experimentally measured values of X, Y and Z are close enough to the predictions given by QM from g. above. It is pretty obvious that any other values would have been noticed a long time ago, since the QM predictions match classical optics anyway. No local realist has ever even given alternative predictions for the experimental values of X, Y and Z which would yield a result compatible with the Realistic view after Bell's Theorem.

    (1) J.S. Bell: "On the Einstein Podolsky Rosen paradox" Physics 1 #3, 195 (1964). You can view a copy of the original paper at: http://www.drchinese.com/David/EPR_Bell_Aspect.htm

    (2) A. Aspect, Dalibard, G. Roger: "Experimental test of Bell's inequalities using time-varying analyzers" Physical Review Letters 49 #25, 1804 (20 Dec 1982).

  2. jcsd
  3. Jan 9, 2005 #2
    May I suggest that people read this paper before making comments: http://uk.arxiv.org/abs/quant-ph/0501030

    I hope you do not mind; I think it may stimulate some interest in your excellent thread.
  4. Jan 9, 2005 #3


    Science Advisor
    Gold Member


    I did not see the direct relevance to this thread, although it looks interesting. Tresser develops a form of the Bell Inequality using a Stern-Gerlach setup and then says that Bell Inequalities tell us nothing about physics. From the citation:

    "The simpler example that I propose has allowed me to
    rediscover the danger of counterfactuals: the necessar-
    ily counterfactual character of any experiment trying to
    compare Nature with theories using counterfactuals is
    why Bell type inequalities do not prove anything about

    The reason I prepared a new thread was to highlight the existence of negative probabilities which result from Bell in the Realistic view. Interestingly, that seemed to also be the view of Tresser: that non-locality was not demonstrated by Bell, but that Realism was violated.
  5. Jan 9, 2005 #4
    Thanks. I hope to comment, but I have a vested interest: I have been interested in EPR for some time and, having reached a specific conclusion, I came across the linked paper. It made me think a little more and finally pushed me into a corner that I cannot escape!.. but more on that at a later, more appropriate time. I hope people can read through your post and delve into the consequences relayed.
  6. Jan 9, 2005 #5


    Science Advisor

    Dr. Chinese --I'm afraid that I do not understand your notation. My ignorance is laid plain at the end of the quote.

  7. Jan 10, 2005 #6

    Hans de Vries

    Science Advisor
    Gold Member


    Your post is very much appreciated here since it raises the level of the threads significantly, in the sense that the actual physics is discussed in detail rather than the usual vague interpretation talk.

    However, I do hope that you appreciate that people are keen to find the loopholes in the mathematics to disprove such claims as "negative probability".

    It shows, in my opinion, how dangerous it is to base very far-reaching consequences like non-locality (or negative probabilities, as in this case) on a sequence of calculations which might seem correct at first glance.

    3 entangled photon experiment.

    There is nothing that forbids doing the same experiment with 3 entangled photons. Surely you don't want to predict a "negative probability" for cases 3 and 6?


    I get the following probabilities by assuming the cos^2 law:

    1 [A+B+C+] = 1/8 + 1/16 cos(22.5) + 1/16 cos(45) + 1/16 cos(67.5) = 0.2508543
    2 [A+B+C-] = 1/8 - 1/16 cos(22.5) + 1/16 cos(45) - 1/16 cos(67.5) = 0.0875339
    3 [A+B-C+] = 1/8 - 1/16 cos(22.5) - 1/16 cos(45) + 1/16 cos(67.5) = 0.0469810
    4 [A+B-C-] = 1/8 + 1/16 cos(22.5) - 1/16 cos(45) - 1/16 cos(67.5) = 0.1146305
    5 [A-B+C+] = 1/8 + 1/16 cos(22.5) - 1/16 cos(45) - 1/16 cos(67.5) = 0.1146305
    6 [A-B+C-] = 1/8 - 1/16 cos(22.5) - 1/16 cos(45) + 1/16 cos(67.5) = 0.0469810
    7 [A-B-C+] = 1/8 - 1/16 cos(22.5) + 1/16 cos(45) - 1/16 cos(67.5) = 0.0875339
    8 [A-B-C-] = 1/8 + 1/16 cos(22.5) + 1/16 cos(45) + 1/16 cos(67.5) = 0.2508543

    Cases 3 and 6 have the lowest probability, but they are neither zero nor negative. These results come from integrating over all possible angles using 3 polarization filters at 0, 45 and 67.5 degrees in combination with the cos^2 law. We get probabilities like:

    [tex] A+B+C+ = \int \cos^2(\phi) \cos^2(\phi-45) \cos^2(\phi-67.5) d\phi[/tex]
    [tex] A+B-C+ = \int \cos^2(\phi) \sin^2(\phi-45) \cos^2(\phi-67.5) d\phi[/tex]

    Regards, Hans
  8. Jan 10, 2005 #7


    Science Advisor
    Gold Member

    Hans, thanks for your time to review this. I wonder if you might take a second look...

    I do not believe your setup is the same as mine. As a check, I looked at your 4 cases that map to my "Y", which is with the polarizer settings at 45 degrees. This should always give a value of .5 (predicted value for classical and QM). In your setup, it gives a different value.

    If I understand your setup, there are 3 photons (in Bell tests there are only 2). I do not know your setup well, but I would say that my Realistic assumption is not being tested in it. You would need 16 permutations to test this assumption (the A, B, C above plus a new one, D). When that is added back in, the negative probabilities should re-appear.
  9. Jan 10, 2005 #8


    Science Advisor
    Gold Member

    Reilly, the answer is that QM does not actually make a prediction of a negative probability - it is the Realistic view that makes such a prediction. QM does not acknowledge that the Realistic cases [3] and [6] even exist.

    It is as if someone predicted that heavier objects fell faster than lighter ones - a very common sense perception some years back. Then someone tested it and found it was not so. No different here with the [3] and [6] cases. If they existed, they would have negative probabilities - but they don't exist.
  10. Jan 10, 2005 #9
    We are making an assumption that the value obtained in any measurement is already determined beforehand at the time of creation of the photon pair. This means that, for every setting of each of the two detectors, each photon is in a definite state of either "+" or "-" relative to the given setting. So, for example, we can imagine two distinct settings for the left detector, θ1 and θ2, and even though we cannot perform both measurements on a given run, we can certainly talk about the incoming photon as objectively being "+" or "-" with respect to each of these two settings.

    For the moment let's just look at the left photon. On run number 1, its state is, say, θ1+ and θ2+ ... and on run number 2, the state is, say, θ1+ and θ2- ... etc... .

    The whole point of this exercise is to show that such an assumption is inconsistent with: (i) what QM predicts; and (ii) established, loophole-free experimental results.

    Note that our assumption is two-fold: hidden variables exist and locality holds.
  11. Jan 11, 2005 #10
    What is a negative probability?

    It is all very well to say that the difficulties can be resolved by introducing negative probabilities. However, we need an interpretation of them analogous to the interpretations we have of positive probability. Here are some options.

    Laplacian view: Laplace thought that probabilities captured the physical symmetries of the object that we attach them to. Thus, the reason that a perfectly cubical die has a probability of 1/6 of landing on each number is that it is a perfect cube. On this view, we have to ask what physical symmetry is captured by a negative probability.

    Frequentist view: Probabilities are the limit of the relative frequency of occurrence of the possible events in an infinite sequence of trials. What can it mean for this to be negative?

    Bayesian View: Probabilities represent degrees of belief of an agent who has to act in a situation of uncertainty. The standard "Dutch-book" and other coherence arguments show that these have to obey the standard probability axioms (including positivity). How do these arguments have to be weakened in order to allow negative degrees of belief, and what do these mean?

    Potentiality: Particularly in quantum-mechanics, some people like to interpret probabilities as being objective properties of a physical system. However, they clearly don't represent a directly observable property in a single shot experiment, rather they determine a system's "potentiality" to yield the different possible outcomes in an experiment. What does a negative potentiality mean?

    If one is seriously considering negative probabilities, one has to answer at least one of these questions. The only reasonable answer I have come across so far is due to Richard Feynman. On his interpretation, negative probabilities are a convenient fiction that can sometimes make calculations more convenient. An analogy is the following:

    Suppose I have 5 apples, then I pick 3 more and give away 7 of them. How many apples do I have at the end?

    Well 5+3 = 8 and 8-7 = 1, so I have 1 apple left.

    However, I could also say 5 - 7 = -2 and -2 + 3 = 1.

    Whilst it is clear that I can never have -2 apples, it can be legitimate to introduce them into the calculation as an intermediate step if I find it more convenient to do it that way round.
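    The apple bookkeeping can be written as a tiny script, with the running total allowed to dip below zero at an intermediate step while the final, observable answer stays non-negative:

```python
# Feynman-style "convenient fiction": an intermediate negative count.
apples = 5
apples -= 7          # give away 7 first: running total is now -2
assert apples == -2  # "negative apples" exist only inside the calculation
apples += 3          # then pick 3 more
assert apples == 1   # the observable end result is non-negative
```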

    So, Feynman would say that if negative probabilities make your calculations more convenient then by all means introduce them at an intermediate stage. But all probabilities of real, observable physical events should be positive.

    It is not clear to me that this interpretation is sufficient to save a realist interpretation of QM by introducing negative probabilities. For this, I think that the negative probabilities have to be attached to real (if not observable) physical events. Therefore, I am still waiting for someone to come up with a better answer to one of the four questions posed above. Any thoughts?
  12. Jan 11, 2005 #11


    Science Advisor
    Gold Member

    I would say that the local realist position cannot be saved, because it requires negative probabilities (as shown above for cases [3] and [6]). But I do not think the negative probabilities are an artificial mathematical construct, as perhaps are infinities in the manner of Feynman calculations. How do others feel?
  13. Jan 14, 2005 #12


    Science Advisor
    Homework Helper

    If you start with the assumption that the total chance of a polyhedron landing on any of its faces is going to be one, then you might be in for a surprise when you start throwing concave polyhedra, since it is not difficult to construct one (for example a 'star') that will never land on a face. Hence the total probability of coming to rest on a face will be zero. Now you have the result 1 = 0, so you can look back at the assumptions and state that one of them is false, but it's not clear which one it is. In this case, changing 'polyhedron' to 'convex polyhedron' fixes things. However, changing 'be one' to 'be less than or equal to one' would also make the statement true.

    Really, what Bell's theorem proves is that the quantum behavior of particles is not 'nice'. However, it does not demonstrate in what way that behavior fails to be nice. You can elect to have, for lack of a better expression, 'non-standard probability theories' (like negative probabilities), non-local interaction (which includes 'spooky action at a distance'), or warped space ('wormhole' explanations of EPR behaviors have been suggested). There are probably other assumptions, but these seem to be the three most commonly questioned ones.

    As far as I am aware, the EPR experiments to date have not addressed which of these possibilities is consistent with our reality. However, experiments that eliminate some categories of these theories are conceivable.

    By the way, besides negative probabilities there are alternative 'mathematical monstrosities' that can be used to address the EPR paradox and Bell's theorem. The theorem does not apply, for example, if one allows for unmeasurable probability functions (google for the Banach-Tarski paradox for more information on how ugly this is).
  14. Jan 14, 2005 #13


    Science Advisor
    Gold Member

    1. So do you think renormalization is cheating? (I don't, but just wondering after googling the Banach-Tarski paradox).

    2. Do you think Einstein would have accepted Bell's Theorem as convincing? I think he would have, and that he would not have sought refuge in "non-standard probability theories" as a way to save Local Realism. I would agree that there may be other ways to skin the cat that we have yet to understand.
  15. Jan 14, 2005 #14


    Staff Emeritus
    Science Advisor
    Gold Member

    I'm missing something -- probability functions are measures, so you can't really have an unmeasurable probability function. I suppose, though, you meant to talk about events which are unmeasurable, but doesn't most everything fail when you cannot apply probabilities?
  16. Jan 16, 2005 #15
    Motivated by the discussions here, I've started getting into the details of Bell's theorem, Alain Aspect's experiment and DrChinese's posts. I have the following questions.

    For DrChinese:
    I've just read your post on 'Bell's theorem and negative probabilities'. In classical optics, the cos^2 term arises only if the polarizers are in series. For QM, can you tell me how cos^2 is arrived at (maybe I'm asking a stupid question)--do you average over all possible polarization directions?

    Coming to your set-up and classical optics, what exactly are you saying: using 'classical optics', do the probabilities for the 8 possibilities add to one only if a couple of them are taken to add to a -ve probability? Can you give a classical optics (local realism) expression for detections to be correlated? Mind you, you can't use arguments like 'if you apply QM to the classical scenario'--you have to use classical optics in a classical scenario.

    Using QM, (3) and (6) are anyway ruled out--so do the rest add to one?

    General question:
    Even if we assume that the photons in a photon pair have their directions of polarization defined (from the moment of birth), these directions will be different for another pair--each photon pair will in general have its own polarization direction. If we average over these directions, we should again get back to the cos^2 term(?)--so that even in classical optics (which supports local realism) you don't expect Bell's inequality to hold. Hope one of you removes my confusion.
  17. Jan 16, 2005 #16


    Science Advisor
    Gold Member

    Hi, I think I can answer some of your questions... although I may take several posts to do so.

    1. The reason I reference classical optics is more because of the correspondence principle - i.e. that QM reproduces many classical formulas when the sample is large enough. This principle explains why QM was "hidden" for so long.

    The LR formula for spin correlation is substantially at odds with the classical prediction of cos^2. For example, at 22.5 degrees, the classical formula yields .8536 while the LR value cannot exceed .75*. By all logic, the LR position should EXACTLY match the classical formula, but it doesn't. Now, it certainly is possible that the classical formula does not apply in an entangled setup. After all, as you point out, the classical setup is polarizers in a series.

    So my point is that the classical formula does NOT after all support LR - since it yields an entirely different value for a similar setup. That alone does not make LR wrong, but it certainly is highly suspicious.

    I am sure you are aware of other oddities of classical optics as well - things that are easy to see in QM terms but difficult to see in LR terms. In LR, the measurement is not fundamental to the results. In QM, it IS fundamental to the results. If you have a series of polarizers in a classical setup, it is possible to insert progressively more and more polarizers - if done at optimal angles - and have MORE light come through at the end. This makes sense in the QM world, as we are making measurements and this affects the results in ways that are counter-intuitive.

    But the LRist usually waves their hands and says "I predicted that too" because it agrees with classical optics. That really isn't the case at all; you can't have your cake and eat it too, as an LRist. If LR hangs its hat on the predictions of classical optics, then it would give the same predicted value for EPR tests as QM does - but that would violate Bell's Theorem, QED. Classical optics and LR are not one and the same at all.

    2. Naturally, if the [3] + [6] cases add to less than zero, it follows that:

    [1] + [2] + [4] + [5] + [7] + [8] > 1

    Of course this is just as nonsensical as negative probabilities. In fact, this is why Bell Inequalities can be cast in so many different - but ultimately equivalent - formulas. Many Bell tests use CHSH or CH74 instead. But all of them depend in one way or another on the re-arrangement of terms that leads to negative probabilities.

    QM sees only 4 cases and not 8, and all 4 of those cases satisfy:

    0 <= [QM] <= 1 and this fits with experimental observation.

    3. If you assumed that there were definite hidden polarizations, and averaged them, you would then get the classical cos^2 formula - so yes.

    *I get the .75 by creating a function that satisfies Bell's Theorem for LR and has the smallest deviation from the classical predictions. This LR function gives identical predictions at 0, 45 and 90 degrees but is different at all other angles. You can create your own alternatives, too, but the differences will then be greater.
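    The comparison can be sketched numerically. The linear function below is one simple local-realistic correlation that agrees with cos^2 at 0, 45 and 90 degrees and gives .75 at 22.5 degrees; it is an illustrative choice, not necessarily the exact function referred to above:

```python
from math import cos, radians

def qm_corr(theta_deg):
    """QM / classical-optics correlation: cos^2 of the angle between."""
    return cos(radians(theta_deg)) ** 2

def lr_corr(theta_deg):
    """Linear correlation for angles in [0, 90] degrees: an illustrative
    local-realistic choice that stays within the Bell bound."""
    return 1 - theta_deg / 90.0

# Both agree at 0, 45 and 90 degrees...
for theta in (0, 45, 90):
    assert abs(qm_corr(theta) - lr_corr(theta)) < 1e-12

# ...but at 22.5 degrees QM/classical optics predicts ~.8536,
# while the linear LR function gives only .75.
assert abs(qm_corr(22.5) - 0.8536) < 5e-4
assert abs(lr_corr(22.5) - 0.75) < 1e-12
```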
  18. Jan 17, 2005 #17
    I think I need to explain my point a bit more. Consider two polarizers at an angle [tex] \alpha [/tex]. Consider QM; then the probability of both photons passing thru/being detected is:

    [tex] P(yes,yes|a,b) = \int \cos^2(\phi) \cos^2(\phi-\alpha) d\phi [/tex]

    where we have integrated over all possible polarization directions that the 'photon' may have. I haven't checked, but I assume that this works out to [tex] \cos^2 \alpha [/tex]. Now consider the classical case: a single photon (or the pair) has a definite direction of polarization--denote it by [tex] \phi [/tex]. The probability of this pair passing thru the respective polarizers is [tex] \cos^2(\phi) \cos^2(\phi-\alpha) [/tex]. But you have so many photon pairs--for another photon pair the direction of polarization [tex] \phi [/tex] may be different from the earlier photon pair--so you need to integrate/average over all the [tex] \phi's [/tex] and you again get the expression

    [tex]P(yes,yes|a,b)= \int \cos^2(\phi) \cos^2(\phi-\alpha) d\phi [/tex]

    where the integration is over the different polarization directions of 'different photons' (both classical optics and local realism are being respected here). In one case you've integrated over all possible polarization directions of a single 'photon', whereas in the other case you have integrated over the different polarization directions of 'different photons'---but the end result is the same. So I don't see why Bell's inequality should even be expected to be satisfied (even with local realistic classical optics) unless of course all photon pairs are supposed to have identical polarizations. The crucial assumption in deriving Bell's inequality is that probabilities factorize in the following manner:

    P(yes,yes|a,b) = P(yes|a).P(yes|b)

    This is true only if, from one photon pair to the other, the direction of polarization is supposed to remain exactly the same as per classical optics (this is something you've to tell me).
  19. Jan 17, 2005 #18
    I would say that you can elect to have these things if you can flesh out a theory or interpretation that uses them and makes clear what they mean. My point was that it is not yet clear that we can attach any physical interpretation to negative probabilities; currently they are just a mathematical device. Note that we are not in need of mathematical devices to solve the conceptual issues involved in Bell-inequality violation, since we already have a perfectly good Hilbert-space formalism that gives all the correct mathematical predictions of QM. What needs to be done is to figure out what negative probabilities actually mean. This is likely to be a thorny issue, since it is connected to the philosophical debates about what ordinary probability means, which are far from being settled.

    On the other hand, the "non-local interaction" possibility is much better worked out. There are the Bohmian and other hidden variable interpretations, which give specific proposals for which variables correspond to features of reality and describe the nonlocal interactions by which they evolve. I am not saying that these theories are correct, but we have a good idea of what the different possibilities are and what they would mean in this case.

    The "warped spacetime" proposal seems a little bit far-fetched, but the jury will have to remain out on this until we have a fully-fledged theory of quantum gravity. Certainly, there seems to be no room for this sort of proposal in string theory, but some other quantum gravity proposals do allow for nonlocal connections without violating causality.
  20. Jan 17, 2005 #19


    Science Advisor
    Gold Member

    1. The QM prediction for a yes/yes is:

    [tex]P(yes,yes|a,b) = .5 \cos^2 (\alpha-\beta) [/tex]

    and there is no integration because there is no [tex] \phi [/tex].

    2. The LR (local realistic) prediction is:

    [tex] P(yes,yes|a,b) = \int .5 \cos^2(\alpha-\phi) \cos^2(\phi-\beta) d\phi [/tex]

    and the right side reduces to the same [tex].5 \cos^2 (\alpha-\beta) [/tex].

    3. But that now leads us to the following:

    [tex] P(yes,yes|a,b) = P(yes,yes,yes|a,b,\phi) + P(yes,yes,no|a,b,\phi) [/tex]

    Which leads us right back to the logic of Bell's Theorem: the [tex]P(yes,yes,no|a,b,\phi)[/tex] term on the right is supposed to be >=0.

    4. Actually, the measurements are always on correlations so the following case is included in the formal derivations as well:

    [tex] P(no,no|a,b) = P(no,no,yes|a,b,\phi) + P(no,no,no|a,b,\phi) [/tex]

    [tex] P(yes,yes|a,b) + P(no,no|a,b) = P(yes,yes,yes|a,b,\phi) + P(yes,yes,no|a,b,\phi) + P(no,no,yes|a,b,\phi) + P(no,no,no|a,b,\phi) [/tex]

    and so

    [tex] P(yes,yes,no|a,b,\phi) + P(no,no,yes|a,b,\phi) >= 0 [/tex] Per LR (which is what actually disagrees with experiment)

    5. The only sense that classical optics enters into things is that QM and LR use the same formula in the expressions above. It works OK for QM but not for LR because of contradiction (LR using the classical formula versus experimental results).

    In other words, classical optics is not equivalent to LR but does correspond to QM. Does that satisfy your last comment as to what you expected from me?
  21. Jan 17, 2005 #20


    Science Advisor
    Homework Helper

    I'm not quite sure what you mean by 'probability functions are measures', but I can attempt to clarify what I was describing:

    If the particle has a hidden state, then we can describe that hidden state as some [itex]\lambda[/itex], which might be a vector or any number of things, and let's say that [itex]\Lambda[/itex] is the range of possible hidden states. Since this [itex]\lambda[/itex] is 'real' (based on the realist assumption), there must be some 'probability function' [itex]h:[0,1] \rightarrow \Lambda[/itex] for the hidden state of the particle. Similarly, we can stipulate a function [itex]g:\Lambda \rightarrow R[/itex] which goes from some hidden state of the particle to a discrete finite set of measurement results. Let [itex]f=g h[/itex] (function composition), so this function [itex]f:[0,1] \rightarrow R[/itex].

    Now, what we can do is take the inverse images of elements of [itex]R[/itex] under [itex]f[/itex]; let's call them [itex]h_i[/itex]. And let's take a look at the sum of the measures of the [itex]h_i[/itex]. Since the [itex]h_i[/itex] are disjoint and their union is a subset of [itex][0,1][/itex], if the [itex]h_i[/itex] are all measurable, then we can expect the sum of their measures to be at most one. If they are not measurable, then the sum is undefined. And Bell's theorem assumes that these values are indeed measurable.

    My concern with this was that the measures of the [itex]h_i[/itex] correspond to experimentally testable values, and I'm guessing that that's also related to your question. However, because Bell's theorem partitions on all potentially measured dimensions, rather than actually measurable combinations of dimensions this is not necessarily the case.