
Photon entanglement and fair sampling assumption

  1. Nov 19, 2009 #1

    zonde


    I am wondering why there are no discussions about the correctness of the fair sampling assumption in photon entanglement experiments, so I would like to start one.

    Bell's inequalities are derived considering all emitted particles. But in real photon entanglement experiments only a portion of the emitted particles is detected, and so, in order to apply Bell's inequalities to real experiments, the so-called "fair sampling assumption" is required (that the photon sample detected is a faithful representative of the photon sample emitted).
    So statements about violation of Bell's inequalities and nonlocality lack credibility if some basic tests of this fair sampling assumption are not performed.
    Of course, it is fair to say that the fair sampling assumption probably cannot be conclusively proved in photon experiments; but on the other hand, we cannot conclusively declare any experiment to be free from systematic errors. However, what distinguishes well-performed experiments from poor ones is testing against variations in the experimental setup and its environment, to assess possible sensitivity to systematic errors.

    So I would like to ask whether there are others who share this view. Or why are such discussions avoided?



    But to give direction to a possible discussion, I would like to describe three experiments that show what I have in mind (I have already mentioned them in some form in other discussions).

    #1 Two-photon correlations in three-photon entanglement.
    This experiment is meant more as a theoretical consideration, a motivation for doubting the fair sampling assumption.

    We have three entangled photons. Two photons interact with polarizers that have a 45 deg relative angle between them, and the third photon's polarizer is oriented so that its angle lies between the first two (22.5 deg from each).
    Two entangled photons behind polarizers at a 45 deg relative angle will have 50% coincidences in the idealized case, since cos^2(45 deg) = 0.5.
    The third entangled photon then has to have 85% coincidences with the first photon and 85% coincidences with the second photon, since cos^2(22.5 deg) ~= 0.85.
    The maximum rate at which all three photons can coincide is 50% (because that is the coincidence rate between the first two photons). So the remainder of each of the third photon's 85% coincidence rates must be shared separately with the first photon and with the second photon, at minimum 35% each. But then, for the third photon, taking the three-photon coincidence rate as x <= 50%, we have:
    x + (85% - x) + (85% - x) = 170% - x >= 120%
    The only way to arrive at this obviously wrong result (the third photon would have to participate in more than 100% of the events), if we do not question the empirical cos^2(rel. angle) formula, is a wrongly assumed fair sampling assumption.

    To illustrate what I mean, here is a simple diagram. The first row shows the 50% coincidence between the first two photon streams (m = matching polarization, d = different polarization). The 1./3. row shows the 85% coincidences between the 1st and 3rd photon streams. The 2./3. row shows the impossibility of having 85% coincidences between the 2nd and 3rd photon streams (if there is a match between 1./2. and 1./3., then there is a match between 2./3.; if there is a mismatch between 1./2. and a match between 1./3., then there is a mismatch between 2./3.).
    1./2. mmmmm mmmmm ddddd ddddd
    1./3. mmmmm mmmmm mmmmm mmddd
    2./3. mmmmm mmmmm ddddd ddmmm (the forced row: only 13 of 20 match, i.e. 65%, not 85%)
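
    Since this counting argument is the core of experiment #1, here is a minimal sketch (my own illustration, not from any published analysis) that checks by linear programming whether ANY joint probability distribution over definite three-photon outcomes can reproduce the idealized pairwise rates 50%, 85%, 85% quoted above simultaneously.

    Code (Python):
    # Feasibility check: can three binary outcomes have pairwise match
    # rates of 50% (photons 1/2), 85% (1/3) and 85% (2/3) at once?
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    targets = [((0, 1), 0.50), ((0, 2), 0.85), ((1, 2), 0.85)]
    atoms = list(itertools.product([0, 1], repeat=3))  # 8 definite outcomes

    A_eq = [[1.0] * len(atoms)]  # probabilities must sum to 1
    b_eq = [1.0]
    for (i, j), rate in targets:
        A_eq.append([1.0 if a[i] == a[j] else 0.0 for a in atoms])
        b_eq.append(rate)

    res = linprog(c=np.zeros(len(atoms)), A_eq=np.array(A_eq),
                  b_eq=np.array(b_eq), bounds=[(0.0, 1.0)] * len(atoms))
    print("joint distribution exists:", res.success)  # False: infeasible

    The solver reports infeasibility, which is the 170% - x >= 120% contradiction in another form: for definite outcomes, the 1./3. and 2./3. match rates can sum to at most 100% plus the 1./2. match rate (here 150%), while 85% + 85% = 170%.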


    #2 Experiment to test superposition of the wavefunction before detection but after polarization.
    In QM it is considered that before measurement the wavefunction exists in a superposition of states. In order to test the polarization of photons, two pieces of equipment are used together: a polarizer and a detector. It is clear that we should consider the wavefunction to be in a superposition of polarization states before interaction with the polarizer. However, one can ask in what superposition the wavefunction exists after the polarizer but before the detector. The detector separates the sample of photons into two parts: one that is detected and another that is not. So it seems to me that the wavefunction exists in a superposition of detectability before interaction with the detector. Such a viewpoint of course contradicts the fair sampling assumption.

    The actual experiment I have in mind is an EPR-type photon polarization experiment with two sites, with a PBS at each site and 4 detectors total, one at each output of the two PBSes. In one of the four channels, between detector and PBS, we insert a wave plate that rotates the polarization angle by 90 deg.
    If there is a detectability superposition of the wavefunction, then one can expect that this additional wave plate will change the outcome of the experiment (compared to the outcome without the plate).
    It seems to me that the particular change to expect is that the wave plate will invert the modified channel's correlations with the other site's two channels.


    #3 Experiment to test changes in coincidence detection rates as detection efficiency increases.
    It is usually believed that a realistic explanation requires that the whole sample (assuming it were possible to detect it) should show a linear zigzag graph of polarization correlation as a function of relative angle. There is another possibility if we do not assume fair sampling. It is possible to speculate that the whole sample should show a completely flat graph, i.e. that there is no correlation between the polarizations of entangled photons. Correlation then appears only for the combined measurement of polarization and detectability.

    An experiment that can test this kind of violation of the fair sampling assumption would consist of the usual EPR-type photon polarization experiment, but with measurements made at variable levels of detection efficiency. That can be achieved by varying the bias voltage of silicon avalanche photodetectors. If we test the two maximum-correlation angles (minimum coincidences and maximum coincidences), then increasing efficiency should lead to faster growth of the coincidence count for the minimum and slower growth for the maximum, with a possible tendency for the growth rates to become equal at 50% efficiency (increasing efficiency near the 50% level should contribute to the graph's minimum and maximum by the same amount).

    Increasing photodetector efficiency leads to an increased noise level (dark count rate), and that can explain the bias toward not noticing this tendency: qualitatively the two effects are indistinguishable, and only quantitative analysis can separate them.
    So, to make this experiment more reliable, cooled detectors with decreased dark count rates would be preferable.
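
    For contrast, here is a minimal Monte Carlo sketch (my own illustration; the 0.98/0.02 match probabilities and the efficiency values are arbitrary stand-ins) of the fair-sampling baseline this experiment would test against: with independent detection losses, coincidence counts at the minimum and maximum angles both scale as the square of the efficiency eta, so their ratio stays flat as eta grows, whereas the hypothesis above predicts the two growth rates converging near 50% efficiency.

    Code (Python):
    import random

    def coincidence_count(p_match, eta, n_pairs=200_000):
        # Fair-sampling baseline: each photon of a pair is detected
        # independently with probability eta, regardless of polarization.
        count = 0
        for _ in range(n_pairs):
            both_detected = random.random() < eta and random.random() < eta
            same_port = random.random() < p_match  # idealized cos^2 correlation
            if both_detected and same_port:
                count += 1
        return count

    for eta in (0.05, 0.25, 0.50):
        c_max = coincidence_count(0.98, eta)  # near the maximum-correlation angle
        c_min = coincidence_count(0.02, eta)  # near the minimum (residual floor)
        print(f"eta={eta:.2f}  max={c_max}  min={c_min}  min/max={c_min / c_max:.3f}")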
     
  3. Nov 19, 2009 #2

    DrChinese


    That is a "fair" question. :smile: But don't expect it to be easy! There are a lot of good reasons this is not considered so big a deal at this time, going from the general to the more specific. I will also address your points in a second post.

    Let's keep in mind and agree to the following at all times: If Bell test results (violating inequalities) are invalid because fair sampling is a bad assumption, then:

    a) Local realism is true after all (otherwise the issue is moot, akin to the price of tea in China);
    b) The predictions of QM match relevant tests within experimental limits;
    c) The predictions of the true local realistic theory - which we will call LRT(theta) - are different than the predictions of QM - per Bell, they must be;
    d) We will acknowledge that there are detector inefficiencies, and therefore for any entangled pair emitted either 0, 1 or 2 photons may be detected within any given time window of size T; as a result, Bell tests rely on coincidence counting within that window;
    e) There is a currently unknown mechanism by which an "unfair" sample is being presented;
    f) Since the "unfair" sample matches the predictions of QM where QM(theta) = cos^2(theta), the "bias" can be quantified as being BIAS(theta) = QM(theta) - LRT(theta);
    g) When testing Alice and Bob: we will discuss using photons from a PDC Type I source (matching polarizations for Alice and Bob), and use a PBS and 2 detectors on each side (2 splitters and 4 detectors total), so as to rule out the issue that the detectors or the polarizing beam splitters are inconsistent.

    If we don't agree on these points, it will be hard to go very far as I think these are all either explicit to the debate or simply definitions.
     
    Last edited: Nov 19, 2009
  4. Nov 19, 2009 #3

    DrChinese


    Now, assuming the above 7 rules, we can say the following:

    1. We are missing a hypothetical LRT(theta). What is the true coincidence rate?

    The usual math, assuming Malus and statistical independence of a single underlying polarization on both sides, gives us .25 + .5*cos^2(theta), so that LRT(0) = .75. There are no perfect correlations! Of course, by our assumptions above that is not a problem. We simply have BIAS(0) = QM(0) - LRT(0) = 1.00 - .75 = .25. Similarly, we have BIAS(90) = QM(90) - LRT(90) = 0 - .25 = -.25. In fact, the only place where the two agree is at theta = 45, which yields QM(45) = LRT(45) = .5. Then BIAS(45) is 0. (A quick numeric check of these values appears at the end of this post.)

    Understand that I am not asserting that the above function for LRT(theta) is the only one; I understand it is merely one of a large number of possibilities. I would welcome any you would care to propose; it won't matter.

    2. Regardless of whether you accept my LRT(theta) in 1. above, the bias mechanism varies with theta. Since QM(theta) varies between 0 and 1, there is NO LRT(theta) such that BIAS(theta) is a fixed amount. If there were, then the LRT(theta) would either be >1 or <0 at some points.

    3. If Alice and Bob are separated and reality is local, how can BIAS vary with theta? Clearly, there is no bias at 45 degrees, but there will be bias at most other angles. For the bias mechanism to make sense, it must somehow know the angle delta between Alice's and Bob's settings. And yet this violates our premise a).

    I think you should be able to see the difficulties here.
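
    To make points 1 and 2 concrete, here is a quick numeric check (a sketch; the LRT form is just the candidate from point 1, not a claim about the true local realistic theory):

    Code (Python):
    import math

    def qm(theta_deg):
        return math.cos(math.radians(theta_deg)) ** 2

    def lrt(theta_deg):  # candidate from point 1: .25 + .5*cos^2(theta)
        return 0.25 + 0.5 * qm(theta_deg)

    # Point 1: the bias swings with theta
    for theta in (0, 22.5, 45, 67.5, 90):
        print(f"theta={theta}: QM={qm(theta):.3f} LRT={lrt(theta):.3f} "
              f"BIAS={qm(theta) - lrt(theta):+.3f}")

    # Point 2: a constant bias c would force LRT = QM - c outside [0, 1]
    for c in (0.25, -0.25):
        print(f"c={c:+.2f}: LRT would span [{-c:+.2f}, {1 - c:+.2f}]")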
     
  5. Nov 19, 2009 #4

    DrChinese


    Let's look at the BIAS function in a more detailed manner:

    If we were somehow able to sample 100% of all events, then the true results would be LRT(theta) and not QM(theta). Therefore, the BIAS would be 0. So the BIAS function must include as a parameter the sampling percentage (SP). As SP -> 1, BIAS(theta) -> 0.

    Of course, the problem with this - which is not absolute but certainly a difficulty - is that in actuality as SP has increased there has been NO change in the BIAS function at all. Instead, we just get more and more confirmation of QM(theta)!
     
  6. Nov 19, 2009 #5

    DrChinese


    You can't go very far with this argument, as this is precisely the Bell argument reformulated. Bell's argument is that IF you assume there are well-defined answers for things you cannot observe - as in your example above - the results are not self-consistent. Therefore, the assumption is wrong.

    Now, why didn't Bell ask himself whether the fair sampling assumption is to blame instead? Because Bell's Theorem itself does not depend on fair sampling at all! He asserts that NO local realistic theory can give the same results as QM. If you did a Bell test with a 100% sample size, your results above could not make sense. You have actually demonstrated why local realism fails. Now, this result would not apply IF you had some OTHER function than the cos^2(theta) formula. OK, what is it? You will see quickly that finding an alternative formula which IS consistent is no piece of cake. And of course, it varies from experiment to experiment.
     
  7. Nov 20, 2009 #6

    zonde


    a) Local realism is true after all (otherwise the issue is moot, akin to the price of tea in China);
    Yes

    b) The predictions of QM match relevant tests within experimental limits;
    Yes, but with reservations. The reason is that the main aim of the experiments is proving violation of Bell inequalities, and if we analyze the results from a different perspective, some questions may or may not arise.

    c) The predictions of the true local realistic theory - which we will call LRT(theta) - are different than the predictions of QM - per Bell, they must be;
    Let's formulate the local realistic theory as LRT(theta,SP), where SP is the sampling percentage.

    d) We will acknowledge that there are detector inefficiencies, and therefore for any entangled pair emitted either 0, 1 or 2 photons may be detected within any given time window of size T; as a result, Bell tests rely on coincidence counting within that window;
    Yes

    e) There is a currently unknown mechanism by which an "unfair" sample is being presented;
    Let's say there is no well-formulated model that proposes a mechanism by which an "unfair" sample is being presented. But I think I can formulate something as a starting point for a discussion if there is interest.

    f) Since the "unfair" sample matches the predictions of QM where QM(theta) = cos^2(theta), the "bias" can be quantified as being BIAS(theta) = QM(theta) - LRT(theta);
    Let's formulate it this way: BIAS(theta,SP) = SP*QM(theta) - LRT(theta,SP)

    g) When testing Alice and Bob: we will discuss using photons from a PDC Type I source (matching polarizations for Alice and Bob), and use a PBS and 2 detectors on each side (2 splitters and 4 detectors total), so as to rule out the issue that the detectors or the polarizing beam splitters are inconsistent.
    Yes


    Now more about the contradiction with QM.
    In a real experiment, for QM(90) = 0 to match the experiment we have to assume that there is some small amount of pairs that disentangle due to decoherence, so that we have EXP(90) = SP*QM(90) + SP*DEC + N, where EXP(theta) is the actual value from the experiment, DEC is the rate of disentangled pairs, and N is the noise due to dark counts.
    Now, about these disentangled pairs: what does QM predict for this value? I assume there are no definite predictions, except that such a thing exists and should be minimized in experiments.

    But so far we have assumed that the proportion between the amount of entangled pairs and the amount of disentangled pairs is constant with respect to the SP value.
    Let's change that. What do we have now?
    EXP(theta) = F1(SP)*QM(theta) + F2(SP)*DEC + N
    I will choose F1(SP) and F2(SP) in a certain way: F2(SP) rises from 0 to its maximum over the interval, while F1(SP) starts at its maximum, drops to 0 at the middle of the interval, and becomes maximally negative at the end of the interval, so that its integral over the interval is 0.

    And now it is possible to have LRT(theta,SP) such that BIAS(theta,SP) = F1(SP)*QM(theta) - LRT(theta,SP) = -F2(SP)*DEC.
    And we can even drop theta from BIAS().
    Of course, the not-so-nice thing here is the negative probability.
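
    To make the construction concrete, a minimal sketch (my own illustrative choice of functions; any pair with these qualitative shapes would do):

    Code (Python):
    def F1(sp, a=1.0):
        # Falls from +a at SP=0 through 0 at SP=1/2 to -a at SP=1,
        # so its integral over [0, 1] vanishes.
        return a * (1.0 - 2.0 * sp)

    def F2(sp, b=1.0):
        # Rises from 0 at SP=0 to its maximum b at SP=1.
        return b * sp

    # numeric check of the zero-integral property of F1 (midpoint rule)
    steps = 100_000
    print(sum(F1((k + 0.5) / steps) for k in range(steps)) / steps)  # ~0.0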

    What do you say? Does it directly contradict QM or not?
     
  8. Nov 20, 2009 #7

    zonde


    No, I do not make any assumptions in the description of this experiment about things behind the measurement equipment readings, so that claim is not true. Or, if you have spotted such an assumption, point it out and I will try to fix it.
    The only assumption is the cos^2(theta) law, and that it does not depend on the sampling percentage, i.e. the prediction is SP*cos^2(theta) even when SP = 100%.

    Try to analyze these experiments using some interpretation that claims to have a solution for the violation of Bell's inequalities, and see what you get. Can you still hold to the cos^2(theta) formula?
     
  9. Nov 20, 2009 #8

    DrChinese


    A negative probability would ruin it completely. That was essentially my point: all attempts to create such a function will create contradictions. I don't understand the point of the DEC, as this is not an important element in actual tests and may as well be zero (otherwise the experimental results would not support QM). Same for N, which is known precisely. You also can't have EXP() different from QM(), as these are identical, as we know from experiment.

    And you cannot have the LRT() function depend on SP, since the idea is that LRT() yields a value different from QM when all pairs are seen - per Bell. In other words, at no point can LRT() yield the same predictions as QM(). Adding SP as a variable has the effect of including the bias mechanism in the function, and we don't want to do that because we need to keep it separate. And we need to identify the LRT() function explicitly, or at least the values it generates.
     
  10. Nov 23, 2009 #9

    zonde


    About DEC: the idea is that it is an important element and it can't be zero (except maybe as SP -> 0), and in that way it makes the QM prediction not entirely correct.
    A bit of a mistake on my side: there is no negative probability in my equation, only a negative derivative, i.e. DEC "eats away" the cos^2(theta) term as SP increases. This is not a clear contradiction, but it is a bit counterintuitive that one factor decreases as the sampling percentage increases. However, I found a way to get away from this.

    We can rewrite this equation differently (I will drop N from the equation because it can be determined fairly well):
    EXP(theta) = F1(SP)*cos^2(theta) + F2(SP)*DEC

    We modify it this way, treating the disentangled contribution as flat, DEC = cos^2(theta) + sin^2(theta):
    EXP(theta) = F1(SP)*cos^2(theta) + F2(SP)*cos^2(theta) + F2(SP)*sin^2(theta) =
    (F1(SP) + F2(SP))*cos^2(theta) + F2(SP)*sin^2(theta)

    That way there are no counterintuitive tendencies in the equation.
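
    A quick symbolic check of this rearrangement (a sketch, assuming DEC enters as the flat term cos^2(theta) + sin^2(theta)):

    Code (Python):
    import sympy

    theta, f1, f2 = sympy.symbols('theta f1 f2')
    lhs = f1 * sympy.cos(theta)**2 + f2 * (sympy.cos(theta)**2 + sympy.sin(theta)**2)
    rhs = (f1 + f2) * sympy.cos(theta)**2 + f2 * sympy.sin(theta)**2
    print(sympy.simplify(lhs - rhs))  # 0, so the two forms are identical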


    Look, what is the fair sampling assumption in QM? We can express it as SP*QM(theta), i.e. QM(theta) is always proportional to SP.
    So if LRT() should describe the unfair sampling situation, it can't have the same form as QM(), meaning SP*LRT(); instead, SP should be a parameter of LRT(), that is, LRT(SP).

    Reread the definition of the fair sampling assumption: "the photon sample detected is a faithful representative of the photon sample emitted". If we hypothetically reject that assumption, we say that "the photon sample detected is not a faithful representative of the photon sample emitted", i.e. we have two different functions, one for a part of the sample and one for the full sample. To unify these two functions into one, we have to include SP in the function.
     
  11. Nov 23, 2009 #10

    DrChinese


    I want you to tell me what a faithful representation of the entire sample (no undetected pairs) would look like. That is LRT(theta), and it does not depend on SP. We already know that the experimentally detected values match QM(theta) for all existing SP. We need to know what to expect when SP = 100%, because that is where Bell's Inequality will kick in!

    It is funny: I have had this conversation with Caroline Thompson and other local realists. NEVER once have I been able to get anyone to tell me what the value of LR(theta) is for ANY angle setting. All I want to hear is a number. If there is a number that works, what is it? If not, supporters of LR should quit telling me that LR is viable. Here is what I am looking for:

    LR(0) = ?
    LR(22.5) = ?
    LR(45) = ?
    LR(67.5) = ?
    LR(90) = ?

    What is the TRUE value for correlations at the above angles, when the entire sample is considered and there are no undetected pairs (SP=100%) ?

    Now obviously, since I know the experimental values, I can calculate the Bias and see if it varies with theta (which would violate strict locality, which was tested by Weihs et al). I can also determine whether Bell's Inequality is violated on the full sample (which would mean the values cannot be realistic). You cannot put in numbers for the above that won't run afoul of one or the other of the prime assumptions of locality and realism.
     
    Last edited: Nov 23, 2009
  12. Nov 24, 2009 #11

    zonde


    We are discussing photon experiments, right? In that case this statement is clearly wrong.
    We know QM(theta) only for SP ~= 5%.
    Can you give a reference for other levels of SP?

    I thought I had stated this already, but maybe it was not clear enough.

    For SP=100% we have
    LR(0) = 0.5
    LR(22.5) = 0.5
    LR(45) = 0.5
    LR(67.5) = 0.5
    LR(90) = 0.5
     
  13. Nov 24, 2009 #12

    DrChinese


    The SP issue is a bit more complicated so I will discuss this a little later. (It doesn't really matter whether the SP for existing experiments is 5% or 25% since they all say the same thing.)

    But LR(0) = .5? This is in direct contradiction to realism. A realistic theory says that there are pre-existing hidden variables being revealed by observation. Measuring the same attribute on particles fulfilling a conservation rule should always yield an exact match. Therefore LR(0) = 1 is required.
     
  14. Nov 25, 2009 #13

    zonde


    Different realistic theories can give different descriptions of entanglement. Your definition of entanglement for a realistic theory is not viable from my viewpoint (you implicitly include it in your statement that LR(0)=1).
     
  15. Nov 25, 2009 #14

    DrChinese


    Sure, they don't presume that entanglement is itself a real phenomenon, because there is no ongoing connection. EPR's example wasn't even entanglement as we know it. The question is whether knowledge of one member of a system is sufficient to tell you something about the other one. In that case, LR(0) must be 1. If it didn't, then there are no hidden variables being revealed (when conservation rules are considered, of course). How would you formulate a realistic scenario with conservation of spin, for example?
     
  16. Nov 26, 2009 #15

    zonde


    I would have to agree with you about LR(0) = 1 if we were discussing the case where fair sampling holds.
    But that is not the case; we are discussing unfair sampling.
    When we talk about theoretical detection of the full sample, we revert to fair sampling (the full sample can only be viewed as a fair sample, because there is no "room" left for unfair sampling). So when we compare the predictions of realistic experiments (assuming unfair sampling is at work) with a theoretical full-sample experiment, we confront unfair sampling with fair sampling.

    So how does unfair sampling change things? First of all, there should be some factor that determines the unfair sampling, i.e. the detection of some photons and the non-detection of others.
    When we resort to the full sample, we clearly discard any information about that unknown parameter, which is revealed only by the unfair subsample.
    If I hypothesize that photons are entangled in a detectability parameter, then the full sample should not show any sign of entanglement, because with the full sample we are completely ignoring that parameter.
    So obviously your point about "If it didn't, then there are no hidden variables being revealed" is true. But it does not mean that hidden variables are not being revealed by the unfair subsample.
     
  17. Nov 27, 2009 #16

    DrChinese


    1. I am not talking about the experimental values, simply the LR() predictions for 100% of the sample. (No fair sampling assumption required.) It is axiomatic that LR(0)=1 because that is the realistic requirement if there are hidden variables.

    Now, in all fairness, there *are* some LR variations that yield LR(0) < 1 - as you suggest - but they do not support perfect correlations AND they have even larger deviations from observation. So the question becomes: what is the LR assumption you want to use to explain the correlations? If LR() = .5, then you are saying that the underlying conditions are pure chance, which completely flies in the face of the observed correlations.

    So we can go this route if you like, but it seems weird to me.

    2. I am OK with this.
     
  18. Nov 28, 2009 #17

    zonde


    I do not understand. On the one hand you are saying this:
    But on the other hand you are saying this:
    Additionally, there are no photon entanglement experiments where the full sample is observed, so how can you say that this "flies in the face of the observed correlations"?

    So maybe you can explain your point a bit more?
     
  19. Nov 28, 2009 #18

    DrChinese


    If you have LR() = .5, then you have gigantic deviations from experiment. It doesn't come close to Malus (a classical result from 1809), so you are now almost rejecting that too. So you are now saying that the Bias function is:

    Bias(theta, SP)=f(SP) * .5-cos^2(theta)

    Also, as before: this varies with theta, is positive at some points and negative at others, and can vary in magnitude from 0 to 50%. That is a steep hill to climb, but it at least has this going for it: it does not violate Bell.
     
  20. Nov 30, 2009 #19

    zonde


    I guess that by "deviations from experiment" you mean deviations from the empirical law SP*cos^2(theta).
    About that empirical law: to consider it experimentally tested, it should be tested not only at different theta but at different SP as well. That is exactly the motivation for experiment #3 from my opening post.

    About Malus' law: if entangled particles are entangled in detectability but not in polarization, then there is no direct relation to Malus' law.
    It seems that Malus' law is still intuitively understood using photon hidden variables, but the traditional approach to hidden variables doesn't work for entanglement, so the situation is not very consistent anyway. A discussion of Malus' law and how the situation can be made consistent does not promise to be easy, so I propose not to pursue it unless we have no other points to discuss.

    First of all, the bias function is:
    Bias(theta, SP) = f(SP) * .5 - SP*cos^2(theta)

    This bias function, as I see it, describes the deviation from fair sampling. I do not see that negative values are problematic with my correction to the formula: as we are comparing SP*cos^2(theta) with an averaged value, there should be some values above the average and some below it.
    However, this fair sampling comparison makes me uncomfortable, because it is borrowed from the macro level and, as I see it, implies that the QM level can be thought of as colliding billiard balls.
    So I am not sure that it is physically meaningful to make a comparison with some averaged subsample (the f(SP) * .5 part).
     
  21. Nov 30, 2009 #20

    DrChinese


    1. The issue with Malus is that, classically, photons have polarization. Now, with your hypothesis it may be true that the detection of a photon depends on a hidden parameter associated with SP. But clearly: if you detect a photon, it matters not how many polarizers sit between the source and the detector; it matters only what angles they are set at. And those settings always follow Malus. So it is hard to say that two entangled photons lose any connection at all to Malus, but I agree it is at least conceivable (as a hypothesis).

    2. We agree that in your hypothesis it is as follows:
    Bias(theta, SP)=f(SP) * ( .5 - SP*cos^2(theta) )

    Just making sure I have the grouping as you see it.
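
    With that grouping, a quick evaluation (a sketch with illustrative values f(SP) = 1 and SP = 1; both are placeholders) shows how much the bias would have to swing across angles:

    Code (Python):
    import math

    def bias(theta_deg, sp=1.0, f_of_sp=1.0):
        return f_of_sp * (0.5 - sp * math.cos(math.radians(theta_deg)) ** 2)

    for theta in (0, 22.5, 45, 67.5, 90):
        print(f"theta={theta}: Bias={bias(theta):+.3f}")  # from -0.500 up to +0.500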

    Regardless, we still have the issue that Bias() is a function of theta, which it shouldn't be if locality applies.
     