
Arguments in favour of the fair sampling assumption

  1. Mar 12, 2013 #1
    Hi all,

    I'm no expert in quantum mechanics by any means, but I've been quite interested in, and done quite some research on, Bell's theorem and related inequalities such as CH and CHSH. The derivations all look perfectly sound, except that they all rely on the "no enhancement assumption", some even on the stronger "fair sampling assumption".
    Now it seems quite doable to formulate a theory with regular, classical hidden variables that matches CH and CHSH in all correlations (i.e. the coincidence rates between detected photons, with the polarizers either set at specific angles or removed). Simply by assuming there exist hidden variables that determine the chance that a particle is detected, it seems possible to match the coincidence rates that quantum mechanics predicts.

    I have two questions regarding this:
    1. Do such hidden variable theories exist, that are completely classical and match the coincidence rates predicted by QM?
    2. Regardless of whether such a theory exists, why is the "no enhancement assumption" assumed to be true? What are the arguments in favour of it?


    Thanks in advance,
    Gespex
     
  3. Mar 15, 2013 #2
    I don't like bumping - but isn't this a fair question? I've found a paper that seems to correspond with the QM predictions here:
    http://arxiv.org/pdf/quant-ph/9905018v1.pdf

    I know it's "only" arxiv, but the maths look sound...
     
  4. Mar 15, 2013 #3

    Nugatory

    Staff: Mentor

    It's harder than it sounds, because what matters is the relative angle between the two detectors. The hidden variable theory would have to produce the same result with A at zero degrees and B at 120 degrees as with A at 10 degrees and B at 130 degrees, and it would have to produce the same result with A at zero degrees and B at zero degrees as with A at 10 degrees and B at 10 degrees. And if the theory is to be local, the probability of detection at A must be written as a function of just the hidden variables and the setting of A; that is, for a given setting of A it must be the same regardless of what B is set to.
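    Schematically, for the photon case: the quantum prediction for the coincidence rate depends only on the relative angle (something like cos²(a − b), up to normalization, for a typical polarization-entangled state), while a local hidden variable model has to build that number out of factors that each see only one setting,

    [tex]P(a,b) \;=\; \int \rho(\lambda)\, p_A(\lambda, a)\, p_B(\lambda, b)\, d\lambda ,[/tex]

    where p_A may not depend on b and p_B may not depend on a.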
     
  5. Mar 15, 2013 #4
    Thanks for your reply!
    Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to present such a local hidden variable theory, one which completely agrees with QM's predictions.

    So why is it still assumed that there is some sort of collapse, rather than that very model being true?
     
  6. Mar 15, 2013 #5

    jtbell

    Staff: Mentor

    While you're waiting for a response to your question, I'd just like to point out that a forum search for "Gisin" (use the "Search this forum" link at the top right of the thread list) turns up a number of hits, some of which probably discuss this paper.
     
  7. Mar 15, 2013 #6

    DrChinese

    Science Advisor
    Gold Member

    You must understand that if you are to reproduce the QM results, you need something somewhere to be sensing the global context or otherwise be leaving some kind of trace. If it is a global context, then it is not local. If there is some other trace, then it is experimentally falsifiable.

    In this case, the point is that the likelihood of detection depends on the relationship between the hidden angle and the angle being measured. Were that the case, the total intensity of light emerging from a polarizing beam splitter would vary with the input angle. That doesn't happen, ergo it is falsified before you start.
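    Concretely: if the probability that a photon registers at all is some function p(a − λA) of the angle between the analyzer setting a and the hidden polarization λA, then a beam prepared with a definite polarization λA and sent into a polarizing beam splitter oriented at a should show a total output (both ports combined) of roughly

    [tex]I_{\text{out}}(a) \;\approx\; I_{\text{in}}\, p(a - \lambda_A),[/tex]

    which would change as you rotate the splitter. Experimentally the two output ports always sum to (essentially) the input intensity, independent of the input polarization.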

    After his Eq. (2):

    "when a happens to be close to λA then the probability that an outcome is produced is larger than when a happens to be nearly orthogonal to λA."

    There is always something like this hanging around (and of course it is not a prediction of QM). Has to be, because of Bell!
     
  8. Mar 15, 2013 #7

    Cthugha

    Science Advisor

    Some comments. First, it is not just arXiv: it has been published in Physics Letters A, Volume 260, Issue 5, 20 September 1999, Pages 323–327 (http://dx.doi.org/10.1016/S0375-9601(99)00519-8).

    Second, I do not know what you are reading into the article, but Gisin is an opponent of local hidden variable theories. However, as this article dates back to 1999, one should rather read it as giving an estimate of the detector quantum efficiencies one would need to close the detector efficiency loophole in experiments on entanglement. Actually, that is rather clever: once he had outlined the importance of these loopholes, he went on to perform experiments on closing them and published those in high-impact journals.
     
  9. Mar 15, 2013 #8
    Thanks for your answers, guys! Really appreciated, but I have a few questions left.

    @DrChinese:
    But isn't λA completely random per particle? Yes, if it's close to the measured angle it has a greater chance of being detected (at one side, not at the other). However, the angle between λA and a is completely random. So given a large number of particles, the chance of an outcome being produced is the same for every angle: 50% for half of the entangled particles and 100% for the other half, so 75% on average. That's completely independent of the actual measurement angle, right?
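    (In symbols: whatever the detection rule p(·) is, if λA is uniformly random then the average detection rate is

    [tex]\bar{p}(a) \;=\; \frac{1}{\pi}\int_0^{\pi} p(a-\lambda)\, d\lambda \;=\; \frac{1}{\pi}\int_0^{\pi} p(u)\, du ,[/tex]

    which does not depend on a at all.)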

    I actually wrote a program to simulate a similar model (a sketch of the general idea follows below). I made one function for each part of the experiment, to prevent some small bug from letting one detector accidentally influence the other particle or detector. It reproduced (within very reasonable margins) the same outcomes as CHSH and CH74, and as this paper. The number of particles measured was constant for every measured angle.
    (Unfortunately, I no longer have this program, but it seems to follow logically from this paper anyway.)
    Or am I missing something here?
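    For concreteness, here is a minimal sketch of the kind of simulation I mean. This is my own reconstruction (from memory) of the spin-1/2 version of the model in the paper, so the exact detection rule may not be exactly theirs; the point is just that each side's function sees only its own setting and the shared hidden variable:

    [code]
import numpy as np

rng = np.random.default_rng(0)

def source(n):
    # Each pair carries one shared hidden unit vector lam, uniform on the sphere.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def alice(lam, a):
    # Alice sees only her setting a and the hidden variable.
    # Outcome -sign(a . lam); she always detects.
    return -np.sign(lam @ a), np.ones(len(lam), dtype=bool)

def bob(lam, b):
    # Bob sees only his setting b and the hidden variable.
    # Outcome +sign(b . lam), but he detects only with probability |b . lam|.
    proj = lam @ b
    return np.sign(proj), rng.random(len(lam)) < np.abs(proj)

def run(theta, n=200_000):
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    lam = source(n)
    A, det_a = alice(lam, a)
    B, det_b = bob(lam, b)
    coinc = det_a & det_b                     # pairs where both sides fired
    return (A[coinc] * B[coinc]).mean(), coinc.mean()

for deg in (0, 30, 60, 90, 120, 180):
    E, frac = run(np.radians(deg))
    print(f"{deg:3d} deg: E_sim = {E:+.3f}   QM singlet = {-np.cos(np.radians(deg)):+.3f}   "
          f"detected pairs = {frac:.2f}")
    [/code]

    Among the detected pairs this gives the singlet correlation -cos(theta) within statistics, even though each function is purely local; the price is that only about half of Bob's particles register, which is exactly the kind of thing the fair sampling assumption is meant to exclude.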

    @Cthugha:
    So have people done the experiment with a detector efficiency of >75% yet? And also, how can we be so certain that the missed detections aren't inherent to detection itself? In other words, how can we measure the real detector efficiency for particles that simply *can't* be measured? We can't put in another detector, as it would suffer from the same flaw, right? Can we tell some other way?
    The only way I can think of to be certain is by knowing exactly how many particles should have been measured. As it may be inherently impossible to measure this number directly, to tell for sure we would have to know exactly how many particles the source emits. But how can we know this for sure, if we have no way to actually test it? We may simply be wrong there...
    Or is there some other method here?


    Again, thanks for your replies guys, I really appreciate it! I hope I'm not being ignorant here, as I said, I'm not an expert, just a programmer with some interest in QM.
     
  10. Mar 15, 2013 #9

    Cthugha

    Science Advisor

    Sure, this has been done as early as 2001, using ions instead of photons, in David Wineland's group (Nature 409, 791–794 (15 February 2001)). For ions the detection efficiency is essentially 100%.

    Well, if it cannot be detected at all, it does not exist. If you instead consider an unfair sampling scenario, you can rule that out by using efficient detectors: at high efficiencies the predictions of standard entanglement and of unfair-sampling models differ, and that difference is testable (as in the paper mentioned above).
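    For reference, the efficiency thresholds usually quoted are roughly

    [tex]\eta \;>\; \frac{2}{1+\sqrt{2}} \;\approx\; 0.83[/tex]

    for a maximally entangled state tested with CHSH (Garg and Mermin), coming down to about 2/3 for suitably non-maximally entangled states (Eberhard), so the exact number depends on the state and the inequality used.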

    Well, you obviously can know the total output within some time interval by placing many detectors or using non-linear detectors.

    I see no problem with measuring the total output power and the statistics of the light field in question.

    Also, you can calibrate your detector and find out its quantum efficiency. This can be done by using down-converted light or by subjecting light to a non-linearity (like second harmonic generation). You can also perform detector tomography (Nature Physics 5, 27–30 (2009)) to fully characterize your detector.
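    The down-conversion calibration works because the photons come in pairs: every count at one detector heralds a partner photon heading toward the other, so the efficiency of detector A can be estimated from the ratio of coincidence counts to single counts at B,

    [tex]\eta_A \;\approx\; \frac{N_{\text{coinc}}}{N_B},[/tex]

    without having to know the absolute pair production rate (this is essentially Klyshko's method).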
     



