
Fair Sampling Loophole now closed for Photons

  1. Dec 4, 2012 #1

    DrChinese

    Science Advisor
    Gold Member

    A team including Anton Zeilinger has performed an experiment closing the so-called "fair sampling loophole" for photons.

    http://arxiv.org/abs/1212.0533

    Bell violation with entangled photons, free of the fair-sampling assumption

    Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger

    (Submitted on 3 Dec 2012)

    "The violation of a Bell inequality is an experimental observation that forces one to abandon a local realistic worldview, namely, one in which physical properties are (probabilistically) defined prior to and independent of measurement and no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction making them vulnerable to so-called "loopholes." Here, we use photons and high-efficiency superconducting detectors to violate a Bell inequality closing the fair-sampling loophole, i.e. without assuming that the sample of measured photons accurately represents the entire ensemble. Additionally, we demonstrate that our setup can realize one-sided device-independent quantum key distribution on both sides. This represents a significant advance relevant to both fundamental tests and promising quantum applications."

    Previously, a Bell inequality had been violated free of the fair-sampling assumption using Be+ ions (2001). That team included 2012 Nobel Prize winner David Wineland of NIST. This new experiment has the advantage of having been performed with photons. I anticipate that a future variation might be performed to close both the fair-sampling AND the locality loopholes simultaneously.
     
  3. Dec 4, 2012 #2
    Another conspiracy theory laid to rest. They've been beating a dead horse though.
     
  4. Dec 4, 2012 #3
    Even if all loopholes are closed simultaneously some day, I have a feeling the battle will continue among those who want to avoid non-locality at all costs; that is, superdeterminists. See, for example, this fairly recent paper:
    Bell’s Theorem: the Neglected Solution.
    http://lanl.arxiv.org/ftp/arxiv/papers/1203/1203.6587.pdf
     
    Last edited: Dec 4, 2012
  5. Dec 4, 2012 #4

    DrChinese

    Science Advisor
    Gold Member

    There are already a number of folks who are basically science deniers. You can never please them.

    I am excited, though, that this team was able to put this together. It seems that these experiments keep pushing the boundaries of our overall understanding of entanglement.
     
  6. Dec 4, 2012 #5

    zonde

    Gold Member

    Hey, is this some gathering of trolls or what?

    I am excited too, in a sense. This experimental result is an argument against all LHV theories that rests on a valid foundation. Of course, such an important result needs to be scrutinized, but if confirmed... well, then we have a real mystery, IMO.
     
  7. Dec 5, 2012 #6
    Interesting that you put Gerard 't Hooft in that category, especially after highlighting that the 2012 Nobel Prize winner was part of the work...
     
  8. Dec 5, 2012 #7

    DrChinese

    Science Advisor
    Gold Member

    Well first off, I didn't intend to imply that all Superdeterminists are science deniers. I believe Gerard 't Hooft is well aware of the pressures Bell places on theory development, and acknowledges the Bell essentials. There is a different group of Bell attackers who essentially deny element after element of the (Bell et al) arguments against local hidden variable theories (just as there are pseudo-scientists who deny elements of evolution). I was really talking about that crew. For them, their focus changes as their arguments are torn down one by one. Joy Christian is more an example of that group. After all, you can always win an argument by denying your opponent's basic scientific tenets, and that is what I call a "science denier". (But that never leads to any useful scientific advance, otherwise a testable prediction would result.)

    On the other hand, I think Superdeterminists (like 't Hooft, esteemed as he is) have yet to demonstrate that their argument IS actually scientific in a traditional sense. I assert that Superdeterminism qualifies as a religion more than science: it is a belief that A THEORY COULD EXIST that would explain something rather than a falsifiable theory itself. I would challenge 't Hooft to demonstrate any concrete element of Superdeterminism that explains the mystery of the Bell results any better than it explains why we measure c to be a constant in any reference frame. (In other words, why Superdeterminism should be invoked for one scientific area and not all others, including as an explanation of human evolution.) For example, 't Hooft says in a recent paper:

    http://arxiv.org/abs/1207.3612

    "Bell’s inequalities[1] and similar observations[2][3] are applied with mathematical rigor to prove that quantum mechanics will be the backbone of all theories for sub-atomic particles of the future, unless, as some string theorists have repeatedly stressed, “an even stranger form of logic might be needed”[4].

    The author of this paper takes a minority's point of view[5][6], which is that, in order to make further progress in our understanding of Nature, classical logic will have to be restored, while at the same time our mathematical skills will need to be further improved. We have reasons to believe that the mathematics of ordinary statistics can be rephrased in a quantum mechanical language and notation; indeed this can be done in a quite natural manner, such that one can understand where quantum phenomena come from."


    Basically, he is developing an ad hoc theory to explain behavior that is already described by another theory (quantum mechanics) with a particular non-standard agenda (and he explicitly acknowledges this). Yet there is no explanation of why the same logic is not being applied elsewhere in science. Nor does it explain the hundreds of experiments evidencing entanglement. Instead he attempts to justify his position on heuristic grounds when there are obvious counter-arguments to his position.

    So I question how this really qualifies as normal science; and to the extent it does, I would then say it IS a "science denier's" argument. I can't see any evidence his arguments (or Superdeterminism in general) are being seriously followed by the community, but I am subject to correction. About all you can say is that Superdeterminism is thrown around as an escape to Bell, without there being any serious discussion of how that could occur. You may as well qualify EVERY scientific theory as having Superdeterminism as an escape.
     
    Last edited: Dec 5, 2012
  9. Dec 6, 2012 #8
    Good remark! Have you already found the time to read that article? I hope to find time in the coming week - but that's unlikely...
     
  10. Dec 6, 2012 #9
    This is really something! Now it's almost impossible for people to hide behind loopholes to evade Bell's theorem. Now they have to say something extreme, like "different loopholes explain different photon Bell tests", and that gets into superdeterministic territory, since the photons would have to somehow know in advance what loophole the experimenter is going to close in any given experiment.
     
  11. Dec 6, 2012 #10

    zonde

    Gold Member

    I found out that these two references are behind a paywall:
    [14] P. H. Eberhard, "Background Level and Counter Efficiencies Required for a Loophole-Free Einstein-Podolsky-Rosen Experiment," Physical Review A 47, 747–750 (1993).
    [18] J. F. Clauser and M. A. Horne, "Experimental consequences of objective local theories," Physical Review D 10, 526–535 (1974).

    But it seems that the proof of the inequality that the paper uses to interpret the results is rather simple. Inequality (3) is:
    [tex]J=S(\alpha_1)+S(\beta_1)+C(\alpha_2,\beta_2)-C(\alpha_1,\beta_1)-C(\alpha_1,\beta_2)-C(\alpha_2,\beta_1)\geqslant 0[/tex]

    I will check my proof and then I will post it here.
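    In the meantime, here is a minimal sketch of how a J-value could be computed from raw counts (my own notation and made-up numbers; I am assuming S(α) is a singles count at setting α and C(α,β) a coincidence count, per Eberhard's construction):
    Code (Python):

    def eberhard_J(S, C):
        # Inequality (3): J >= 0 for any local realistic theory.
        return (S["a1"] + S["b1"] + C[("a2", "b2")]
                - C[("a1", "b1")] - C[("a1", "b2")] - C[("a2", "b1")])

    # Hypothetical counts, for illustration only:
    S = {"a1": 1000, "b1": 1200}
    C = {("a1", "b1"): 3000, ("a1", "b2"): 500,
         ("a2", "b1"): 400, ("a2", "b2"): 100}
    print(eberhard_J(S, C))  # -1600 here; a negative J signals a violation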
     
  12. Dec 7, 2012 #11
    Where was it published, or which journal accepted it for publication? I don't see any indication of that...
     
  13. Dec 7, 2012 #12
    Attached are both papers.
     
    Last edited by a moderator: Dec 11, 2012
  14. Dec 8, 2012 #13

    zonde

    Gold Member

    Thanks a lot.
    So the proof of the inequality is in the second paper, "APPENDIX A: TWO INEQUALITIES," first part. It's very simple indeed.

    My version was about writing down all possible combinations of detections/non-detections at the different settings and checking that no combination can produce a negative value. Like this (each row is one combination; the last column is the sum of its contributions):
    Code (Text):

    S(α2) | -C(α2,β1) | +S(β1) | -C(α1,β1) | +S(α1) | -C(α1,β2) | S(β2) | +C(α2,β2) | sum
      0   |           |        |           |        |           |       |           |  0
          |           |   +    |           |        |           |       |           |  +
          |           |        |           |   +    |           |       |           |  +
          |           |        |           |        |           |   0   |           |  0
      0   |     -     |   +    |           |        |           |       |           |  0
          |           |   +    |     -     |   +    |           |       |           |  +
          |           |        |           |   +    |     -     |   0   |           |  0
      0   |           |        |           |        |           |   0   |     +     |  +
      0   |     -     |   +    |     -     |   +    |           |       |           |  0
          |           |   +    |     -     |   +    |     -     |   0   |           |  0
      0   |           |        |           |   +    |     -     |   0   |     +     |  +
      0   |     -     |   +    |           |        |           |   0   |     +     |  +
      0   |     -     |   +    |     -     |   +    |     -     |   0   |     +     |  0
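    The same thing can be checked by brute force. A minimal sketch (my simplification: each photon's response at each setting is predetermined as counted (1) or not counted (0), and a coincidence is the product of the two sides):
    Code (Python):

    from itertools import product

    # Per-pair contribution to J under a deterministic local strategy:
    # a1, a2 = Alice's predetermined responses at settings α1, α2 (1 = counted),
    # b1, b2 = Bob's predetermined responses at settings β1, β2.
    worst = min(
        a1 + b1 + a2 * b2 - a1 * b1 - a1 * b2 - a2 * b1
        for a1, a2, b1, b2 in product((0, 1), repeat=4)
    )
    print(worst)  # 0 -> no combination produces a negative value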
     
     
  15. Dec 8, 2012 #14

    zonde

    Gold Member

    Arxiv is a preprint site :wink:. But you could ask to which journal it has been submitted.
    And I would like to see how it will pass peer review with the standard-deviation calculation as presented in the paper. I would say it is invalid.
     
  16. Dec 8, 2012 #15
    OK, but then: which Physicsforums Mentor reviewed it and accepted it for discussion here? :confused:
     
  17. Dec 8, 2012 #16

    DrChinese

    Science Advisor
    Gold Member

    That's not the process. It is not a black-and-white rule. Arxiv papers are usually acceptable as references when the results or direction are consistent with accepted science, or when the authors are well respected in their field. This paper is from a world-class team. And the result is consistent with a published paper from a Wineland team.

    Lacking objections per the above, you would not be likely to get a mentor to step in against the reference. On the other hand, you are free to deny it in private. :smile:
     
  18. Dec 8, 2012 #17

    Wouldn't that qualify Zeilinger as a troll as well?

    There is a mystery only if one insists that QM be local realistic. It's your obligation to prove how that should be possible, given all the evidence to the contrary.
     
  19. Dec 9, 2012 #18

    zonde

    Gold Member

    I don't follow your logic, sorry.

    Am I obligated to prove that QM can be local realistic? :eek:
    Why am I obligated? And how should I do that, when it's known since Bell's theorem that it's not possible?
     
  20. Dec 9, 2012 #19
    I think Maui meant that if you want to believe in local realism, the burden of proof is on you to prove that local realism is compatible with the overwhelming amount of experimental evidence from Bell tests that seems to point in the other direction.
     
  21. Dec 9, 2012 #20

    zonde

    Gold Member

    I would like to discuss something from the paper.
    "We used a value of ~0.3 for r and measured for a total of 300 seconds per setting at each of the four settings α1β1, α1β2, α2β1, and α2β2 described by angles α1 = 85.6°, α2 = 118.0°, β1 = −5.4°, and β2 = 25.9°."

    "After recording for a total of 300 seconds per setting we divided our data into 10-second blocks and calculated the standard deviation of the resulting 30 different J-values. This yields a sigma of 1837 for our aggregate J-value of J = −126715, a 69-σ violation."

    So we have four datasets recorded at different times. We calculate a J-value based on the assumption that there is a certain correspondence between the datasets. Then we divide each dataset into 30 smaller datasets and calculate J-values using the four smaller datasets.
    But that gives no idea how good that correspondence between the four datasets actually is. PDC sources require high-power lasers, and as a result drifts are nothing unexpected for such sources.

    So it seems that separate runs with the same settings should be compared in order to talk about some error estimates.

    Or to make it clearer, we can look at it from a slightly different side. Two runs - α1β1 and α1β2 - both record an S(α1) value. So which one is used in the calculations? And how big is the difference between the two (assuming that the length of a dataset is determined by time and not by singles counts)?
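    For reference, my reading of their error estimate is something like the following (a reconstruction, not necessarily exactly what they did; I am assuming the 30 blocks are treated as independent):
    Code (Python):

    import statistics

    def blockwise_significance(j_blocks):
        # j_blocks: the 30 J-values from the 10-second blocks of a full run.
        j_total = sum(j_blocks)  # aggregate J
        # Standard deviation of the aggregate, if blocks are independent:
        sigma = statistics.stdev(j_blocks) * len(j_blocks) ** 0.5
        return j_total, sigma, abs(j_total) / sigma  # significance in sigmas

    # My objection: this spread reflects block-to-block noise within each run,
    # but says nothing about drift between the four separately recorded settings.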
     