Fair Sampling Loophole Now Closed for Photons

In summary, a team led by Anton Zeilinger performed an experiment with entangled photons that closes the "fair sampling loophole", demonstrating the violation of a Bell inequality without assuming that the detected photons are a representative sample of the entire ensemble. This represents a significant advance for both fundamental tests and potential quantum applications. The experiment also has the advantage of using photons rather than the Be+ ions of a comparable 2001 experiment. Even so, debate will likely continue among those who try to avoid non-locality at all costs, such as superdeterminists, as well as among those who deny the scientific validity of Bell's theorem and similar results; this experiment provides strong evidence against the latter's arguments.
  • #1
DrChinese
A team including Anton Zeilinger has performed an experiment closing the so-called "fair sampling loophole" for photons.

http://arxiv.org/abs/1212.0533

Bell violation with entangled photons, free of the fair-sampling assumption

Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger

(Submitted on 3 Dec 2012)

"The violation of a Bell inequality is an experimental observation that forces one to abandon a local realistic worldview, namely, one in which physical properties are (probabilistically) defined prior to and independent of measurement and no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction making them vulnerable to so-called "loopholes." Here, we use photons and high-efficiency superconducting detectors to violate a Bell inequality closing the fair-sampling loophole, i.e. without assuming that the sample of measured photons accurately represents the entire ensemble. Additionally, we demonstrate that our setup can realize one-sided device-independent quantum key distribution on both sides. This represents a significant advance relevant to both fundamental tests and promising quantum applications."

Previously, a Bell inequality had been violated free of the fair-sampling assumption using Be+ ions (2001). That team included 2012 Nobel prize winner David Wineland of NIST. This new experiment has the advantage of having been performed with photons. I anticipate that a future variation might be performed to close both the fair-sampling AND the locality loopholes simultaneously.
 
  • #2
Another conspiracy theory laid to rest. They've been beating a dead horse though.
 
  • #3
Maui said:
Another conspiracy theory laid to rest. They've been beating a dead horse though.
Even if all loopholes are closed simultaneously some day, I have a feeling the battle will continue among those who want to avoid non-locality at all costs; that is, superdeterminists. See, for example, this fairly recent paper:
Besides a standard position with respect to Bell’s theorem (indeterminism) and a non-standard solution (strong nonlocality) there is a neglected solution – total determinism. At least for the moment, each of these positions is metaphysical, i.e. stands on hypotheses not part of physical theories in the strict sense. (A first conclusion seems therefore that a good dose of agnosticism is healthy in this debate.) Total determinism is neglected in the community of quantum physicists and philosophers because it has been dismissed as ‘conspiratorial’ from the start, by John Bell and others. However, we believe such a dismissal rests on heavy assumptions. We argued that from another point of view, determinism is the most straightforward interpretation of Bell’s theorem, resting on the simplest ontology.
Bell’s Theorem: the Neglected Solution.
http://lanl.arxiv.org/ftp/arxiv/papers/1203/1203.6587.pdf
 
  • #4
bohm2 said:
Even if all loopholes are closed simultaneously some day, I have a feeling the battle will continue among those who want to avoid non-locality at all costs; that is, superdeterminists.

There are already a number of folks who are basically science deniers. You can never please them.

I am excited though that this team was able to put this together. It seems that these experiments keep pushing the boundaries on overall understanding of entanglement.
 
  • #5
Maui said:
Another conspiracy theory laid to rest. They've been beating a dead horse though.

bohm2 said:
Even if all loopholes are closed simultaneously some day, I have a feeling the battle will continue among those who want to avoid non-locality at all costs; that is, superdeterminists.

DrChinese said:
There are already a number of folks who are basically science deniers. You can never please them.
Hey, is this some gathering of trolls or what?

DrChinese said:
I am excited though that this team was able to put this together. It seems that these experiments keep pushing the boundaries on overall understanding of entanglement.
I am excited too, in a sense. This experimental result is an argument against all LHV models, resting on a valid foundation. Of course such an important result needs to be scrutinized, but if confirmed ... well, then we have a real mystery, IMO.
 
  • #6
DrChinese said:
There are already a number of folks who are basically science deniers. You can never please them.

Interesting that you put Gerard 't Hooft in that category, especially after highlighting that the 2012 Nobel Prize winner was part of the work...
 
  • #7
Quantumental said:
Interesting that you put Gerard 't Hooft in that category, especially after highlighting that the 2012 Nobel Prize winner was part of the work...

Well first off, I didn't intend to imply that all Superdeterminists are science deniers. I believe Gerard 't Hooft is well aware of the pressures Bell places on theory development, and acknowledges the Bell essentials. There is a different group of Bell attackers who essentially deny element after element of the (Bell et al) arguments against local hidden variable theories (just as there are pseudo-scientists who deny elements of evolution). I was really talking about that crew. For them, the focus changes as their arguments are torn down one by one. Joy Christian is more an example of that group. After all, you can always win an argument by denying your opponent's basic scientific tenets, and that is what I call a "science denier". (But that never leads to any useful scientific advance; otherwise a testable prediction would result.)

On the other hand, I think Superdeterminists (like 't Hooft, esteemed as he is) have yet to demonstrate that their argument IS actually scientific in a traditional sense. I assert that Superdeterminism qualifies as a religion more than science: it is a belief that A THEORY COULD EXIST that would explain something rather than a falsifiable theory itself. I would challenge 't Hooft to demonstrate any concrete element of Superdeterminism that explains the mystery of the Bell results any better than it explains why we measure c to be a constant in any reference frame. (In other words, why Superdeterminism should be invoked for one scientific area and not all others, including as an explanation of human evolution.) For example, 't Hooft says in a recent paper:

http://arxiv.org/abs/1207.3612

"Bell’s inequalities[1] and similar observations[2][3] are applied with mathematical rigor to prove that quantum mechanics will be the backbone of all theories for sub-atomic particles of the future, unless, as some string theorists have repeatedly stressed, “an even stranger form of logic might be needed”[4].

The author of this paper takes a minority’s point of view[5][6], which is that, in order to make further progress in our understanding of Nature, classical logic will have to be restored, while at the same time our mathematical skills will need to be further improved. We have reasons to believe that the mathematics of ordinary statistics can be rephrased in a quantum mechanical language and notation; indeed this can be done in a quite natural manner, such that one can understand where quantum phenomena come from."


Basically, he is developing an ad hoc theory to explain behavior that is already described by another theory (quantum mechanics), with a particular non-standard agenda (and he explicitly acknowledges this). Yet there is no explanation of why the same logic is not applied elsewhere in science. Nor does it explain the hundreds of experiments evidencing entanglement. Instead he attempts to justify his position on heuristic grounds when there are obvious counter-arguments.

So I question how this really qualifies as normal science; and to the extent it does, I would then say it IS a "science denier's" argument. I can't see any evidence his arguments (or Superdeterminism in general) are being seriously followed by the community, but I am subject to correction. About all you can say is that Superdeterminism is thrown around as an escape from Bell, without there being any serious discussion of how that could occur. You may as well qualify EVERY scientific theory as having Superdeterminism as an escape.
 
  • #8
zonde said:
Hey, is this some gathering of trolls or what?
Good remark! Did you already find the time to read that article? I hope to find time in the coming week - but that's unlikely...
 
  • #9
This is really something! Now it's almost impossible for people to hide behind loopholes to evade Bell's theorem. They would have to say something extreme, like "different loopholes explain different photon Bell tests", and that gets into superdeterministic territory, since the photons would have to somehow know in advance which loophole the experimenter is going to close in any given experiment.
 
  • #10
harrylin said:
Did you already find the time to read that article? I hope to find time in the coming week - but that's unlikely...
I found out that these two references are behind a paywall:
14. P. H. Eberhard, Background Level and Counter Efficiencies Required for a Loophole-Free
Einstein-Podolsky-Rosen Experiment, Physical Review A 47, 747–750 (1993). link
18. J. F. Clauser, M. A. Horne, Experimental consequences of objective local theories, Physical
Review D 10, 526–535 (1974). link

But it seems that the proof of the inequality that the paper uses to interpret the results is rather simple. Inequality (3) is:
[tex]J=S(\alpha_1)+S(\beta_1)+C(\alpha_2,\beta_2)-C(\alpha_1,\beta_1)-C(\alpha_1,\beta_2)-C(\alpha_2,\beta_1)\geqslant 0[/tex]

I will check my proof and then I will post it here.
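
For concreteness, here is a minimal sketch of how such a J-value would be assembled from singles and coincidence counts; all numbers are invented for illustration and are not from the paper:
Code:
# Minimal sketch: computing the Eberhard-type J-value of inequality (3)
# from singles counts S and coincidence counts C. All counts below are
# invented for illustration; they are NOT the paper's data.

S_a1 = 60000   # Alice's singles count at setting alpha1 (hypothetical)
S_b1 = 60000   # Bob's singles count at setting beta1 (hypothetical)
C = {          # coincidence counts for the four setting pairs (hypothetical)
    ("a1", "b1"): 55000,
    ("a1", "b2"): 40000,
    ("a2", "b1"): 40000,
    ("a2", "b2"): 5000,
}

J = (S_a1 + S_b1 + C[("a2", "b2")]
     - C[("a1", "b1")] - C[("a1", "b2")] - C[("a2", "b1")])

# Local realism requires J >= 0; a negative J signals a violation.
print("J =", J)  # with these invented counts: J = -10000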
 
  • #11
DrChinese said:
A team including Anton Zeilinger has performed an experiment closing the so-called "fair sampling loophole" for photons.

http://arxiv.org/abs/1212.0533

Bell violation with entangled photons, free of the fair-sampling assumption

Marissa Giustina, Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin, Anton Zeilinger

(Submitted on 3 Dec 2012) [..]
Where was it published, or which journal accepted it for publication? I don't see any indication of that...
 
  • #12
zonde said:
14. P. H. Eberhard, Background Level and Counter Efficiencies Required for a Loophole-Free
Einstein-Podolsky-Rosen Experiment, Physical Review A 47, 747–750 (1993). link
18. J. F. Clauser, M. A. Horne, Experimental consequences of objective local theories, Physical
Review D 10, 526–535 (1974). link
Attached are both papers.
 
  • #13
lugita15 said:
Attached are both papers.
Thanks a lot.
So the proof of the inequality is in the second paper, "APPENDIX A: TWO INEQUALITIES", first part. It's very simple indeed.

My version was to write down all possible combinations of detections/non-detections at the different settings and to check that no combination can produce a negative value. Like this:
Code:
S(α2) -C(α2,β1) +S(β1) -C(α1,β1) +S(α1) -C(α1,β2) S(β2) +C(α2,β2)
  0                                                                  0
                  +                                                  +
                                   +                                 +
                                                    0                0
  0        -      +                                                  0
                  +         -      +                                 +
                                   +         -      0                0
  0                                                 0       +        +
  0        -      +         -      +                                 0
                  +         -      +         -      0                0
  0                                +         -      0       +        +
  0        -      +                                 0       +        +
  0        -      +         -      +         -      0       +        0
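
The same exhaustive check can be automated. Here is a small sketch (my own illustration, not from either paper) that enumerates all 16 deterministic local assignments of detection/non-detection to the four settings and confirms that none of them yields a negative J:
Code:
# Brute-force check of inequality (3) for deterministic local models.
# Each hidden state fixes an outcome (1 = detected, 0 = not detected)
# for Alice at alpha1/alpha2 and for Bob at beta1/beta2. Per state,
# the singles term S(x) contributes x and a coincidence C(x, y)
# contributes x*y.
from itertools import product

def J(a1, a2, b1, b2):
    return a1 + b1 + a2*b2 - a1*b1 - a1*b2 - a2*b1

for a1, a2, b1, b2 in product((0, 1), repeat=4):
    assert J(a1, a2, b1, b2) >= 0, (a1, a2, b1, b2)

print("All 16 deterministic assignments give J >= 0.")

Since any local hidden-variable model is a probabilistic mixture of these deterministic assignments, J >= 0 then follows for all of them.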
 
  • #14
harrylin said:
Where was it published, or which journal accepted it for publication? I don't see any indication of that...
Arxiv is a preprint site :wink:. But you could ask to which journal it has been submitted.
And I would like to see how it will pass peer review with the standard-deviation calculation as it is presented in the paper. I would say it is invalid.
 
  • #15
zonde said:
Arxiv is a preprint site :wink:. But you could ask to which journal it has been submitted.
And I would like to see how it will pass peer review with the standard-deviation calculation as it is presented in the paper. I would say it is invalid.
OK, but then: which Physicsforums Mentor reviewed it and accepted it for discussion here? :confused:
 
  • #16
harrylin said:
OK, but then: which Physicsforums Mentor reviewed it and accepted it for discussion here? :confused:

That's not the process. It is not a black or white rule. Arxiv papers are usually acceptable as references when the results or direction are consistent with accepted science, or when the authors are well respected in their field. This paper is from a world-class team, and the result is consistent with a published paper from a Wineland team.

Lacking objections per the above, you would not be likely to get a mentor to step in against the reference. On the other hand, you are free to deny it in private. :smile:
 
  • #17
zonde said:
Hey, is this some gathering of trolls or what?
Wouldn't that qualify Zeilinger as a troll as well?
zonde said:
I am excited too, in a sense. This experimental result is an argument against all LHV models, resting on a valid foundation. Of course such an important result needs to be scrutinized, but if confirmed ... well, then we have a real mystery, IMO.
There is a mystery only if one insists that QM be local realistic. It's your obligation to prove how that should be possible, given all the evidence to the contrary.
 
  • #18
Maui said:
Wouldn't that qualify Zeilinger as a troll as well?
I don't follow your logic, sorry.

Maui said:
There is a mystery only if one insists that QM be local realistic. It's your obligation to prove how that should be possible, given all the evidence to the contrary.
Am I obligated to prove that QM can be local realistic? :eek:
Why am I obligated? And how should I do that, when it has been known since Bell's theorem that it's not possible?
 
  • #19
zonde said:
Am I obligated to prove that QM can be local realistic? :eek:
Why am I obligated? And how should I do that, when it has been known since Bell's theorem that it's not possible?
I think Maui meant that if you want to believe in local realism, the burden of proof is on you to show that local realism is compatible with the overwhelming amount of experimental evidence from Bell tests that seems to point in the other direction.
 
  • #20
I would like to discuss something from the paper.
"We used a value of ~0.3 for r and measured for a total of 300 seconds per setting at each of the four settings α1β1, α1β2, α2β1, and α2β2 described by angles α1 = 85.6°, α2 = 118.0°, β1 = −5.4°, and β2 = 25.9°."

"After recording for a total of 300 seconds per setting we divided our data into 10-second blocks and calculated the standard deviation of the resulting 30 different J-values. This yields a sigma of 1837 for our aggregate J-value of J = −126715, a 69-σ violation."

So we have four datasets recorded at different times. We calculate a J-value based on the assumption that there is a certain correspondence between the datasets. Then we divide each dataset into 30 smaller datasets and calculate J-values using the four smaller datasets.
But that gives no idea of how good the correspondence between the four datasets is. PDC sources require high-power lasers, and as a result drifts are nothing unexpected for such sources.

So it seems that separate runs with the same settings should be compared in order to talk about meaningful error estimates.

Or to make it clearer, we can look at it from a slightly different side. Two runs, α1β1 and α1β2, both record the S(α1) value. So which one is used in the calculations? And how big is the difference between the two (assuming that the length of a dataset is determined by time and not by single counts)?
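
To make the blocking procedure concrete, here is a toy sketch of one plausible reading of the paper's error estimate, using simulated Poissonian counts rather than real data (all means below are hypothetical, and this is not the authors' actual code):
Code:
# Toy sketch of the block-based sigma estimate described in the quote.
# Simulated Poisson counts, NOT the experiment's data; one plausible
# reading of the procedure, not a definitive reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n_blocks = 30  # 300 s per setting, split into 10-s blocks

# Hypothetical mean counts per 10-s block for each term of J.
means = {"S_a1": 2000, "S_b1": 2000, "C_a2b2": 150,
         "C_a1b1": 1800, "C_a1b2": 1300, "C_a2b1": 1300}

# One J-value per block from independently fluctuating counts.
blocks = {k: rng.poisson(m, n_blocks) for k, m in means.items()}
J_blocks = (blocks["S_a1"] + blocks["S_b1"] + blocks["C_a2b2"]
            - blocks["C_a1b1"] - blocks["C_a1b2"] - blocks["C_a2b1"])

J_total = J_blocks.sum()
# Treating the 30 block J-values as independent, the aggregate sigma
# is the block standard deviation scaled by sqrt(n_blocks).
sigma = J_blocks.std(ddof=1) * np.sqrt(n_blocks)
print(f"J = {J_total}, sigma = {sigma:.0f}, a {-J_total/sigma:.1f}-sigma violation")

Note that this sketch pairs block i of one setting with block i of the others and treats the blocks as independent; since the four settings were actually recorded in separate runs at different times, that independence (and the absence of drift between runs) is exactly the assumption questioned above.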
 
  • #21
DrChinese said:
That's not the process. It is not a black or white rule. [..]
Actually it is a black and white rule:

"References that appear only on http://www.arxiv.org/ (which is not peer-reviewed) are subject to review by the Mentors. "

The motivation for internal review is also given:
"We recognize that in some fields this is the accepted means of professional communication, but in other fields we prefer to wait until formal publication elsewhere."
- https://www.physicsforums.com/showthread.php?t=414380

If they let this go, then they set a precedent that it is OK to bend the rules to suit one's personal opinion.
And why are you so impatient? If the paper is good enough in this form, then it will likely soon be published. But if not, then it's not suited for this forum in its current form, and we'll hopefully get a better one to discuss later.
 
  • #22
Thread closed for moderator review.
 

What is the "Fair Sampling Loophole" in relation to photons?

The "fair sampling loophole" refers to the possibility that, in a Bell test with inefficient detectors, the photons actually detected are not a representative sample of all the photons emitted. If so, a Bell violation in the detected subset would not imply a violation for the full ensemble, and a local realistic model could still account for the results.

How was the "Fair Sampling Loophole" closed for photons?

In the experiment discussed in this thread (Giustina et al., arXiv:1212.0533), the team combined entangled photons with high-efficiency superconducting detectors and an Eberhard-type Bell inequality. The overall detection efficiency was high enough that the inequality was violated without assuming that the sample of measured photons accurately represents the entire ensemble.

What impact does closing the "Fair Sampling Loophole" have on quantum mechanics research?

It removes one of the main assumptions that local realistic models could exploit to explain Bell-test data, strengthening the experimental case against local realism. As the authors note, it is also relevant to applications such as one-sided device-independent quantum key distribution.

Can the "Fair Sampling Loophole" be closed for other particles besides photons?

Yes. A Bell inequality had already been violated free of the fair-sampling assumption using Be+ ions in 2001, in an experiment whose team included 2012 Nobel prize winner David Wineland. The specific method depends on the type of particle being studied.

Are there any potential limitations or challenges in closing the "Fair Sampling Loophole" for other particles?

The main challenge is reaching a high enough overall detection efficiency, and each particle type has its own detector technologies and loss mechanisms. Moreover, closing the fair-sampling loophole alone is not sufficient: a fully loophole-free test must also close the locality loophole in the same experiment.
