Did they do a loophole-free Bell test?

  • Context: Graduate
  • Thread starter: Nick666
  • Tags: Bell Test

Discussion Overview

The discussion centers around the status of loophole-free Bell tests in quantum physics, exploring whether such experiments have been successfully conducted and the implications of closing various loopholes. Participants discuss the theoretical and experimental aspects of Bell's inequalities, including the challenges associated with closing the communication, fair-sampling, and free will loopholes.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Experimental/applied

Main Points Raised

  • Some participants note that no complete loophole-free Bell test has been conducted yet, with only partial loophole closures reported in previous experiments.
  • Others mention that several research groups are planning or have begun experiments aimed at closing all loopholes, suggesting that results may be forthcoming soon.
  • One participant discusses the challenges of closing the communication and fair-sampling loopholes simultaneously due to their conflicting requirements regarding detector placement and photon loss.
  • Another participant highlights that while Bell inequalities are routinely violated, there is a general expectation that closing all loopholes will not change the observed outcomes, which raises questions about the necessity of such tests.
  • Some participants express interest in the implications of loophole-free violations for quantum cryptography, particularly in establishing secure keys without trusting measurement devices.
  • Concerns are raised regarding the clarity and completeness of recent experimental reports, with calls for more transparency in the methodologies used.
  • Discussion includes references to specific experimental setups and the role of entanglement swapping in achieving desired outcomes.

Areas of Agreement / Disagreement

Participants generally agree that complete loophole-free Bell tests have not yet been achieved, but there is no consensus on the implications of this status or the necessity of closing all loopholes. Multiple competing views on the significance of these tests and their outcomes remain present in the discussion.

Contextual Notes

Participants express uncertainty regarding the definitions and implications of various loopholes, as well as the specific methodologies used in recent experiments. There are also unresolved questions about the relationship between measurement outcomes and theoretical predictions in quantum mechanics.

Nick666
Did they manage to do a loophole-free Bell test? The best I got from Google was an article from February that says no; they only did one where 2 out of 3 loopholes were eliminated in a single test.
 
I don't think it has been done yet, but I know a number of groups are planning to do such an experiment in the near future (some might already have started). In at least one case I am aware of, the modification to their setup should be more or less trivial (e.g. moving part of their setup to the opposite side of campus), so it shouldn't really be that hard (famous last words...)

Hence, I wouldn't at all be surprised if something is published in the next few months.
 
I know about two loopholes. These are closed in two separate experiments:
Violation of Bell's inequality under strict Einstein locality conditions
Bell violation with entangled photons, free of the fair-sampling assumption

The third might be the "free will" loophole, but I'm not sure it can be exploited in any scientific model. Maybe the idea of closing the "free will" loophole is to eliminate the possibility of a poor random number generator.

Closing the communication and fair-sampling loopholes in one experiment might take some time, as they have conflicting requirements. Closing the fair-sampling loophole requires that photons are not lost, so you want the detectors close to the source; but closing the communication loophole requires considerable distance between source and detectors, and that of course increases photon losses unless you can perform the experiment in vacuum.
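To put rough numbers on this trade-off, here is a minimal back-of-the-envelope sketch. The figures are assumptions on my part, not from this thread: standard telecom fibre loses roughly 0.2 dB/km at 1550 nm, and the detection-efficiency thresholds usually quoted for closing the fair-sampling loophole are about 82.8% for CHSH with a maximally entangled state, or about 66.7% using Eberhard's inequality with a non-maximally entangled state.

```python
import math

# Assumed figures (illustrative, not from this thread): ~0.2 dB/km fibre
# loss at 1550 nm; detection-efficiency thresholds of 2(sqrt(2)-1) ~ 82.8%
# (CHSH, maximally entangled state) and 2/3 ~ 66.7% (Eberhard inequality,
# non-maximally entangled state).
LOSS_DB_PER_KM = 0.2
CHSH_THRESHOLD = 2 * (math.sqrt(2) - 1)  # ~0.828
EBERHARD_THRESHOLD = 2 / 3               # ~0.667

def transmission(km):
    """Fraction of photons surviving `km` of fibre."""
    return 10 ** (-LOSS_DB_PER_KM * km / 10)

for km in (1, 5, 10, 50):
    print(f"{km:3d} km: transmission {transmission(km):.1%}")
# ->  1 km: 95.5%,  5 km: 79.4%,  10 km: 63.1%,  50 km: 10.0%
# Even a few km of fibre (needed for spacelike separation) eats the whole
# efficiency budget before imperfect detectors are even counted.
```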
 
This says that in the same experiment they closed the locality and freedom-of-choice loopholes.
 
I thought there was going to be a flood of answers. Isn't this topic the hottest and most important thing in quantum physics right now?
 
Nick666 said:
I thought there was going to be a flood of answers. Isn't this topic the hottest and most important thing in quantum physics right now?
Not really. I mean, Bell inequalities are being violated routinely these days. (Almost) Nobody believes that if all loopholes are closed the experimental outcome will be different; everybody still expects to see the same Bell violation. If we did close all the loopholes, and we saw to our surprise that now we don't get any Bell violation, that would mean that quantum theory is wrong, since the latter predicts a violation. But nobody believes that the theory is wrong, at least not at the low energies at which Bell experiments are conducted. Therefore, nobody cares!

But some people do care about closing all the loopholes... but for other reasons...
In the field of quantum cryptography, a new sub-field has emerged over the past ten years: so-called Device-Independent Quantum Key Distribution. There it has been shown that a loophole-free violation of a Bell inequality is important so that a secure key can be established between two parties even if the measurement devices of each party are not themselves trusted (e.g. they may have been hacked). Therefore, loophole-free violations do offer great technological advantages.
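To make the cryptographic payoff concrete, here is a small numerical sketch using one well-known bound from the DIQKD literature, the Acín et al. key rate against collective attacks (PRL 98, 230501, 2007). The function names are mine; S is the observed CHSH value and Q the quantum bit error rate.

```python
import math

def h(p):
    """Binary entropy in bits; input clamped to [0, 1] for float safety."""
    p = min(max(p, 0.0), 1.0)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def diqkd_rate(S, Q):
    """Acin et al. (2007) collective-attack key-rate lower bound."""
    return 1 - h(Q) - h((1 + math.sqrt((S / 2) ** 2 - 1)) / 2)

print(diqkd_rate(2 * math.sqrt(2), 0.0))  # ideal Tsirelson violation -> 1.0
print(diqkd_rate(2.4, 0.05))              # noisy example -> ~0.06 bits/trial
```

The point of the formula: the key rate is positive only when S exceeds the local bound of 2, which is why a loophole-free violation is exactly what certifies the untrusted devices.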
 
JK423 said:
Not really. I mean, Bell inequalities are being violated routinely these days. (Almost) Nobody believes that if all loopholes are closed the experimental outcome will be different; everybody still expects to see the same Bell violation.

So why don't we move on to explanations for the violations? In posts 214 and 219 here:
https://www.physicsforums.com/threads/von-neumann-qm-rules-equivalent-to-bohm.816876/page-11
@vanhess71 shows an explanation (non-local correlations) for the 100% perfect correlations when detector settings are aligned.
From here, is there an explanation for some of the Bell inequality violations when detector settings at A and B are not aligned?
 
morrobay said:
So why don't we move on to explanations for the violations? In posts 214 and 219 here:
https://www.physicsforums.com/threads/von-neumann-qm-rules-equivalent-to-bohm.816876/page-11 @vanhess71 shows an explanation for the 100% perfect correlations when detector settings are aligned. So from here can there be a progression to understanding the violations when detector settings at A and B are not aligned?
Bell inequalities specify the limit of correlations that vanhess71-type models can reach when detector settings at A and B are not aligned. To state it more directly: vanhess71-type models cannot violate Bell inequalities.
 
  • #10
gill1109 said:
http://www.nature.com/news/quantum-spookiness-passes-toughest-test-yet-1.18255 (paper posted on arXiv, not yet peer reviewed)
I have not studied the paper in detail, but would like to make some comments based on what the authors write in their article.

First, it was noted in another thread that the probability $p = 0.019/0.039$ is not very impressive.

Second, authors write: "Our observation of a loophole-free Bell inequality violation thus rules out all local theories that accept ... that the outputs are final once recorded in the electronics." On the other hand, as I wrote here a few times, unitary evolution of quantum mechanics is, strictly speaking, incompatible with final outcomes of measurement, as far as I understand (for example, due to Poincare recurrence). Therefore, the authors' experimental results can only rule out local realistic theories that predict deviations from unitary evolution. For example, the local realistic theories of my article http://link.springer.com/content/pdf/10.1140/epjc/s10052-013-2371-4.pdf (Eur. Phys. J. C (2013) 73:2371) have the same evolution as unitary evolution of some quantum field theories.
 
  • #11
I hope the referees push them to be clearer in their descriptions. There is a lot hidden between the lines; I've already identified some of these issues in the other, closed thread. For example, are the "settings" different from the randomly chosen microwave pulses which generate the entangled photons?

For another description of the experiment, see the diagram at https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/. In usual CHSH setups, Alice and Bob each have 2 settings [1,2] which they randomly switch between. In this experiment, those appear to be the microwave pulses. These pulses excite the crystals to produce photons which are entangled with the electrons. Both photons are then sent to station C, where they are post-selected in order to find an ensemble for which the electrons at A and B could be considered entangled (this is what entanglement swapping is all about, the two electrons being previously unentangled). Some time after the photons have left to be "filtered" at C, but before any signal can travel from C back to A and B, the states of the electrons at A and B are "read out". Only those results at A and B which correspond to successful filtration at C are kept; everything else is discarded. This corresponds to a success rate of 6.4e-9.

My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.
 
  • #12
billschnieder said:
I hope the referees push them to be clearer in their descriptions. There is a lot hidden between the lines; I've already identified some of these issues in the other, closed thread. For example, are the "settings" different from the randomly chosen microwave pulses which generate the entangled photons?

For another description of the experiment, see the diagram at https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/. In usual CHSH setups, Alice and Bob each have 2 settings [1,2] which they randomly switch between. In this experiment, those appear to be the microwave pulses. These pulses excite the crystals to produce photons which are entangled with the electrons. Both photons are then sent to station C, where they are post-selected in order to find an ensemble for which the electrons at A and B could be considered entangled (this is what entanglement swapping is all about, the two electrons being previously unentangled). Some time after the photons have left to be "filtered" at C, but before any signal can travel from C back to A and B, the states of the electrons at A and B are "read out". Only those results at A and B which correspond to successful filtration at C are kept; everything else is discarded. This corresponds to a success rate of 6.4e-9.

My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.
The settings are chosen by a quantum RNG (http://arxiv.org/pdf/1506.02712v1.pdf), so it's an "independent" piece of quantum optics/electronics. Personally, I would have preferred a state-of-the-art pseudo-RNG; I don't know if they can be fast enough. It would be fine by me even if the pseudo-random settings were generated in advance and read from a file as needed. The point is to make it ludicrous (contrary to Occam's razor) that, by some kind of conspiratorial and unknown physics, Alice's spin measurement somehow already "knows" Bob's setting.

Entanglement swapping is not "post-processing". The central location (C) preserves a record of which pairs of measurements (at A and B) are interesting to look at. Sure, you only go and fish out those pairs after the experiment is done; at some point you have to look at the correlations between the experimental records generated at A, B and C. The timing is very delicate and has to be checked very carefully; that's what the referees have to look at closely. But the "marks" saying which trials count were made *before* the corresponding measurements were done.
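A minimal sketch of the file-based alternative mentioned above (everything here is illustrative: the seed, file names, trial count and Python's Mersenne Twister merely stand in for whatever state-of-the-art PRNG one would actually use):

```python
import random

# Illustrative sketch only: pre-commit both stations' setting bits with a
# deterministic, seeded PRNG, publish the seed before the run, and during
# the run have each station merely read its next bit from the file.
N_TRIALS = 100_000
rng = random.Random(20151021)  # seed fixed and published in advance

for station in ("alice", "bob"):
    bits = "".join(str(rng.randrange(2)) for _ in range(N_TRIALS))
    with open(f"{station}_settings.txt", "w") as f:
        f.write(bits)

# During the experiment, station A just streams its pre-committed bits:
with open("alice_settings.txt") as f:
    alice_settings = [int(b) for b in f.read()]
```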
 
  • #13
gill1109 said:
Entanglement swapping is not "post-processing".
Apologies, I see this is somewhat tangential to the thread topic itself, but I just have to object here. Entanglement swapping is completely post-processing. The correlations between A and B are noticeable and interesting only if you post-select samples where C decided to do a measurement, and even then you have to take C's measurement result into account to see whether it means correlation or anti-correlation.
 
  • #14
georgir said:
Apologies, I see this is somewhat tangential to the thread topic itself, but I just have to object here. Entanglement swapping is completely post-processing. The correlations between A and B are noticeable and interesting only if you post-select samples where C decided to do a measurement, and even then you have to take C's measurement result into account to see whether it means correlation or anti-correlation.
I agree that the calculation of correlations is done *post experiment*, but the calculated correlations already exist from the moment that the outcomes at A, B and C exist. And indeed the timing is very important: you have to rule out not only that A's settings could have influenced B's outcomes, but also that C's "seal of approval" could have influenced A's and B's settings. Which the authors do, in their paper.

And C's measurement result, on the basis of which the A and B data get selected, is quite simple: a photon is detected, one in each channel.
 
  • #15
billschnieder said:
My suspicion is that the post-processing or "entanglement swapping" will be the key to unlock this experiment.

Although this only repeats what Gill said above: entanglement swapping is "processing", but you shouldn't confuse it with "post-selection". Regardless of how frequently the right circumstances occur, each event-ready occurrence both causes and heralds a Bell pair being created in another spacetime region.

And it would take chutzpah to claim this setup doesn't disprove local realism when the entanglement is performed non-locally to the A and B measurements (and to the selection of measurement settings). The local realist, after all, would say that what happens at C can have no bearing on any measurement outcome at A or B, i.e. that there is no such physical state as entanglement! So why would one group of A's and B's exhibit measurement correlations differently than another, based on what is done at C? A sample of entangled pairs of electrons gives a different correlation rate (perfect correlations in the ideal case) than an un-entangled sample.
 
  • #16
DrChinese said:
Although this only repeats what Gill said above: entanglement swapping is "processing", but you shouldn't confuse it with "post-selection".
You will find that it is indeed post-processing by selection of sub-ensembles. Take four ensembles A, B, C, D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D, and no entanglement between the AB and CD pairs. You measure B and C together, and based on the joint result of B and C you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

when the entanglement is performed non-locally to the A and B measurements (and to the selection of measurement settings).
The swapping is done non-locally to A and B, but it uses information from A and B to do the post-selection. A key question is whether, in this experiment, the microwave pulses are "settings" or not.
 
  • #17
billschnieder said:
The swapping is done non-locally to A and B, but it uses information from A and B to do the post-selection. A key question is whether, in this experiment, the microwave pulses are "settings" or not.
Yes, the microwave pulses are "settings".
But the information from A and B used for post-selection is only about the initial state of A and B, not the later-generated measurement settings.
They say: "As can be seen in the spacetime diagram in Fig. 2a, we ensure that this event-ready signal is space-like separated from the random input bit generation at locations A and B."
 
  • #18
gill1109 said:
The settings are chosen by a quantum RNG (http://arxiv.org/pdf/1506.02712v1.pdf), so it's an "independent" piece of quantum optics/electronics. Personally, I would have preferred a state-of-the-art pseudo-RNG; I don't know if they can be fast enough. It would be fine by me even if the pseudo-random settings were generated in advance and read from a file as needed. The point is to make it ludicrous (contrary to Occam's razor) that, by some kind of conspiratorial and unknown physics, Alice's spin measurement somehow already "knows" Bob's setting.
I agree with this. Using a quantum RNG is a sort of begging-the-question fallacy (we assume the quantum RNG behaves as QT says in order to test what QT says).
 
  • #19
Let me describe the traditional Bell-CHSH experiment and the entanglement-swapping version through computer analogies (known as the "Bell game").

Standard Bell game
==================

You control three computers A, B, C
You may write computer programmes for them.
Many times, the following happens:

Computer C sends messages to A and B
No more communication allowed between A, B and C
I toss two fair coins and deliver outcomes (H/T) to A and B
A and B output binary outcomes +/-1

Your aim: when A and B both receive "H" the outcomes should be the same (both +1 or both -1)
When either or both of A and B receive "T" the outcomes should be different (+1 and -1)

We call each of these exchanges a "trial". Each trial, you either win or lose
(you either achieve your per-trial aim or you don't).

The whole game: your overall aim is to "win" statistically significantly more than 75% of the trials (say: 80%, with N pretty large). Bell says it can't be done. (Well - just once in a while it could happen purely by chance, obviously, but you can't write those computer programs so that you systematically win).
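To make the 75% claim concrete, here is a little brute-force check (a sketch of my own, not from any paper). It is enough to enumerate deterministic pairs of programs, since any messages from C or shared randomness can only produce mixtures of such pairs:

```python
from itertools import product

COINS = ["H", "T"]

def wins(a_out, b_out, x, y):
    # Win: equal outputs iff both coins are H; different outputs otherwise.
    return (a_out == b_out) == (x == "H" and y == "H")

best = 0.0
for a_strat in product([+1, -1], repeat=2):      # A's output per coin
    for b_strat in product([+1, -1], repeat=2):  # B's output per coin
        score = sum(
            wins(a_strat[i], b_strat[j], x, y)
            for i, x in enumerate(COINS)
            for j, y in enumerate(COINS)
        ) / 4
        best = max(best, score)

print(best)  # -> 0.75: no pair of local programs wins more than 75%
```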
Modified Bell game
==================

You control three computers A, B, C
You may write computer programmes for them.
Many times, the following happens:

Computers A and B send messages to C
No more communication allowed between A, B and C
I toss two fair coins and deliver outcomes (H/T) to A and B
A and B output binary outcomes +/-1
C delivers a binary outcome "go"/"no-go"

Your aim: when C outputs "go" *and* A and B both receive "H" the outcomes should be the same (both +1 or both -1)
When C outputs "go" *and* either or both of A and B receive "T" the outcomes should be different (+1 and -1)

We call each of these exchanges a "trial". Each trial in which C says "go", you either win or lose
(you either achieve the aim or you don't).

The whole game: your aim is to "win" statistically significantly more than 75% of the trials for which C said "go" (say: 80%, with N pretty large). Bell says it can't be done. (Well - just once in a while it could happen purely by chance, obviously, but you can't write those computer programs so that you systematically win).
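The same kind of sketch for the modified game (again purely illustrative): since A's and B's messages to C, and hence C's go/no-go verdict, are fixed before the coins are tossed, conditioning on "go" only selects a subset of the shared-randomness values, and on each of those the trial is just an instance of the standard game. A random search over such strategies never beats 75% on the "go" trials:

```python
import random
from itertools import product

rng = random.Random(0)
COINS = [0, 1]  # 0 = H, 1 = T

def win(a_out, b_out, x, y):
    return (a_out == b_out) == (x == 0 and y == 0)

best = 0.0
for _ in range(20_000):
    n_lambda = 4  # four equally likely values of the shared randomness
    # Per lambda: output tables for A and B, and C's pre-committed verdict
    # (C sees only the messages, i.e. lambda, never the coins).
    a_tab = [[rng.choice([-1, 1]) for _ in COINS] for _ in range(n_lambda)]
    b_tab = [[rng.choice([-1, 1]) for _ in COINS] for _ in range(n_lambda)]
    go = [rng.choice([False, True]) for _ in range(n_lambda)]
    if not any(go):
        continue
    n_wins = n_trials = 0
    for lam in range(n_lambda):
        if not go[lam]:
            continue
        for x, y in product(COINS, COINS):
            n_trials += 1
            n_wins += win(a_tab[lam][x], b_tab[lam][y], x, y)
    best = max(best, n_wins / n_trials)

print(best)  # stays at 0.75 no matter how long you search
```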
 
  • #20
Gill, are you trying to formulate a "Dr. Chinese challenge" for the swapping experiment?
I may be misunderstanding either you or the entanglement-swapping experiment itself, but I don't think your "games" are a good description of what happens. Remember, even though A and B do not have anything shared between them, they do each have an entangled pair shared with C.
 
  • #21
With any random pair of binary data there is either correlation or anti-correlation. C is able to compare both and say which it is going to be. C does not literally create the entanglement, and his actions do not have some magical effect that changes anything between A and B.

Edit: Still, I am not saying this can be explained classically or with hidden variables. It may still very well violate some inequality or something, just because the entanglement that exists in the A-C and B-C pairs violates it. But not because C does anything to A and B.
 
  • #22
georgir said:
Gill, are you trying to formulate a "Dr. Chinese challenge" for the swapping experiment?
I may be misunderstanding either you or the entanglement-swapping experiment itself, but I don't think your "games" are a good description of what happens. Remember, even though A and B do not have anything shared between them, they do each have an entangled pair shared with C.
I think these two games are a good description of what "local realism" allows to happen, both in a traditional Bell-CHSH experiment (particles leave the source C and go to two locations A and B) and in the new generation of experiments using entanglement swapping (particles leave locations A and B and meet at C, where they are measured and where a selection of "favourable" situations occurs). The whole point is that under local realism we can't expect a better than 75% success rate in either game. However, replace my computers A, B and C with quantum devices and quantum communication, and in theory we could have an 85% success rate. (The Delft experiment had an 80% success rate.)

Obviously, once we have selected pairs of particles at A and B on the basis of a particular outcome of some measurement at C of particles which came from A and B, we can create statistical dependence between subsequent measurement outcomes at A and B. The exercise for the student is to understand why, for both games, 75% is the best success rate you can hope for. Do it for the simpler (conventional) game first; then figure out how to adapt your solution to the newer game.

Of course this doesn't help you understand how quantum mechanics can be harnessed to achieve an 85% success rate. The whole point of Bell's theorem is that there is no way we can understand this in traditional terms, i.e. with local hidden variables, free of conspiracy or superdeterminism.

Incidentally, I posed a computer challenge more than 10 years ago, first to Bell-denier Accardi, later to Hess and Philipp, and later to others. And I figured out how to set up the challenge so that I would have only a tiny chance of losing a 5000 Euro bet, even if my opponent used memory and time variation. This bit of probability theory is what the Delft experimenters are actually using in their "paranoid" analysis of the experiment. http://arxiv.org/abs/quant-ph/0110137, http://arxiv.org/abs/quant-ph/0301059, http://arxiv.org/abs/1207.5103

By the way, nobody is saying that what happens at C alters what is going on at A and B. We don't believe in action at a distance. Quantum-mechanical entanglement can't be used to send signals faster than the speed of light, or even to help increase a classical communication rate ("information causality": Pawlowski et al.).
 
  • #23
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In the most understandable explanations I've seen, you use e.g. 0 deg and +/-30 deg, and have to have a 25% difference between 0 and +/-30 but a 75% difference between +30 and -30, etc.
 
  • #24
georgir said:
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In the most understandable explanations I've seen, you use e.g. 0 deg and +/-30 deg, and have to have a 25% difference between 0 and +/-30 but a 75% difference between +30 and -30, etc.
Please learn about the CHSH inequality and read Bell (1981), "Bertlmann's socks". Alice and Bob each choose between two measurement directions: Alice between 0 and 90 degrees, Bob between 45 and 135. IMHO it's just as easy to understand as stories where Alice and Bob each have three measurement directions.

Remember that the original Bell (1964) inequality also built in an assumption of perfect anti-correlation when Alice and Bob used the same measurement settings. If you tested that assumption experimentally, you would find that it's not exactly true. So the original Bell inequality was rapidly discarded in favour of CHSH, as far as real experiments are concerned. For tutorial introductions, there are advantages and disadvantages to both versions...

The idea of a Bell game goes back a long time. Already in the 80's people understood that Bell's inequality is driven by elementary logic and probability, not by calculus. (NB Bell's original inequality is a special case of CHSH, which results by setting one of the four correlations equal to +/-1).
 
  • #25
gill1109 said:
Please learn about the CHSH inequality and read Bell (1981), "Bertlmann's socks". Alice and Bob each choose between two measurement directions: Alice between 0 and 90 degrees, Bob between 45 and 135. IMHO it's just as easy to understand as stories where Alice and Bob each have three measurement directions.

Remember that the original Bell (1964) inequality also built in an assumption of perfect anti-correlation when Alice and Bob used the same measurement settings. If you tested that assumption experimentally, you would find that it's not exactly true. So the original Bell inequality was rapidly discarded in favour of CHSH, as far as real experiments are concerned. For tutorial introductions, there are advantages and disadvantages to both versions...

The idea of a Bell game goes back a long time. Already in the 80's people understood that Bell's inequality is driven by elementary logic and probability, not by calculus. (NB Bell's original inequality is a special case of CHSH, which results by setting one of the four correlations equal to +/-1).
georgir said:
I am not quite sure how to interpret your game. What do the messages and coins represent?
It is my impression that in order to get a Bell test you need to consider at least 3 different measurement bases.
In the most understandable explanations I've seen, you use e.g. 0 deg and +/-30 deg, and have to have a 25% difference between 0 and +/-30 but a 75% difference between +30 and -30, etc.
The messages represent particles or, if you prefer, the physical transmission of information. The coins represent random choices of measurement settings. Each computer represents a source of particles or a measurement device (a destination of particles). The computer programs running on the computers represent pieces of a local hidden-variables theory, so the messages might just contain the values of all the hidden variables in the theory we are simulating.
 
  • #26
Ok, I hate the feeling that I'm missing something obvious, but apparently I am. I can't see how using only 0, 45 and 90 can work to formulate a Bell test: 0 and 90 are always perfectly anti-correlated, and 45 or 135 is always 1/2-correlated with either of them... any choice between those pairs of settings seems pointless to me, as it does not affect the 1/2 correlation. Anyway, I'll try to find more time and read up on CHSH in the future.
 
  • #27
georgir said:
Ok, I hate the feeling that I'm missing something obvious, but apparently I am. I can't see how using only 0, 45 and 90 can work to formulate a Bell test: 0 and 90 are always perfectly anti-correlated, and 45 or 135 is always 1/2-correlated with either of them... any choice between those pairs of settings seems pointless to me, as it does not affect the 1/2 correlation. Anyway, I'll try to find more time and read up on CHSH in the future.

Maybe we have to halve these angles; are we talking about spin or about polarization? These are the settings which result in correlations equal to $\pm 1/\sqrt{2}$. When we add one positive correlation and subtract three negative ones, we get $4/\sqrt{2} = 2\sqrt{2} = 2.828\ldots$ (the Tsirelson bound, the best that QM can do).

See page 14 of https://cds.cern.ch/record/142461/files/198009299.pdf (CERN preprint of Bell's "Bertlmann's socks")
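A quick numerical check of this arithmetic, as a sketch assuming the spin-1/2 singlet correlation E(a, b) = -cos(a - b); for polarization, halve the angles and use cos(2(a - b)), as discussed above:

```python
import math

def E(a_deg, b_deg):
    """Singlet correlation for spin measurement directions a, b (degrees)."""
    return -math.cos(math.radians(a_deg - b_deg))

a1, a2 = 0, 90    # Alice's two settings
b1, b2 = 45, 135  # Bob's two settings

# One positive correlation minus the three negative ones:
S = E(a1, b2) - E(a1, b1) - E(a2, b1) - E(a2, b2)
print(S)  # 2.828... = 2*sqrt(2), the Tsirelson bound
```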
 
  • #28
Ok, halving those angles makes sense; I should've seen that... And I see now how QM wins your game cos(22.5 deg)^2, or 85%, of the time. This is cool.
EDIT: I still don't understand what the go/no-go part is, though.
 
  • #29
georgir said:
Ok, halving those angles makes sense; I should've seen that... And I see now how QM wins your game cos(22.5 deg)^2, or 85%, of the time. This is cool.
EDIT: I still don't understand what the go/no-go part is, though.
The go/no-go part is a way to implement the strict timing requirements. We want to do measurements at the two locations A and B, very rapidly, far apart, and we want to engineer that the two parts of a two-component quantum system are localized at those times and places when we do the measurements. How to arrange that? It turns out to be very difficult to get quantum systems at distant locations A and B into the good entangled state at the same *prespecified* time, "to order", i.e. at a time chosen and fixed in advance. The entanglement-swapping method is just one of many tricks which finesse this difficulty.
 
  • #30
georgir said:
EDIT: I still don't understand what the go/no-go part is, though.

The go/no-go stuff is there to ensure that we commit to counting a pair in our final results BEFORE we've performed the measurements. That's why this experiment closes the "detection loophole".

Previous experiments left open the possibility that the measurement altered the states of the particles in such a way that we'd fail to count pairs subjected to one measurement more often than we fail to count pairs subjected to the other - in terms of Gill's games, we'd cheat by throwing out some of the trials in which we didn't make the winning play. Here, we're committed to counting a trial before we know the outcome, so that form of cheating is precluded.
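A toy illustration of the cheat being precluded (the strategy and numbers are purely illustrative): if a local model is allowed to decide, after seeing its setting, that a particle went "undetected", and those trials are silently dropped, the surviving sample can win far more than 75%:

```python
import random

# Toy model of the detection-loophole cheat: B reports "no detection"
# whenever its coin is T, and those trials are discarded. The surviving
# (post-selected) sample then wins 100%, although the model is purely local.
rng = random.Random(1)
n_wins = n_counted = 0
for _ in range(100_000):
    x, y = rng.randrange(2), rng.randrange(2)  # coins: 0 = H, 1 = T
    if y == 1:
        continue                 # B's detector "fails"; trial dropped
    a_out = +1 if x == 0 else -1
    b_out = +1
    n_counted += 1
    n_wins += (a_out == b_out) == (x == 0 and y == 0)

print(n_wins / n_counted)  # -> 1.0 on the half of the trials that survive
```

Committing to count every "go" trial before the outcomes exist is exactly what rules this strategy out.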
 
