Did they do a loophole-free Bell test?

  • Thread starter: Nick666
  • Tags: Bell Test
Summary
The discussion centers on the status of loophole-free Bell tests, with participants noting that while some experiments have closed two out of three loopholes, a fully loophole-free test has not yet been achieved. Several research groups are reportedly planning or have begun such experiments, which are considered significant for quantum cryptography, particularly in establishing secure keys without trusting measurement devices. The conversation also touches on the complexities of closing all loopholes simultaneously, as conflicting requirements complicate experimental design. Participants express skepticism about the necessity of closing all loopholes, as many believe that existing violations of Bell's inequality will remain consistent regardless. Overall, the topic remains a critical area of inquiry in quantum physics, especially regarding its implications for theories of local realism and quantum mechanics.
  • #31
georgir said:
EDIT: I still don't understand what's the go/no go part though.
QM predicts a violation of the CHSH inequality when the observed pairs are entangled. Ensuring that pairs are entangled is notoriously difficult, particularly when the distance between them increases. This experiment has the nice property that one can generate a signal that tells us whether the particles are entangled or not (the go/no-go), just before the settings are randomly chosen and the results read out (or, to be more precise, the signal is recorded outside the light cone of the readout). This means we can discard all the uninteresting unentangled pairs, which would otherwise just add random noise to the correlations.
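To put rough numbers on that, here is a minimal sketch (plain Python; the analyzer angles and the mixing fraction are my own illustrative choices, not taken from the experiment) of how the CHSH statistic behaves for singlet pairs versus a sample diluted with unentangled pairs:

```python
import math

# Singlet-state correlation for analyzer angles a, b (radians): E = -cos(a - b).
def E_entangled(a, b):
    return -math.cos(a - b)

# Unentangled (random) pairs contribute zero correlation on average, so a
# fraction f of entangled pairs simply scales the correlations down.
def E_mixture(a, b, f):
    return f * E_entangled(a, b)

# Standard CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a1, a2, b1, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

def S(f):
    return abs(E_mixture(a1, b1, f) - E_mixture(a1, b2, f)
               + E_mixture(a2, b1, f) + E_mixture(a2, b2, f))

print(round(S(1.0), 3))  # ~2.828 = 2*sqrt(2): all pairs entangled, clear violation
print(round(S(0.6), 3))  # ~1.697: diluted below the local-realist bound of 2
```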
 
Last edited:
  • #32
  • #33
billschnieder said:
And the photons are produced by the microwave pulses hitting the crystals (https://d1o50x50snmhul.cloudfront.net/wp-content/uploads/2015/08/09_12908151-800x897.jpg)? And the post-selection is based on the photons?
No, the photons used for signalling that "the event is ready" are not produced by the microwave pulse that reads out the spin. You should read the paper, and in particular the long texts accompanying the figures.
 
  • #34
billschnieder said:
You will find that it is indeed post-processing by selection of sub-ensembles. You take four ensembles A,B,C,D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D. No entanglement between AB and CD pairs. You measure B and C together and based on the joint result of B and C, you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

1. Of course there is initial entanglement of AB and CD before the entanglement swap to make AD entangled. (You said: "No entanglement between AB and CD pairs.")

2. Of course there is entanglement swapping (which doesn't even exist if you are a local realist). If you simply perform the same measurements on B and C without bringing them together for the swap, nothing happens to cause AD to be entangled. Just route them through separate beam splitters and wait for the otherwise similar signature.
billschnieder said:
You will find that it is indeed post-processing by selection of sub-ensembles. You take four ensembles A,B,C,D of particles, with every particle in A entangled with a sibling in B, and every particle in C entangled with a sibling in D. No entanglement between AB and CD pairs. You measure B and C together and based on the joint result of B and C, you select a subset of A and D that would now be entangled with each other. It is obviously post-selection.

If it were post selection, then you could get the same result by looking for the same arrival "signature" without* allowing the swapping to occur. That cannot happen. Unless the photons are brought together indistinguishably, there is no swapping. No swapping, no Bell inequality violation.

*Just bring them near to each other so the timing is the same when they go through the beam splitter and are detected. Apparently you don't see that the swapping of B&C is causing the entanglement, and yet you acknowledge that the ready events are the ones that lead to Bell inequality violation.
 
  • #35
Heinera said:
This experiment has the nice property that one can generate a signal that tells us whether the particles are entangled or not (the go/no-go), just before the settings are randomly chosen and the results read out (or, to be more precise, the signal is recorded outside the light cone of the readout).

As I read it: the settings are selected before the swapping is done and the measurement is performed around the time the swapping is done. And as you say, the measurement is performed outside the causal light cone of the swapping, and the swapping is performed outside the causal light cone of the measurement. So neither can affect the other.
 
  • #36
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".
 
  • #37
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".
It's a technical term. It has been around for more than 20 years. http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.71.4287

‘‘Event-ready-detectors’’ Bell experiment via entanglement swapping
M. Żukowski, A. Zeilinger, M. A. Horne, and A. K. Ekert
Phys. Rev. Lett. 71, 4287 – Published 27 December 1993
 
  • #38
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".

Well, it is a very special and unusual sort of "detection". We start with two photon-electron entangled pairs, and if we detect (that is, interact with) the photons in just the right way, we end up with the electrons entangled instead. When this happens, we say that we've "swapped" the photon-electron entanglement that we had for the electron-electron entanglement that we wanted. And as Gill says, the term has been used that way for decades.

Googling for "Barrett-Kok entanglement swapping" will bring up a whole bunch of fairly technical stuff on the techniques used in this experiment.
 
  • #39
Nugatory said:
Well, it is a very special and unusual sort of "detection". We start with two photon-electron entangled pairs, and if we detect (that is, interact with) the photons in just the right way, we end up with the electrons entangled instead. When this happens, we say that we've "swapped" the photon-electron entanglement that we had for the electron-electron entanglement that we wanted. And as Gill says, the term has been used that way for decades.

Googling for "Barrett-Kok entanglement swapping" will bring up a whole bunch of fairly technical stuff on the techniques used in this experiment.
If we had been talking about correlation instead of entanglement, it would not have been weird at all. Suppose particles A and B have equal and opposite momenta whose magnitude is random. Suppose that, completely independently of this, particles C and D also have equal and opposite, randomly varying momenta. Now catch particles B and C, and if their momenta are equal and opposite, say "go". It is no surprise that particles A and D have highly correlated momenta if we only look at them on those occasions when we got the "go" signal. The extraordinary (and beautiful) thing is how the mathematics of Hilbert-space quantum entanglement works in just the same way...
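Here is a minimal numerical sketch of that classical analogy (Python; the sample size, the tolerance and the variable names are mine, just to illustrate the heralded sub-ensemble):

```python
import random

# Two independent pairs: (A, B) and (C, D), each with equal-and-opposite
# random momenta.  Say "go" when B and C happen to be (nearly) equal and
# opposite, then look at A and D only in that heralded subset.
random.seed(0)
heralded = []
for _ in range(200000):
    pA = random.gauss(0, 1); pB = -pA            # A-B pair
    pC = random.gauss(0, 1); pD = -pC            # C-D pair, independent of A-B
    if abs(pB + pC) < 0.02:                      # the "go" signal
        heralded.append((pA, pD))

def corr(xy):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(xy)
    mx = sum(x for x, _ in xy) / n
    my = sum(y for _, y in xy) / n
    sxy = sum((x - mx) * (y - my) for x, y in xy)
    sxx = sum((x - mx) ** 2 for x, _ in xy)
    syy = sum((y - my) ** 2 for _, y in xy)
    return sxy / (sxx * syy) ** 0.5

# Within the heralded subset, A and D come out equal and opposite (corr ~ -1),
# even though the A-B and C-D pairs were generated completely independently.
print(len(heralded), round(corr(heralded), 3))
```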
 
  • #40
Perhaps a stupid question, but do they also do these kinds of tests with unentangled particles for calibration? And if so, how different are the results from entangled particles?
 
  • #41
georgir said:
I'm still finding it weird, even funny, that you (and "they") are calling it "swapping" instead of just "detecting".

It is an active process that creates the entangled state. The requirement is that the photons are i) detected with a certain signature AND ii) indistinguishable. It is theoretically possible to perform i) without ii). If you were to do that, you would not get entanglement, thus proving that the entanglement of A & D is dependent on the swapping operation of B & C. This is the creation of the event-ready pairs.

An example of how you could accomplish such detection (of B & C) without creating entanglement (for A & D) would be to use photons B & C that have different frequencies. Then you would be post-selecting ONLY, and no entangled pairs (of A & D) result. So post-selection alone won't accurately describe what is happening.
 
  • #42
akhmeteli said:
First, it was noted in another thread that the probability $p$=0.019/0.039 is not very impressive.
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link). That's what I have read, and today there was another piece discussing the experiment:

Physicists claim 'loophole-free' Bell-violation experiment
http://physicsworld.com/cws/article...claim-loophole-free-bell-violation-experiment
There are still a few ways to quibble with the result. The experiment was so tough that the p-value – a measure of statistical significance – was relatively high for work in physics. Other sciences like biology normally accept a p-value below 5 per cent as a significant result, but physicists tend to insist on values millions of times smaller, meaning the result is more statistically sound. Hanson’s group reports a p-value of around 4 per cent, just below that higher threshold. That isn’t too concerning, says Zeilinger. “I expect they have improved the experiment, and by the time it is published they’ll have better data,” he says. “There is no doubt it will withstand scrutiny.”
Quantum weirdness proved real in first loophole-free experiment
https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/
 
Last edited:
  • #43
bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably (see second link). That's what I have read, and today there was another piece discussing the experiment:

Physicists claim 'loophole-free' Bell-violation experiment
http://physicsworld.com/cws/article...claim-loophole-free-bell-violation-experiment

Quantum weirdness proved real in first loophole-free experiment
https://www.newscientist.com/articl...roved-real-in-first-loophole-free-experiment/
As long as the experimenters only have N = 245 pairs of measurements and S = 2.4, that p-value is not going to go down. Here is a simple calculation which explains why. An empirical correlation between binary variables based on a random sample of size N has a variance of (1 - rho^2)/N. The worst case is rho = 0 and variance 1/N. In fact we are looking at four empirical correlations equal to approximately +/- 0.6. So if we believe that we have four random samples of pairs of binary outcomes, then each empirical correlation has a variance of about 0.64 / N, where N is the number of pairs of observations for each pair of settings. If the four samples are statistically independent, the variance of S is about 4 * 0.64 / N with N = 245 / 4. This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is not larger than 2. Since N is large enough that the normal approximation is not bad, we can say that we have a 0.4 / 0.2 = 2 standard deviation departure from local realism. The probability that this occurs by chance is about 0.025.
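For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation as a few lines of Python (my own sketch, not the authors' analysis and not the linked simulation):

```python
from math import sqrt, erf

N, S_observed = 245, 2.4
rho = 0.6                                    # the four correlations are ~ +/-0.6

var_per_corr = (1 - rho ** 2) / (N / 4)      # variance of one empirical correlation
var_S = 4 * var_per_corr                     # four independent setting combinations
se_S = sqrt(var_S)
z = (S_observed - 2.0) / se_S                # departure from the local-realist bound S <= 2

p = 0.5 * (1 - erf(z / sqrt(2)))             # one-sided normal-approximation p-value
print(round(var_S, 3), round(se_S, 2), round(z, 1), round(p, 3))
# ~0.042, ~0.2, ~2.0 sigma, p ~ 0.025 -- matching the rough numbers above
```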

Here's a little Monte Carlo simulation experiment which shows that this rough calculation is pretty reliable http://rpubs.com/gill1109/delft

However, if they actually performed several experiments, and the N = 245 is just one of them, and they combine the results of the several experiments in a statistically responsible way, then obviously their p-value can get much smaller.
 
  • #44
bohm2 said:
I'm guessing that by the time the peer-reviewed publication comes out, more data will bring that number down considerably.
Yes, I guess this was the maximum p-value they felt they could get away with and still establish priority (they are in a race here), and that they are still running the experiment to achieve p-values at a level that is now regarded as the norm in physics (4-5 sigma).
 
Last edited:
  • #45
Michel_vdg said:
Perhaps a stupid question, but do they also do these kinds of tests with unentangled particles for calibration? And if so, how different are the results from entangled particles?
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?
 
  • #46
Michel_vdg said:
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?

I am sure there were no end of null results while they were getting everything tuned up. Realistically, I am sure they were calibrating to get a maximum of perfect correlations - which is the sure fire way to see that you are experiencing entanglement. The closer you get to 100% (as opposed to 50% for non-entangled pairs), the better entanglement you are achieving.

Are you asking why they don't publish their process for calibration? And along with that, the null results too? That is not usually published as part of most papers because it is of little interest to the intended readers. If the result is in concert with theory (as in this case), there is not much incentive. If there were controversy as to the accuracy of the results, or whether they can be replicated, then I am sure they would gladly provide that information.

Besides, this is not the first experiment to use entanglement. You don't really need to prove out every element of an experiment every time. (Are the detectors reliable, beam splitters effective, etc?)
 
  • #47
Michel_vdg said:
Why aren't you experts answering my question? Wouldn't it be logical to also do these tests with a placebo? Isn't that standard procedure, like in clinical trials when testing the effectiveness of medications or devices?

Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.
 
  • #48
Is entanglement swapping simply post-selection? From the paper:

"We generate entanglement between the two distant spins by entanglement swapping in the Barrett-Kok scheme using a third location C (roughly midway between A and B, see Fig. 1e). First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state..."


You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swap by bringing the photons together in a manner that leaves them distinguishable, and then there would be no swapping. For example: their paths do not overlap, their frequencies are different, etc.

By calling this POST-SELECTION you are really saying you think a local realistic explanation is viable. But if you are asserting local realism as a candidate explanation, obviously nothing that occurs at C can affect the outcomes at A & B; it occurs too late! So making the photons registered at C overlap and be indistinguishable cannot make a difference to the sub-sample. But it does, because if they are distinguishable there is no Bell inequality violation.

So that is a contradiction. Don't call it post-selection unless you think the overlap can be done away with and you would still get a sample that shows the same statistics as with indistinguishable pairs.
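To make the point concrete, here is a minimal numpy sketch of the state algebra (my own illustration, not the authors' code): projecting the two photons onto the Bell state psi-minus leaves the distant spins maximally entangled, while a "detection" that leaves the photons distinguishable does not.

```python
import numpy as np

up, dn = np.array([1., 0.]), np.array([0., 1.])        # electron spin basis
early, late = np.array([1., 0.]), np.array([0., 1.])   # photon time-bin basis

# Each spin starts entangled with the emission time of its own photon.
pair_A = (np.kron(up, early) + np.kron(dn, late)) / np.sqrt(2)   # spin A, photon 1
pair_B = (np.kron(up, early) + np.kron(dn, late)) / np.sqrt(2)   # spin B, photon 2

# Full four-particle state, indices ordered (spin A, photon 1, spin B, photon 2).
state = np.kron(pair_A, pair_B).reshape(2, 2, 2, 2)

# Heralded outcome at C: the two PHOTONS are projected onto the Bell state psi-.
psi_minus = ((np.kron(early, late) - np.kron(late, early)) / np.sqrt(2)).reshape(2, 2)
spins = np.einsum('apbq,pq->ab', state, psi_minus)     # contract out the photons
spins /= np.linalg.norm(spins)
print(np.round(spins, 3))   # ~[[0, 0.707], [-0.707, 0]]: the spins end up in psi-

# By contrast, a "detection" that projects the photons onto a distinguishable
# product outcome (say |early, late>) leaves the spins in a product state:
spins2 = np.einsum('apbq,pq->ab', state, np.outer(early, late))
spins2 /= np.linalg.norm(spins2)
print(np.round(spins2, 3))  # [[0, 1], [0, 0]]: just |up>|down>, no entanglement
```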
 
  • #49
Nugatory said:
Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.

"Before running the Bell test we first characterize the setup and the preparation of the spin-spin entangled state."


They go on to provide background on the key points. There is also Supplementary Information available, which they refer to.
 
  • #50
The important thing here is that the recording at C (which basically signals that the pair is an interesting, successfully entangled pair) is spacelike separated from the random selection of settings and the read-out of the spins at both A and B. So the "post-selection" can in no local way depend on the settings at A or B, or vice versa. This rules out LHV explanations, and the detection loophole.
 
Last edited:
  • #51
Nugatory said:
Calibrating the test equipment is standard procedure, they did a fair amount of it, and they discussed the most important elements of it in the paper.
Yes, the equipment needs to be calibrated, that's also the case in medicine, but I thought that once it is all calibrated they would also do a non-entangled test run to lay alongside the 245 pairs of measurements they have now obtained.
 
  • #52
Michel_vdg said:
... but I thought that once it is all calibrated they would also do a non-entangled test run to lay alongside the 245 pairs of measurements they have now obtained.

If they could get entanglement without swapping, that would be even bigger news! :smile:
 
  • #53
DrChinese said:
You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swap by bringing the photons together in a manner that leaves them distinguishable, and then there would be no swapping. For example: their paths do not overlap, their frequencies are different, etc.
Am I missing something? Whether the photons are indistinguishable or not is measured.
In the paper about fig. 3b they write:
"(b) Time-resolved two-photon quantum interference signal. When the NV centres at A and B emit indistinguishable photons (orange), the probability of a coincident detection of two photons, one in each output arm of the beam-splitter at C is expected to vanish. The observed contrast between the case of indistinguishable versus the case of distinguishable photons of 3 versus 28 events in the central peak yields a visibility of (90±6)% (Supplementary Information)."
 
  • #54
billschnieder said:
And the photons are produced by the microwave pulses hitting the crystals (https://d1o50x50snmhul.cloudfront.net/wp-content/uploads/2015/08/09_12908151-800x897.jpg)? And the post-selection is based on the photons?
Yes, the post-selection is based on the photons. But these photons are emitted earlier, before the measurement settings are generated by the RNG.
There are two pulses: an earlier one that generates the electron-photon entanglement, and a later one that can be one of two different pulses that rotate the spin by two different angles.
 
  • #55
zonde said:
Am I missing something? Whether the photons are indistinguishable or not is measured.
In the paper about fig. 3b they write:
"(b) Time-resolved two-photon quantum interference signal. When the NV centres at A and B emit indistinguishable photons (orange), the probability of a coincident detection of two photons, one in each output arm of the beam-splitter at C is expected to vanish. The observed contrast between the case of indistinguishable versus the case of distinguishable photons of 3 versus 28 events in the central peak yields a visibility of (90±6)% (Supplementary Information)."

You didn't miss anything. But they are measured indirectly to determine indistinguishability, just not at C.

At C they are made to overlap. This is done in the setup. That makes them indistinguishable, and causes the entanglement swap. The measurement is later performed at A and B by observing a violation of a Bell inequality. That demonstrates the entanglement swap occurred, in accordance with theory.
 
  • #56
From their paper (page 3):

"First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state[...]. These detections herald the successful preparation and play the role of the event-ready signal in Bell's proposed setup. As can be seen in the spacetime diagram in Fig. 2a, we ensure that this event-ready signal is space-like separated from the random input bit generation at locations A and B."
 
Last edited:
  • #57
gill1109 said:
This gives a variance of 0.042 and a standard error of 0.2. We observed S = 2.4, but our null hypothesis says that its mean value is not larger than 2. Since N is large enough that the normal approximation is not bad, we can say that we have a 0.4 / 0.2 = 2 standard deviation departure from local realism.
Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic and it could have an expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down.

One approach is to work out the randomization distribution of S (actually S without the modulus) by randomizing the data between bins. This will give an estimate of the mean and variance of the statistic.
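A minimal sketch of that randomization recipe in Python (the data below are placeholder noise and the names are mine; only the mechanics of the shuffle matter, the printed numbers mean nothing):

```python
import random

random.seed(1)
N = 245
# One setting combination per trial and the +/-1 product of the two outcomes.
# Placeholder noise here; in a real analysis these come from the experiment.
settings = [random.choice([(0, 0), (0, 1), (1, 0), (1, 1)]) for _ in range(N)]
products = [random.choice([+1, -1]) for _ in range(N)]

def S_stat(settings, products):
    """CHSH combination of the four per-setting average products (no modulus)."""
    corr = {}
    for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        vals = [p for st, p in zip(settings, products) if st == s]
        corr[s] = sum(vals) / len(vals) if vals else 0.0
    return corr[(0, 0)] - corr[(0, 1)] + corr[(1, 0)] + corr[(1, 1)]

S_obs = S_stat(settings, products)

null = []
for _ in range(2000):
    random.shuffle(settings)          # break any settings-outcome association
    null.append(S_stat(settings, products))

p = sum(1 for s in null if s >= S_obs) / len(null)   # one-sided randomization p-value
print(round(S_obs, 3), round(p, 3))
```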
 
Last edited:
  • #58
Mentz114 said:
Perhaps the null hypothesis is a bit pessimistic. S is a sample statistic and it could have an expectation much closer to zero than 2 under a no-correlation null hypothesis. Even adding the SD of this statistic to the mix could result in a much higher confidence level. But it could also go down.

One approach is to work out the randomization distribution of S (actually S without the modulus) by randomizing the data between bins. This will give an estimate of the mean and variance of the statistic.
They also computed a randomization-based, possibly conservative, p-value. It was 0.039.
 
  • #59
DrChinese said:
You didn't miss anything. But they are measured indirectly to determine indistinguishability, just not at C.

At C they are made to overlap. This is done in the setup. That makes them indistinguishable, and causes the entanglement swap. The measurement is later performed at A and B by observing a violation of a Bell inequality. That demonstrates the entanglement swap occurred, in accordance with theory.
Just to be sure about details I looked at the paper about their previous experiment - http://arxiv.org/abs/1212.6136
They do a lot of things to tune the setups at A and B and make the photons indistinguishable. Success of the tuning is verified by observing HOM interference at C. But I still don't get your argument against calling detection at C a post-selection.
DrChinese said:
Is entanglement swapping simply post-selection? From the paper:

"We generate entanglement between the two distant spins by entanglement swapping in the Barrett-Kok scheme using a third location C (roughly midway between A and B, see Fig. 1e). First we entangle each spin with the emission time of a single photon (time-bin encoding). The two photons are then sent to location C, where they are overlapped on a beam-splitter and subsequently detected. If the photons are indistinguishable in all degrees of freedom, the observation of one early and one late photon in different output ports projects the spins at A and B into the maximally entangled state..."


You can see that there MUST be action occurring at C to cause the entanglement. The detection part consists of i) photons arriving in different ports of the beam-splitter; and ii) arrival within a specified time window. Having them indistinguishable is NOT part of the detection and heralding! That is the part that causes (is necessary for) the entanglement swap: they are overlapping & indistinguishable. You could detect without the swap by bringing the photons together in a manner that leaves them distinguishable, and then there would be no swapping. For example: their paths do not overlap, their frequencies are different, etc.

By calling this POST-SELECTION you are really saying you think a local realistic explanation is viable. But if you are asserting local realism as a candidate explanation, obviously nothing that occurs at C can affect the outcomes at A & B; it occurs too late! So making the photons registered at C overlap and be indistinguishable cannot make a difference to the sub-sample. But it does, because if they are distinguishable there is no Bell inequality violation.

So that is a contradiction. Don't call it post-selection unless you think the overlap can be done away with and you would still get a sample that shows the same statistics as with indistinguishable pairs.
Detection at C distinguishes different entanglement states:
$$|\psi^-\rangle = (|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle)/\sqrt{2}$$
$$|\psi^+\rangle = (|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle)/\sqrt{2}$$
$$|\phi^\pm\rangle = (|\uparrow\uparrow\rangle \pm |\downarrow\downarrow\rangle)/\sqrt{2}$$
And without singling out (post-selecting) just one of those entangled states there is no Bell inequality violation.
And the action that is occurring at C is interference between the two photons (photon modes), so that $|\psi^-\rangle$ and $|\psi^+\rangle$ can be told apart.
 
  • #60
Since they used a quantum random number generator instead of human random decisions, can no conclusions be drawn regarding this aspect? Something like... we have as much "free will" as a quantum random number generator? They do mention "free will" in the arXiv paper.
 
