zonde
Gold Member
I am wondering why there are no discussions about the correctness of the fair sampling assumption in photon entanglement experiments, so I would like to start one.
Bell's inequalities are derived considering all emitted particles. But in real photon entanglement experiments only a portion of the emitted particles is detected, and therefore, in order to apply Bell's inequalities to real experiments, the so-called "fair sampling assumption" is required (that the detected photon sample is a faithful representative of the emitted photon sample).
So statements about violation of Bell's inequalities and nonlocality lack credibility if some basic tests of this fair sampling assumption are not performed.
Of course, it is fair to say that the fair sampling assumption can probably not be conclusively proved in photon experiments; but then, no experiment can be conclusively declared free from systematic errors either. What distinguishes carefully performed experiments from poor ones is testing against variations in the experimental setup and its environment, to assess possible sensitivity to systematic errors.
So I would like to ask: are there others who share this view? Or why are such discussions avoided?
To give the possible discussion some direction, I will describe three experiments that show what I have in mind (I have already mentioned them in some form in other discussions).
#1 Two-photon correlations in three-photon entanglement.
This experiment is meant more as a theoretical consideration, as motivation for doubting the fair sampling assumption.
We have three entangled photons. Two photons interact with polarizers set at a 45° relative angle, and the third photon's polarizer is oriented so that its angle lies between the first two (22.5° to each of them).
Two entangled photons behind polarizers at a 45° relative angle will show 50% coincidences in the idealized case, since cos^2(45°) = 0.5.
The third entangled photon then has to show 85% coincidences with the first photon and 85% coincidences with the second photon, since cos^2(22.5°) ≈ 0.85.
The maximum fraction for which all three photons can coincide is 50% (because that is the figure for the first two photons). So the remainder of each of the third photon's 85% coincidence fractions has to be separate, with the first photon alone and with the second photon alone, at minimum 35% each. But now, for the third photon, we have:
taking the three-photon coincidence fraction as x <= 50%,
x + (85% - x) + (85% - x) = 170% - x >= 120%,
i.e. these disjoint fractions of the third photon's stream would have to add up to at least 120%, which is impossible, since together they cannot exceed 100%.
If we do not question the empirical cos^2(rel. angle) formula, the only possible reason for arriving at this obviously wrong inequality is a wrongly assumed fair sampling assumption.
To illustrate what I mean, here is a simple diagram. The first row shows the 50% coincidence between the first two photon streams (m = matching polarization, d = different polarization). The 1./3. row shows the 85% coincidences between the 1. and 3. photon streams. The 2./3. row shows the impossibility of 85% coincidences between the 2. and 3. photon streams (if there is a match between 1./2. and a match between 1./3., then there is a match between 2./3.; if there is a mismatch between 1./2. and a match between 1./3., then there is a mismatch between 2./3.).
1./2. mmmmm mmmmm ddddd ddddd
1./3. mmmmm mmmmm mmmmm mmddd
2./3. mmmmm mmmmm ddddd ddmmm
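The counting above can also be checked by brute force. Here is a toy Python sketch I put together purely for illustration (one list slot per diagram character): it tries every placement of stream 3's mismatches with stream 1 and confirms that 85% agreement between the 2. and 3. streams is unreachable.

```python
# Brute-force check of the three-stream counting argument from the diagram.
# One list entry per diagram character; +1/-1 are the two polarization outcomes.
from itertools import combinations

def agreement(a, b):
    """Fraction of positions where streams a and b give the same outcome."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

N = 20                                # 20 slots, one per diagram character
s1 = [+1] * N                         # reference stream (photon 1)
s2 = [+1] * 10 + [-1] * 10            # 50% agreement with s1 (row 1./2.)

# Stream 3 must agree with s1 on 17 of 20 slots (85%, row 1./3.).
# Try every placement of its 3 mismatches and record the best
# achievable agreement between streams 2 and 3.
best = 0.0
for flips in combinations(range(N), 3):
    s3 = [-1 if i in flips else +1 for i in range(N)]
    best = max(best, agreement(s2, s3))

print(best)   # 0.65 -- the 85% of row 2./3. is unreachable
```

The best case, 65%, is exactly the diagram's arrangement: all three mismatches of 1./3. placed inside the mismatch region of 1./2.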
#2 Experiment to test superposition of the wavefunction after polarization but before detection.
In QM it is considered that, before measurement, the wavefunction exists in a superposition of states. To test the polarization of photons, two pieces of equipment are used together: a polarizer and a detector. It is clear that we should consider the wavefunction to be in a superposition of polarization states before its interaction with the polarizer. However, one can ask in what superposition the wavefunction exists after the polarizer but before the detector. The detector separates the sample of photons into two parts: one that is detected and one that is not. So it seems to me that the wavefunction exists in a superposition of detectability before its interaction with the detector. Such a viewpoint of course contradicts the fair sampling assumption.
The actual experiment I have in mind is an EPR-type photon polarization experiment with two sites, a PBS at each site, and a detector at each of the four PBS outputs. In one of the four channels, between the PBS and the detector, we insert a wave plate that rotates the polarization angle by 90°.
If there is a detectability superposition of the wavefunction, then one can expect this additional wave plate to change the outcome of the experiment (compared to the outcome without the plate).
It seems to me that the particular change to expect is that the wave plate will invert the modified channel's correlations with the other site's two channels.
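To make the contrast between the two predictions concrete, here is a toy Python sketch (my own illustration, not a calculation from any established model). The "hypothesis" column is just my reading of the detectability-superposition idea, namely that the plate swaps match and mismatch for the modified channel; standard QM predicts no change, since a detector behind the PBS is insensitive to the polarization rotation the plate introduces.

```python
# Toy comparison of predictions for experiment #2 (illustrative only).
# "hypothesis" = detectability-superposition reading: the wave plate
# inverts correlations for the modified channel.
# Standard QM: the plate changes nothing, because the detector behind
# the PBS does not care about the polarization rotation.
import math

def p_match(rel_deg):
    """Coincidence fraction from the empirical cos^2(rel. angle) law."""
    return math.cos(math.radians(rel_deg)) ** 2

for a in [0.0, 22.5, 45.0, 67.5, 90.0]:
    no_plate = p_match(a)
    qm_with_plate = no_plate          # standard QM: plate changes nothing
    hypo_with_plate = 1.0 - no_plate  # hypothesis: correlations inverted
    print(f"{a:5.1f} deg  no plate {no_plate:.3f}  "
          f"QM {qm_with_plate:.3f}  hypothesis {hypo_with_plate:.3f}")
```

So the experiment discriminates sharply: at every relative angle except 45° the two columns differ, most strongly at 0° and 90°.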
#3 Experiment to test changes in coincidence detection rates as detection efficiency increases.
It is usually believed that a realistic explanation requires the whole sample (assuming it were possible to detect it) to show a linear zigzag graph of polarization correlation as a function of relative angle. There is another possibility if we do not assume fair sampling. One can speculate that the whole sample shows a completely flat graph, i.e. that there is no correlation between the polarizations of the entangled photons; correlation then appears only in the combined measurement of polarization and detectability.
An experiment that can test this kind of violation of the fair sampling assumption would consist of the usual EPR-type photon polarization experiment, but with the measurements made at variable levels of detection efficiency. That can be achieved by varying the bias voltage of silicon avalanche photodetectors. If we test the two maximum-correlation angles (minimum coincidences and maximum coincidences), then increasing efficiency should lead to faster growth of the coincidence count at the minimum and slower growth at the maximum, with a possible tendency for the two growth rates to become equal at 50% efficiency (increasing efficiency near the 50% level should contribute to the graph's minimum and maximum by the same amount).
Increasing photodetector efficiency also increases the noise level (dark count rate), and that can explain a bias toward not noticing this tendency: qualitatively the two effects are indistinguishable, and only quantitative analysis can separate them.
So, to make this experiment more reliable, cooled detectors with decreased dark count rates would be preferable.
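The quantitative analysis could look like the following Python sketch. Only the standard accidental-coincidence estimate R1*R2*tau (singles rates times coincidence window) is an established formula; the rates, the 5 ns window, and the two bias settings are numbers I invented purely for illustration.

```python
# Sketch of the analysis step for experiment #3: compare the growth of
# coincidence counts at the minimum- and maximum-coincidence angles as
# detector bias (efficiency) is raised, after subtracting the accidental
# coincidences produced by the rising dark/singles rates.
# All numeric values below are invented for illustration.

TAU = 5e-9  # assumed coincidence window, seconds

def accidentals(singles1_hz, singles2_hz, tau=TAU):
    """Standard accidental-coincidence rate estimate: R1 * R2 * tau."""
    return singles1_hz * singles2_hz * tau

def corrected(coinc_hz, singles1_hz, singles2_hz):
    """Coincidence rate with the accidental background removed."""
    return coinc_hz - accidentals(singles1_hz, singles2_hz)

# bias setting -> (singles1, singles2, raw coincidences at the
# maximum-coincidence angle, raw coincidences at the minimum), all in Hz.
runs = {
    "low bias":  (50_000, 50_000, 900.0, 60.0),
    "high bias": (90_000, 90_000, 1500.0, 140.0),
}

lo = runs["low bias"]
hi = runs["high bias"]

growth_max = corrected(hi[2], hi[0], hi[1]) / corrected(lo[2], lo[0], lo[1])
growth_min = corrected(hi[3], hi[0], hi[1]) / corrected(lo[3], lo[0], lo[1])

print(f"max-angle growth factor: {growth_max:.2f}")
print(f"min-angle growth factor: {growth_min:.2f}")
# Fair sampling predicts equal growth factors after background subtraction;
# the hypothesis above predicts faster growth at the minimum.
```

The point of the subtraction is exactly the confusion mentioned above: raw counts at the minimum grow with bias partly through accidentals, so only the background-corrected growth factors can distinguish a genuine fair-sampling violation from dark-count noise.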