Photon entanglement and fair sampling assumption

In summary: discussion of the correctness of the fair sampling assumption in photon entanglement experiments seems to be avoided, even though an untested fair sampling assumption undermines the credibility of claimed Bell inequality violations. I would like to ask if there are others who share this view.
  • #1
zonde
I am wondering why there are no discussions about the correctness of the fair sampling assumption in photon entanglement experiments, so I would like to start one.

Bell's inequalities are derived by considering all emitted particles. But in real photon entanglement experiments only a portion of the emitted particles is detected, so in order to apply Bell's inequalities to real experiments the so-called "fair sampling assumption" is required (that the photon sample detected is a faithful representative of the photon sample emitted).
Statements about violation of Bell's inequalities and nonlocality therefore lack credibility if some basic tests of this fair sampling assumption are not performed.
Of course it is fair to say that the fair sampling assumption probably cannot be conclusively proved in photon experiments, but on the other hand we cannot conclusively declare any experiment to be free from systematic errors. What makes the difference between a carefully performed experiment and a poor one is testing against variations in the experimental setup and its environment, to assess possible sensitivity to systematic errors.

So I would like to ask if there are others who share this view? Or why such discussions are avoided?



But to give direction to a possible discussion, I would like to describe three experiments that show what I have in mind (I have already mentioned them in some form in other discussions).

#1 Two-photon correlations in three-photon entanglement.
This experiment is meant more as a theoretical consideration, as motivation for doubting the fair sampling assumption.

We have three entangled photons. Two photons interact with polarizers that have a 45 deg relative angle between them, and the third photon's polarizer is oriented so that its angle lies between the first two (22.5 deg from each).
Two entangled photons measured with polarizers at a 45 deg relative angle will have 50% coincidences in the idealized case, since cos^2(45 deg) = 0.5.
The third entangled photon then has to have about 85% coincidences with the first photon and 85% coincidences with the second photon, since cos^2(22.5 deg) is approximately 0.85.
The maximum fraction of events in which all three photons can coincide is 50% (because that is the rate for the first two photons alone). So the remainder of each of the third photon's 85% coincidence rates must come from events shared with only one of the first two photons, at least 35% in each case. But now, for the third photon, taking the three-photon coincidence rate as x <= 50%:
x + (85% - x) + (85% - x) = 170% - x >= 120%
The only way to arrive at this obviously impossible total, if we do not question the empirical cos^2(rel. angle) formula, is a wrongly assumed fair sampling assumption.

To illustrate what I mean, here is a simple diagram. The first row shows the 50% coincidence between the first two photon streams (m = matching polarization, d = different polarization). The 1./3. row shows 85% coincidences between the 1st and 3rd photon streams. The 2./3. row shows the impossibility of having 85% coincidences between the 2nd and 3rd photon streams (if there is a match between 1./2. and 1./3., then there is a match between 2./3.; if there is a mismatch between 1./2. and a match between 1./3., then there is a mismatch between 2./3.).
1./2. mmmmm mmmmm ddddd ddddd
1./3. mmmmm mmmmm mmmmm mmddd
2./3. mmmmm mmmmm ddd__ __mmm mmmm
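The counting argument above can be checked mechanically. The sketch below is my own illustration, with the rates taken from the post: if each photon trio has definite per-trial outcomes, then in any single trial the three pairwise mismatch indicators can only total 0 or 2 (exactly the rule in the m/d diagram), which forces a triangle-like inequality on the averaged mismatch rates.

```python
# Mismatch rates implied by the cos^2 law for the quoted angles:
# pair 1/2 at 45 deg  -> 1 - 0.50 = 0.50 mismatches,
# pairs 1/3 and 2/3 at 22.5 deg -> 1 - 0.85 = 0.15 mismatches.
d12, d13, d23 = 0.50, 0.15, 0.15

# With definite outcomes, a 1/2 mismatch plus a 1/3 match forces a 2/3
# mismatch, so the averaged rates must satisfy
# |d12 - d13| <= d23 <= d12 + d13.
lower, upper = abs(d12 - d13), d12 + d13
print(round(lower, 2))            # 0.35 -- the 35% from the argument above
print(lower <= d23 <= upper)      # False: the three rates are incompatible
```

This is the same contradiction as the 170% - x >= 120% line, stated as an infeasibility check rather than a sum.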


#2 An experiment to test superposition of the wavefunction before detection but after polarization.
In QM it is considered that before measurement the wavefunction exists in a superposition of states. In order to test the polarization of photons, two pieces of equipment are used together: a polarizer and a detector. It is clear that we should consider the wavefunction to be in a superposition of polarization states before interaction with the polarizer. However, one can ask in what superposition the wavefunction exists after the polarizer but before the detector. The detector separates the sample of photons into two parts: one that is detected and another that is not. So it seems to me that the wavefunction exists in a superposition of detectability before interaction with the detector. Such a viewpoint of course contradicts the fair sampling assumption.

The actual experiment I have in mind is an EPR-type photon polarization experiment with two sites, a PBS at each site, and a detector at each of the four PBS outputs. In one of the four channels, between the detector and the PBS, we insert a wave plate that rotates the polarization angle by 90 deg.
If there is a detectability superposition of the wavefunction, then one can expect that this additional wave plate will change the outcome of the experiment (compared to the outcome without the plate).
It seems to me that the particular change to expect is that the wave plate will invert the modified channel's correlations with the other site's two channels.


#3 An experiment to test changes in coincidence detection rates as detection efficiency increases.
It is usually believed that a realistic explanation requires the whole sample (assuming it could all be detected) to show a linear zigzag graph of polarization correlation as a function of relative angle. There is another possibility if we do not assume fair sampling: it is possible to speculate that the whole sample would show a completely flat graph, i.e. that there is no correlation between the polarizations of entangled photons. Correlation would then appear only in the combined measurement of polarization and detectability.

An experiment that could test this kind of violation of the fair sampling assumption would consist of a usual EPR-type photon polarization experiment, but with measurements made at variable levels of detection efficiency. That can be achieved by varying the bias voltage of silicon avalanche photodetectors. If we test the two maximum-correlation angles (minimum coincidences and maximum coincidences), then increasing efficiency should lead to faster growth of the coincidence count at the minimum and slower growth at the maximum, with a possible tendency for the growth rates to become equal at 50% efficiency (increasing efficiency near the 50% level should contribute to the graph's minimum and maximum by the same amount).

Increasing photodetector efficiency increases the noise level (dark count rate), and that could explain why this tendency has not been noticed: qualitatively the two effects are indistinguishable, and only quantitative analysis can separate them.
So to make this experiment more reliable, cooled detectors with decreased dark count rates would be preferable.
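One quantitative handle on this proposal, sketched below with made-up placeholder numbers: if sampling is fair and each arm detects independently with efficiency eta, every coincidence rate simply scales as eta**2, so the ratio of the coincidence minimum to the maximum should not move as the bias voltage (efficiency) is raised. A drift in that ratio is the kind of signature the experiment would look for.

```python
def fair_coincidences(eta, ideal_rate):
    """Coincidence rate under fair sampling: each arm detects independently
    with efficiency eta, so a pair survives with probability eta**2."""
    return eta ** 2 * ideal_rate

# Assumed ideal (100%-efficiency) rates at the two maximal-correlation
# angles -- illustrative placeholders, not measured values.
ideal_max, ideal_min = 1.00, 0.02

for eta in (0.05, 0.25, 0.50):
    ratio = fair_coincidences(eta, ideal_min) / fair_coincidences(eta, ideal_max)
    print(eta, round(ratio, 3))   # the min/max ratio is the same at every eta
```

Under fair sampling the eta**2 factor cancels in the ratio, so any efficiency dependence of the min/max ratio (beyond dark-count effects) would point at the sampling itself.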
 
  • #2
That is a "fair" question. :smile: But don't expect it to be easy! There are a lot of good reasons this is not considered so big a deal at this time, going from the general to the more specific. I will also address your points in a second post.

Let's keep in mind and agree to the following at all times: If Bell test results (violating inequalities) are invalid because fair sampling is a bad assumption, then:

a) Local realism is true after all (otherwise the issue is moot, akin to the price of tea in China);
b) The predictions of QM match relevant tests within experimental limits;
c) The predictions of the true local realistic theory - which we will call LRT(theta) - are different than the predictions of QM - per Bell, they must be;
d) We will acknowledge that there are detector inefficiencies, and therefore for any entangled pair emitted, either 0, 1 or 2 photons may be detected within any given time window of size T; as a result, Bell tests must rely on coincidence counting within such windows;
e) There is a currently unknown mechanism by which an "unfair" sample is being presented;
f) Since the "unfair" sample matches the predictions of QM, where QM(theta) = cos^2(theta), the "bias" can be quantified as BIAS(theta) = QM(theta) - LRT(theta);
g) When testing Alice and Bob, we will discuss using photons from a Type I PDC source (matching polarizations for Alice and Bob), and use a PBS and 2 detectors on each side (2 splitters and 4 detectors total) so as to rule out the possibility that the detectors or the polarizing beam splitters are not consistent.

If we don't agree on these points, it will be hard to go very far as I think these are all either explicit to the debate or simply definitions.
 
  • #3
Now, assuming the above 7 rules, we can say the following:

1. We are missing a hypothetical LRT(theta). What is the true coincidence rate?

The usual math, assuming Malus and statistical independence of a single underlying polarization on both sides, gives us .25 + .5(cos^2(theta)) so that LRT(0) = .75. There are no perfect correlations! Of course, by our assumptions above that is not a problem. We simply have BIAS(0) = QM(0) - LRT(0) = 1.00 - .75 = .25. Similarly, we have BIAS(90) = QM(90) - LRT(90) = 0 - .25 = -.25. In fact, the only place where the two would agree is at theta=45 which yields QM(45) = LRT(45) = .5. Then BIAS(45) is 0.
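The numbers in this paragraph can be tabulated directly. A small sketch, using the candidate LRT above (nothing here is specific to any particular experiment):

```python
from math import cos, radians

def qm(theta):
    """QM match prediction, cos^2(theta)."""
    return cos(radians(theta)) ** 2

def lrt(theta):
    """The candidate local model from the post: .25 + .5*cos^2(theta)."""
    return 0.25 + 0.5 * cos(radians(theta)) ** 2

# Tabulate QM, LRT and BIAS = QM - LRT at the angles discussed:
# theta = 0 gives bias 0.25, theta = 45 gives 0, theta = 90 gives -0.25.
for theta in (0, 22.5, 45, 67.5, 90):
    bias = qm(theta) - lrt(theta)
    print(theta, round(qm(theta), 3), round(lrt(theta), 3), round(bias, 3))
```

The table makes the later point visible at a glance: the bias changes sign across the angle range, so no constant offset can account for it.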

Understand that I am not asserting that the above function LRT(theta) is the only one; it is merely one of a large number of possibilities. I would welcome any you care to propose; it won't matter.

2. Regardless of whether you accept my LRT(theta) in 1. above, the bias mechanism must vary with theta. Since QM(theta) varies between 0 and 1, there is NO LRT(theta) such that BIAS(theta) is a fixed amount. If there were, then LRT(theta) would be either >1 or <0 at some points.

3. If Alice and Bob are separated and reality is local, how can BIAS vary with theta? Clearly, there is no bias at 45 degrees, but there will be bias at most other angles. For the bias mechanism to make sense, it must somehow know the delta between Alice's and Bob's settings. And yet this violates our premise a).

I think you should be able to see the difficulties here.
 
  • #4
Let's look at the BIAS function in a more detailed manner:

If we were somehow able to sample 100% of all events, then the true results would be LRT(theta), not QM(theta), and the BIAS would be 0. So the BIAS function must include the sampling percentage (SP) as a parameter: as SP -> 1, BIAS(theta) -> 0.

Of course, the problem with this - which is not absolute but certainly a difficulty - is that in actuality as SP has increased there has been NO change in the BIAS function at all. Instead, we just get more and more confirmation of QM(theta)!
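To make the SP dependence concrete, here is one purely hypothetical form of such a bias function: a linear fade from the QM curve toward LRT as SP approaches 1. The fade shape is my own assumption, chosen only to illustrate the constraint BIAS(theta, SP) -> 0 as SP -> 1; as noted above, experiments have shown no such drift.

```python
from math import cos, radians

def qm(theta):
    return cos(radians(theta)) ** 2

def lrt(theta):
    return 0.25 + 0.5 * cos(radians(theta)) ** 2

def bias(theta, sp):
    """Hypothetical bias that vanishes as the sampling percentage sp -> 1
    (assumed linear fade -- an illustration, not a derived law)."""
    return (1 - sp) * (qm(theta) - lrt(theta))

for sp in (0.05, 0.50, 1.00):
    print(sp, round(bias(0, sp), 4))   # shrinks from ~0.24 to exactly 0
```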
 
  • #5
zonde said:
#1 Two-photon correlations in three-photon entanglement.
This experiment is meant more as a theoretical consideration, as motivation for doubting the fair sampling assumption.

We have three entangled photons. Two photons interact with polarizers that have a 45 deg relative angle between them, and the third photon's polarizer is oriented so that its angle lies between the first two (22.5 deg from each).
Two entangled photons measured with polarizers at a 45 deg relative angle will have 50% coincidences in the idealized case, since cos^2(45 deg) = 0.5.
The third entangled photon then has to have about 85% coincidences with the first photon and 85% coincidences with the second photon, since cos^2(22.5 deg) is approximately 0.85.
The maximum fraction of events in which all three photons can coincide is 50% (because that is the rate for the first two photons alone). So the remainder of each of the third photon's 85% coincidence rates must come from events shared with only one of the first two photons, at least 35% in each case. But now, for the third photon, taking the three-photon coincidence rate as x <= 50%:
x + (85% - x) + (85% - x) = 170% - x >= 120%
The only way to arrive at this obviously impossible total, if we do not question the empirical cos^2(rel. angle) formula, is a wrongly assumed fair sampling assumption.

To illustrate what I mean, here is a simple diagram. The first row shows the 50% coincidence between the first two photon streams (m = matching polarization, d = different polarization). The 1./3. row shows 85% coincidences between the 1st and 3rd photon streams. The 2./3. row shows the impossibility of having 85% coincidences between the 2nd and 3rd photon streams (if there is a match between 1./2. and 1./3., then there is a match between 2./3.; if there is a mismatch between 1./2. and a match between 1./3., then there is a mismatch between 2./3.).
1./2. mmmmm mmmmm ddddd ddddd
1./3. mmmmm mmmmm mmmmm mmddd
2./3. mmmmm mmmmm ddd__ __mmm mmmm

You can't go very far with this argument, as this is precisely the Bell argument reformulated. Bell's argument is that IF you assume there are well-defined answers for things you cannot observe - as in your example above - the results are not self-consistent. Therefore, the assumption is wrong.

Now, why didn't Bell ask himself whether the fair sampling assumption is to blame instead? Because Bell's Theorem itself does not depend on fair sampling at all! It asserts that NO local realistic theory can give the same results as QM. If you did a Bell test with a 100% sample, your results above could not make sense. You have actually demonstrated why local realism fails. Now, this result would not apply IF you had some OTHER function than the cos^2(theta) formula. OK, what is it? You will quickly see that finding an alternative formula which IS consistent is no piece of cake. And of course, it varies from experiment to experiment.
 
  • #6
DrChinese said:
Let's keep in mind and agree to the following at all times: If Bell test results (violating inequalities) are invalid because fair sampling is a bad assumption, then:
a) Local realism is true after all (otherwise the issue is moot, akin to the price of tea in China);
Yes

b) The predictions of QM match relevant tests within experimental limits;
Yes, but with reservations. The reason is that the main aim of these experiments is proving violation of Bell inequalities, and if we analyze the results from a different perspective, some questions may or may not arise.

c) The predictions of the true local realistic theory - which we will call LRT(theta) - are different than the predictions of QM - per Bell, they must be;
Let's formulate the local realistic theory as LRT(theta, SP), where SP is the sampling percentage.

d) We will acknowledge that there are detector inefficiencies, and therefore for any entangled pair emitted, either 0, 1 or 2 photons may be detected within any given time window of size T; as a result, Bell tests must rely on coincidence counting within such windows;
Yes

e) There is a currently unknown mechanism by which an "unfair" sample is being presented;
Let's say there is no well-formulated model that proposes a mechanism by which an "unfair" sample is being presented. But I think I can formulate something as a starting point for a discussion if there is interest.

f) Since the "unfair" sample matches the predictions of QM, where QM(theta) = cos^2(theta), the "bias" can be quantified as BIAS(theta) = QM(theta) - LRT(theta);
Let's formulate it this way: BIAS(theta, SP) = SP*QM(theta) - LRT(theta, SP)

g) When testing Alice and Bob, we will discuss using photons from a Type I PDC source (matching polarizations for Alice and Bob), and use a PBS and 2 detectors on each side (2 splitters and 4 detectors total) so as to rule out the possibility that the detectors or the polarizing beam splitters are not consistent.
Yes


Now more about the contradiction with QM.
In a real experiment, for QM(90) = 0 to match the experiment, we have to assume that a small number of pairs disentangle due to decoherence, so that EXP(90) = SP*QM(90) + SP*DEC + N, where EXP(theta) is the actual value from the experiment, DEC is the rate of disentangled pairs, and N is the noise due to dark counts.
Now, about these disentangled pairs: what does QM predict for this value? I assume there are no definite predictions, except that such pairs exist and their number should be minimized in experiments.

But so far we have assumed that the proportion between the number of entangled pairs and the number of disentangled pairs is constant with respect to SP.
But let's change that. What do we have now?
EXP(theta) = F1(SP)*QM(theta) + F2(SP)*DEC + N
I will choose F1(SP) and F2(SP) in a particular way: F2(SP) rises from 0 to its maximum over the interval, while F1(SP) starts at its maximum, drops to 0 at the midpoint of the interval, and reaches its maximum negative value at the end, so that its integral over the interval is 0.

And now it is possible to have LRT(theta, SP) such that BIAS(theta, SP) = F1(SP)*QM(theta) - LRT(theta, SP) = -F2(SP)*DEC.
We can even drop theta from BIAS().
Of course, the not-so-nice thing here is the negative probability.
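For concreteness, here is one pair of functions with the shapes described (my own example choices, nothing more): F1(SP) = 1 - 2*SP starts at its maximum, crosses zero at SP = 1/2, ends at the opposite extreme, and integrates to zero over [0, 1]; F2(SP) = SP rises from 0 to its maximum.

```python
def f1(sp):
    """Example: maximum at sp=0, zero at sp=0.5, maximum negative at sp=1."""
    return 1 - 2 * sp

def f2(sp):
    """Example: rises from 0 to its maximum over the interval."""
    return sp

# Numerically confirm the zero-integral property of f1 (midpoint rule).
n = 10_000
integral = sum(f1((i + 0.5) / n) for i in range(n)) / n
print(abs(integral) < 1e-9)   # True
```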

What do you say? Does it directly contradict QM or does it not?
 
  • #7
DrChinese said:
You can't go very far with this argument, as this is precisely the Bell argument reformulated. Bell's argument is that IF you assume there are well-defined answers for things you cannot observe - as in your example above - the results are not self-consistent. Therefore, the assumption is wrong.
No, that's not true: I do not make any assumptions in the description of this experiment about things behind the measurement equipment readings. Or if you have spotted one, point it out and I will try to fix it.
The only assumption is the cos^2(theta) formula, and that it does not depend on the sampling percentage, i.e. the prediction is SP*cos^2(theta) even if SP = 100%.

DrChinese said:
Now, why didn't Bell ask himself whether the fair sampling assumption is to blame instead? Because Bell's Theorem itself does not depend on fair sampling at all! It asserts that NO local realistic theory can give the same results as QM. If you did a Bell test with a 100% sample, your results above could not make sense. You have actually demonstrated why local realism fails. Now, this result would not apply IF you had some OTHER function than the cos^2(theta) formula. OK, what is it? You will quickly see that finding an alternative formula which IS consistent is no piece of cake. And of course, it varies from experiment to experiment.
Try to analyze these experiments using some interpretation that claims to have a solution for the violation of Bell's inequalities, and see what you get. Can you still hold to the cos^2(theta) formula?
 
  • #8
zonde said:
a) Local realism is true after all (otherwise the issue is moot, akin to the price of tea in China);
Yes

b) The predictions of QM match relevant tests within experimental limits;
Yes, but with reservations. The reason is that the main aim of these experiments is proving violation of Bell inequalities, and if we analyze the results from a different perspective, some questions may or may not arise.

c) The predictions of the true local realistic theory - which we will call LRT(theta) - are different than the predictions of QM - per Bell, they must be;
Let's formulate the local realistic theory as LRT(theta, SP), where SP is the sampling percentage.

d) We will acknowledge that there are detector inefficiencies, and therefore for any entangled pair emitted, either 0, 1 or 2 photons may be detected within any given time window of size T; as a result, Bell tests must rely on coincidence counting within such windows;
Yes

e) There is a currently unknown mechanism by which an "unfair" sample is being presented;
Let's say there is no well-formulated model that proposes a mechanism by which an "unfair" sample is being presented. But I think I can formulate something as a starting point for a discussion if there is interest.

f) Since the "unfair" sample matches the predictions of QM, where QM(theta) = cos^2(theta), the "bias" can be quantified as BIAS(theta) = QM(theta) - LRT(theta);
Let's formulate it this way: BIAS(theta, SP) = SP*QM(theta) - LRT(theta, SP)

g) When testing Alice and Bob, we will discuss using photons from a Type I PDC source (matching polarizations for Alice and Bob), and use a PBS and 2 detectors on each side (2 splitters and 4 detectors total) so as to rule out the possibility that the detectors or the polarizing beam splitters are not consistent.
Yes


Now more about the contradiction with QM.
In a real experiment, for QM(90) = 0 to match the experiment, we have to assume that a small number of pairs disentangle due to decoherence, so that EXP(90) = SP*QM(90) + SP*DEC + N, where EXP(theta) is the actual value from the experiment, DEC is the rate of disentangled pairs, and N is the noise due to dark counts.
Now, about these disentangled pairs: what does QM predict for this value? I assume there are no definite predictions, except that such pairs exist and their number should be minimized in experiments.

But so far we have assumed that the proportion between the number of entangled pairs and the number of disentangled pairs is constant with respect to SP.
But let's change that. What do we have now?
EXP(theta) = F1(SP)*QM(theta) + F2(SP)*DEC + N
I will choose F1(SP) and F2(SP) in a particular way: F2(SP) rises from 0 to its maximum over the interval, while F1(SP) starts at its maximum, drops to 0 at the midpoint of the interval, and reaches its maximum negative value at the end, so that its integral over the interval is 0.

And now it is possible to have LRT(theta, SP) such that BIAS(theta, SP) = F1(SP)*QM(theta) - LRT(theta, SP) = -F2(SP)*DEC.
We can even drop theta from BIAS().
Of course, the not-so-nice thing here is the negative probability.

What do you say? Does it directly contradict QM or does it not?

A negative probability would ruin it completely. That was essentially my point: all attempts to create such a function will create contradictions. I don't understand the point of DEC, as it is not an important element in actual tests and may as well be zero (otherwise the experimental results would not support QM). Same for N, which is known precisely. You also can't have EXP() different from QM(), as these are identical, as we know from experiment.

And you cannot have the LRT() function depend on SP, since the idea is that LRT() yields a value different from QM() when all pairs are seen, per Bell. In other words, at no point can LRT() yield the same predictions as QM(). Adding SP as a variable has the effect of including the bias mechanism in the function, and we don't want that because we need to keep it separate. And we need to identify the LRT() function explicitly, or at least the values it generates.
 
  • #9
DrChinese said:
A negative probability would ruin it completely. That was essentially my point: all attempts to create such a function will create contradictions. I don't understand the point of DEC, as it is not an important element in actual tests and may as well be zero (otherwise the experimental results would not support QM). Same for N, which is known precisely. You also can't have EXP() different from QM(), as these are identical, as we know from experiment.
Regarding DEC, the idea is that it is an important element and cannot be zero (except perhaps as SP -> 0), which makes the QM prediction not entirely correct.
A bit of a mistake on my side: there is no negative probability in my equation, only a negative derivative, i.e. DEC "eats away" cos^2(theta) as SP increases. This is not a clear contradiction, but it is a bit counterintuitive that one factor decreases as the sampling percentage increases. However, I have found a way to get away from this.

We can rewrite this equation differently (I will drop N from the equation, because it can be determined fairly well):
EXP(theta) = F1(SP)*cos^2(theta) + F2(SP)*DEC

Writing DEC as cos^2(theta) + sin^2(theta), we modify it this way:
EXP(theta) = F1(SP)*cos^2(theta) + F2(SP)*cos^2(theta) + F2(SP)*sin^2(theta) =
(F1(SP) + F2(SP))*cos^2(theta) + F2(SP)*sin^2(theta)

That way there are no counterintuitive tendencies in the equation.
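The regrouping above is a plain trigonometric identity (writing DEC as cos^2(theta) + sin^2(theta) = 1), which can be spot-checked numerically:

```python
from math import cos, sin, radians
import random

random.seed(1)

# Check: F1*cos^2 + F2*(cos^2 + sin^2) == (F1 + F2)*cos^2 + F2*sin^2
# for arbitrary F1, F2 and theta.
for _ in range(1000):
    f1, f2 = random.random(), random.random()
    t = radians(random.uniform(0.0, 360.0))
    lhs = f1 * cos(t) ** 2 + f2 * (cos(t) ** 2 + sin(t) ** 2)
    rhs = (f1 + f2) * cos(t) ** 2 + f2 * sin(t) ** 2
    assert abs(lhs - rhs) < 1e-12
print("identity holds for all sampled values")
```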


DrChinese said:
And you cannot have the LRT() function dependent on SP, since the idea is that LRT() yields a value different than QM when all pairs are seen - per Bell. In other words, at no point can LRT() yield the same predictions as QM(). Adding in SP as a variable has the effect of including the bias mechanism in the function and we don't want to do that because we need to keep it separate. And we need to identify the LRT() function explicitly, or at least the values it generates.
Look, what is the fair sampling assumption in QM? We can express it as SP*QM(theta), i.e. the detected rate is always proportional to SP.
So if LRT() is to describe an unfair sampling situation, it can't have the same form as QM(), meaning SP*LRT(); instead SP should be a parameter of LRT(), that is, LRT(SP).

Reread the definition of the fair sampling assumption: "the photon sample detected is a faithful representative of the photon sample emitted". If we hypothetically reject that assumption, we say that the photon sample detected is not a faithful representative of the photon sample emitted, i.e. we have two different functions, one for part of the sample and one for the full sample. To unify these two functions into one, we have to include SP in the function.
 
  • #10
zonde said:
Reread the definition of the fair sampling assumption: "the photon sample detected is a faithful representative of the photon sample emitted". If we hypothetically reject that assumption, we say that the photon sample detected is not a faithful representative of the photon sample emitted, i.e. we have two different functions, one for part of the sample and one for the full sample. To unify these two functions into one, we have to include SP in the function.

I want you to tell me what a faithful representation of the entire sample (no undetected pairs) would look like. That is LRT(theta), and it does not depend on SP. We already know that the experimentally detected values match QM(theta) for all existing SP. We need to know what to expect when SP = 100%, because that is where Bell's Inequality kicks in!

It is funny, I have had this conversation with Caroline Thompson and other local realists. NEVER once have I been able to get anyone to tell me what a value of LR(theta) is for ANY angle setting. All I want to hear is a number. If there is a number that works, what is it? If not, supporters of LR should quit telling me that LR is viable. Here is what I am looking for:

LR(0) = ?
LR(22.5) = ?
LR(45) = ?
LR(67.5) = ?
LR(90) = ?

What is the TRUE value for correlations at the above angles, when the entire sample is considered and there are no undetected pairs (SP=100%) ?

Now obviously, since I know the experimental values, I can calculate the bias and see if it varies with theta (which would violate strict locality, which was tested by Weihs et al). I can also determine whether Bell's Inequality is violated on the full sample (which would mean the values cannot be realistic). You cannot put in numbers above that won't run afoul of one or the other of the prime assumptions of locality and realism.
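That check can be run mechanically. The sketch below computes the CHSH quantity for any model that gives a match probability as a function of relative angle, at the standard settings; the bound for any local realistic full sample is 2. The angles and the flat LR() = .5 proposal come from this thread; the rest is my own illustration.

```python
from math import cos, radians

def correlation(p_match):
    """E = (+1 for a match, -1 for a mismatch) = 2*p - 1."""
    return 2 * p_match - 1

def chsh(p):
    """CHSH S value at the standard settings a=0, a'=45, b=22.5, b'=67.5."""
    a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
    e = lambda x, y: correlation(p(abs(x - y)))
    return abs(e(a, b) - e(a, b2) + e(a2, b) + e(a2, b2))

qm_match   = lambda t: cos(radians(t)) ** 2   # cos^2(theta)
flat_match = lambda t: 0.5                    # flat full-sample proposal

print(round(chsh(qm_match), 3))    # 2.828 -- exceeds the local bound of 2
print(round(chsh(flat_match), 3))  # 0.0   -- well inside the local bound
```

So a flat full sample satisfies the inequality (as any local model must), but only at the price of predicting no correlations at all at 100% efficiency.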
 
  • #11
DrChinese said:
We already know that the experimentally detected values match QM(theta), for all existing SP.
We are discussing photon experiments, right? In that case this statement is clearly wrong:
we know QM(theta) only for an SP of about 5%.
Can you give a reference for other levels of SP?

DrChinese said:
It is funny, I have had this conversation with Caroline Thompson and other local realists. NEVER once have I been able to get anyone to tell me what a value of LR(theta) is for ANY angle setting. All I want to hear is a number. If there is a number that works, what is it? If not, supporters of LR should quit telling me that LR is viable. Here is what I am looking for:

LR(0) = ?
LR(22.5) = ?
LR(45) = ?
LR(67.5) = ?
LR(90) = ?

What is the TRUE value for correlations at the above angles, when the entire sample is considered and there are no undetected pairs (SP=100%) ?
I thought I had stated this already, but maybe it was not clear enough.

For SP=100% we have
LR(0) = 0.5
LR(22.5) = 0.5
LR(45) = 0.5
LR(67.5) = 0.5
LR(90) = 0.5
 
  • #12
zonde said:
I thought I had stated this already, but maybe it was not clear enough.

For SP=100% we have
LR(0) = 0.5
LR(22.5) = 0.5
LR(45) = 0.5
LR(67.5) = 0.5
LR(90) = 0.5

The SP issue is a bit more complicated, so I will discuss it a little later. (It doesn't really matter whether the SP for existing experiments is 5% or 25%, since they all say the same thing.)

But LR(0) = .5? This is in direct contradiction to realism. A realistic theory says that there are pre-existing hidden variables being revealed by observation. Measuring the same attribute on particles fulfilling a conservation rule should always yield an exact match. Therefore LR(0) = 1 is required.
 
  • #13
DrChinese said:
But LR(0) = .5? This is in direct contradiction to realism. A realistic theory says that there are pre-existing hidden variables being revealed by observation. Measuring the same attribute on particles fulfilling a conservation rule should always yield an exact match. Therefore LR(0) = 1 is required.
Different realistic theories can give different descriptions of entanglement. Your definition of entanglement for a realistic theory is not viable from my viewpoint (you implicitly include it in your statement that LR(0) = 1).
 
  • #14
zonde said:
Different realistic theories can give different descriptions of entanglement. Your definition of entanglement for a realistic theory is not viable from my viewpoint (you implicitly include it in your statement that LR(0) = 1).

Sure, they don't presume that entanglement is itself a real phenomenon, because there is no ongoing connection. EPR's example wasn't even entanglement as we know it. The question is whether knowledge of one member of a system is sufficient to tell you something about the other one. In that case, LR(0) must be 1. If it weren't, then there would be no hidden variables being revealed (when conservation rules are considered, of course). How would you formulate a realistic scenario with conservation of spin, for example?
 
  • #15
DrChinese said:
Sure, they don't presume that entanglement is itself a real phenomenon, because there is no ongoing connection. EPR's example wasn't even entanglement as we know it. The question is whether knowledge of one member of a system is sufficient to tell you something about the other one. In that case, LR(0) must be 1. If it weren't, then there would be no hidden variables being revealed (when conservation rules are considered, of course). How would you formulate a realistic scenario with conservation of spin, for example?
I would have to agree with you about LR(0) = 1 if we were discussing the case in which fair sampling holds.
But that is not the case: we are discussing unfair sampling.
When we talk about theoretical detection of the full sample, we revert to fair sampling (the full sample can only be viewed as a fair sample, because there is no "room" left for unfair sampling). So when we compare the predictions of realistic experiments (assuming they feature unfair sampling) with a theoretical full-sample experiment, we confront unfair sampling with fair sampling.

So how does unfair sampling change things? First of all, there should be some factor that determines the unfair sampling, i.e. the detection of some photons and the non-detection of others.
When we resort to the full sample, we clearly discard any information about the unknown parameter that is revealed by the unfair subsample.
If I hypothesize that the photons are entangled in a detectability parameter, then the full sample should show no sign of entanglement, because with the full sample we completely ignore that parameter.
So obviously your point that "if it didn't, then there are no hidden variables being revealed" is true. But it does not mean that hidden variables are not being revealed by the unfair subsample.
 
  • #16
zonde said:
1. I would have to agree with you about LR(0) = 1 if we were discussing the case in which fair sampling holds.
But that is not the case: we are discussing unfair sampling.
When we talk about theoretical detection of the full sample, we revert to fair sampling (the full sample can only be viewed as a fair sample, because there is no "room" left for unfair sampling). So when we compare the predictions of realistic experiments (assuming they feature unfair sampling) with a theoretical full-sample experiment, we confront unfair sampling with fair sampling.

2. So how does unfair sampling change things? First of all, there should be some factor that determines the unfair sampling, i.e. the detection of some photons and the non-detection of others.
When we resort to the full sample, we clearly discard any information about the unknown parameter that is revealed by the unfair subsample.
If I hypothesize that the photons are entangled in a detectability parameter, then the full sample should show no sign of entanglement, because with the full sample we completely ignore that parameter.

1. I am not talking about the experimental values, simply the LR() predictions for 100% of the sample. (No fair sampling assumption required.) It is axiomatic that LR(0)=1 because that is the realistic requirement if there are hidden variables.

Now, in all fairness, there *are* some LR variations that yield LR(0)<1 - as you suggest - but they do not support perfect correlations AND they have even larger deviations from observation. So the question becomes: what is the LR assumption you want to use to explain correlations? If LR()=.5 then you are saying that underlying conditions are purely by chance, which completely flies in the face of the observed correlations.

So we can go this route if you like, but it seems weird to me.

2. I am OK with this.
 
  • #17
I do not understand. On the one hand you are saying this:
DrChinese said:
2. I am OK with this.
But on the other hand you are saying this:
DrChinese said:
It is axiomatic that LR(0)=1 because that is the realistic requirement if there are hidden variables.

If LR()=.5 then you are saying that underlying conditions are purely by chance, which completely flies in the face of the observed correlations.
Additionally, there are no photon entanglement experiments where the full sample is observed, so how can you say that this "flies in the face of the observed correlations"?

So maybe you can explain your point a bit more?
 
  • #18
zonde said:
Additionally, there are no photon entanglement experiments where the full sample is observed, so how can you say that this "flies in the face of the observed correlations"?

So maybe you can explain your point a bit more?

If you have LR()=.5 then you have gigantic deviations from experiment. It doesn't come close to Malus, so you are now almost rejecting that too (a classical result from 1809). So you are now saying that the Bias function is:

Bias(theta, SP)=f(SP) * .5-cos^2(theta)

Also, as before: this varies with theta, is positive at some points and negative at others, and can vary in magnitude from 0 to 50%. That is a steep hill to climb, but it at least has this going for it: it does not violate Bell.
 
  • #19
DrChinese said:
If you have LR()=.5 then you have gigantic deviations from experiment. It doesn't come close to Malus, so you are now almost rejecting that too (a classical result from 1809).
I guess that by "deviations from experiment" you mean deviations from the empirical law SP*cos^2(theta).
About that empirical law SP*cos^2(theta): to consider it experimentally tested, it should be tested not only with different theta but with different SP as well. That is exactly the motivation for experiment #3 from my opening post.

About Malus law: if entangled particles are entangled in detectability but not polarization, then there is no direct relation to Malus law.
It seems that Malus law is still intuitively understood using photon hidden variables, but the traditional approach to hidden variables doesn't work for entanglement, so the situation is not very consistent anyway. A discussion about Malus law and how the situation can be made consistent does not promise to be easy, though, so I propose not to continue it unless we have no other points to discuss.

DrChinese said:
So you are now saying that the Bias function is:

Bias(theta, SP)=f(SP) * .5-cos^2(theta)

Also, as before: this varies with theta, is positive at some points and negative at others, and can vary in magnitude from 0 to 50%. That is a steep hill to climb, but it at least has this going for it: it does not violate Bell.
First of all, the bias function is:
Bias(theta, SP)=f(SP) * .5 - SP*cos^2(theta)

As I see it, this bias function describes the deviation from fair sampling. I do not see that negative values are problematic with my correction to the formula. As we are comparing SP*cos^2(theta) with an averaged value, there should be some values above the average and some below it.
However, this fair sampling makes me uncomfortable because it is borrowed from the macro level and, as I see it, implies that the QM level can be thought of as colliding billiard balls.
So I am not sure that it is physically meaningful to make a comparison with some averaged subsample (the f(SP) * .5 part).
 
  • #20
zonde said:
1. About Malus law: if entangled particles are entangled in detectability but not polarization, then there is no direct relation to Malus law.
It seems that Malus law is still intuitively understood using photon hidden variables, but the traditional approach to hidden variables doesn't work for entanglement, so the situation is not very consistent anyway. A discussion about Malus law and how the situation can be made consistent does not promise to be easy, though, so I propose not to continue it unless we have no other points to discuss.


2. First of all, the bias function is:
Bias(theta, SP)=f(SP) * .5 - SP*cos^2(theta)

As I see it, this bias function describes the deviation from fair sampling. I do not see that negative values are problematic with my correction to the formula. As we are comparing SP*cos^2(theta) with an averaged value, there should be some values above the average and some below it.
However, this fair sampling makes me uncomfortable because it is borrowed from the macro level and, as I see it, implies that the QM level can be thought of as colliding billiard balls.
So I am not sure that it is physically meaningful to make a comparison with some averaged subsample (the f(SP) * .5 part).

1. The issue about Malus is that classically, photons have polarization. Now, with your hypothesis it may be true that the detection of a photon is dependent on a hidden parameter associated with SP. But clearly: if you detect a photon, it matters not how many polarizers are between the source and the detector; it matters only what angles they are set at. And those settings always follow Malus. So it is hard to say that 2 entangled photons lose any connection at all to Malus, but I agree it is at least conceivable (as a hypothesis).

2. We agree that in your hypothesis it is as follows:
Bias(theta, SP)=f(SP) * ( .5 - SP*cos^2(theta) )

Just making sure I have the grouping as you see it.

Regardless, we still have the issue that Bias() is a function of theta, which it shouldn't be if locality applies.
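As a quick numeric illustration of that theta dependence, here is a sketch in Python of the Bias() grouping just stated, with f(SP)=1 as a placeholder assumption purely for illustration:

```python
import math

def bias(theta, sp, f_sp=1.0):
    # Bias(theta, SP) = f(SP) * (.5 - SP*cos^2(theta)), with f(SP)=1 assumed here
    return f_sp * (0.5 - sp * math.cos(theta) ** 2)

sp = 1.0
values = [bias(math.radians(d), sp) for d in (0, 30, 45, 60, 90)]
# values run from -0.5 at theta=0 to +0.5 at theta=90deg, with a sign
# change at 45deg: Bias() clearly varies with theta for this grouping.
```

Whatever form f(SP) takes, it only scales these values, so the theta dependence itself is unavoidable with this grouping.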
 
  • #21
DrChinese said:
1. The issue about Malus is that classically, photons have polarization. Now, with your hypothesis it may be true that the detection of a photon is dependent on a hidden parameter associated with SP. But clearly: if you detect a photon, it matters not how many polarizers are between the source and the detector; it matters only what angles they are set at. And those settings always follow Malus. So it is hard to say that 2 entangled photons lose any connection at all to Malus, but I agree it is at least conceivable (as a hypothesis).
Well, it seems that I went off track somewhere and made an error.
Let's see what my idea was at the start.
The photons are entangled in a superposition that corresponds to pure states like this:
(polarization H or V, detectability 1 or 0)
same polarization and same detectability - H1 and H1, H0 and H0, V1 and V1, V0 and V0
opposite polarization and opposite detectability - H1 and V0, H0 and V1, V1 and H0, V0 and H1
For example, taking the pure state H1 for the first photon, the possible states of the second photon are H1 and V0 (a coincidence will occur only for H1).
Now if we ignore detectability (full sample), for pure states H1/H0 of the first photon the possible states of the second photon are H1, H0, V0, V1 (theoretically a coincidence will occur for all of these states).
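That bookkeeping can be checked by brute enumeration. A small sketch in Python (just the eight pure states listed above, with my own tuple encoding):

```python
# The eight hypothesized pure states of the pair: same polarization with
# same detectability, or opposite polarization with opposite detectability.
pairs = [(('H', 1), ('H', 1)), (('H', 0), ('H', 0)),
         (('V', 1), ('V', 1)), (('V', 0), ('V', 0)),
         (('H', 1), ('V', 0)), (('H', 0), ('V', 1)),
         (('V', 1), ('H', 0)), (('V', 0), ('H', 1))]

# Subsample: only pairs where both photons are detectable give coincidences.
detected = [(a, b) for a, b in pairs if a[1] == 1 and b[1] == 1]
subsample_correlated = all(a[0] == b[0] for a, b in detected)

# Full sample: ignoring detectability, both matching and opposite polarizations occur.
full_correlated = all(a[0] == b[0] for a, b in pairs)
```

The detectable subsample shows perfect polarization correlation while the full sample shows none, which is the point of the hypothesis.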

Now about Malus law: there we measure only polarization, because from the perspective of detectability we have a fair sample (an even mixture of detectability irrespective of polarization).

Sorry for my mistake.

DrChinese said:
2. We agree that in your hypothesis it is as follows:
Bias(theta, SP)=f(SP) * ( .5 - SP*cos^2(theta) )

Just making sure I have the grouping as you see it.

Regardless, we still have the issue that Bias() is a function of theta, which it shouldn't be if locality applies.
This does not seem to be going well.
As I understand it, you are trying to make the point that even with SP<<100% LR() cannot match cos^2(theta), but I am not sure it will be that easy.
And LR(theta,SP) for SP<100% is not SP*0.5. Where did you get that?
If you want some approximation of LR(theta,SP), then use this one: LR(SP,theta)=F1(SP)*cos^2(theta) + F2(SP)*sin^2(theta), with the condition that F1(100%)=F2(100%)=0.5 (of course F1(SP) and F2(SP) are restricted to certain curves in the LR case).
 
  • #22
zonde said:
1. And LR(theta,SP) for SP<100% is not SP*0.5. Where did you get that?

2. If you want some approximation of LR(theta,SP), then use this one: LR(SP,theta)=F1(SP)*cos^2(theta) + F2(SP)*sin^2(theta), with the condition that F1(100%)=F2(100%)=0.5 (of course F1(SP) and F2(SP) are restricted to certain curves in the LR case).

1. I think the issue was defined by the BIAS() function. Let's return to that after we look at your LR().

2. I am good with this as a hypothesis:

LR(SP,theta)=F1(SP)*cos^2(theta) + F2(SP)*sin^2(theta)

And you have for SP=100%, F1=F2=.5 so that makes the math work out to:

LR(100%,theta) = .5*cos^2(theta) + .5*sin^2(theta)
= .5*(cos^2(theta) + sin^2(theta))
= .5*(1)
= .5

So for the full universe under your hypothesis (no subsample):

LR(theta) = .5

OK, this does not violate Bell's Inequality so we are OK so far. Next, we look at SP=5% or something like that. To match experiment, we need F1(5%) to be 1 and F2(5%) to be 0 so that LR(5%, theta) = cos^2(theta). Again, that is superficially plausible.

So now we have 2 hidden variable functions F1 and F2 with SP as a hidden variable driving them. And you even have the bonus that theta is not a factor in any of these, which I have said is a requirement for a Local Realistic theory. So where did I go wrong?

Ah, not so fast! It is true that LR(100%, theta) does not violate Bell's Inequality, and it is true that the LR(SP, theta) function does reproduce experimental results following the requirements I set out for there being some kind of BIAS(SP) function. But it FAILS that ever-present test: LR(SP, theta) violates Bell's Inequality at almost every point where SP<<100%. In other words, there is still no sample which fulfills the requirement that the predicted subsets all have non-negative probabilities. We are right back where we started: the hypothetical Local Realistic theory is not realistic after all. And how do I know this? Because the LR(5%) function now matches the predictions of QM. Per Bell, QED.

And that will always be the case. Either:

a) The LR() cannot pass Bell's test (non-negative probabilities for all subsets for all theta) and the realism requirement is violated;
b) The BIAS() function depends on theta, which means locality is not respected (since we require observer independence).
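The Bell test in (a) is easy to run numerically. A rough sketch in Python of the CHSH form of the test (`match_prob` is my stand-in for the LR() coincidence probability; the angles are the standard CHSH settings):

```python
import math

def chsh(match_prob):
    # Correlation from a match probability: E(theta) = 2*P_match(theta) - 1
    E = lambda t: 2 * match_prob(t) - 1
    # Standard CHSH settings: a=0, a'=45deg, b=22.5deg, b'=67.5deg
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    return abs(E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2))

S_full = chsh(lambda t: 0.5)               # LR(100%, theta) = .5
S_sub = chsh(lambda t: math.cos(t) ** 2)   # LR(5%, theta) = cos^2(theta)
```

The full-universe LR gives S = 0, safely within the CHSH bound of 2, while the cos^2(theta) subsample gives S = 2*sqrt(2) > 2 — the violation referred to above.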
 
  • #23
It seems like you are confusing what is local and what is non-local.
If we have a function that produces local sample spaces for Alice and Bob depending on the absolute value of each polarizer setting, it should be local under LR. But the correlation function is the intersection of the two sample spaces; it is non-local and can depend on the relative angle of the two polarizers.

If you take the difference of two non-local functions (Bias()), then it can be non-local as well, and I do not understand why you put such a constraint on this difference of functions.
Now we can try to split this Bias() function into two local functions that produce the required result; then these two local functions cannot depend on theta, but only on the absolute angles of the polarizers.

So your argumentation does not seem valid to me.
 
  • #24
zonde said:
It seems like you are confusing what is local and what is non-local.
If we have a function that produces local sample spaces for Alice and Bob depending on the absolute value of each polarizer setting, it should be local under LR. But the correlation function is the intersection of the two sample spaces; it is non-local and can depend on the relative angle of the two polarizers.

If you take the difference of two non-local functions (Bias()), then it can be non-local as well, and I do not understand why you put such a constraint on this difference of functions.
Now we can try to split this Bias() function into two local functions that produce the required result; then these two local functions cannot depend on theta, but only on the absolute angles of the polarizers.

So your argumentation does not seem valid to me.

The issue is: what would be the result for the intersection of 2 local spaces? The results should be consistent with: Alice's outcome is independent of the nature of the measurement at Bob; and vice versa. Yes, you have that with your formula. But not while simultaneously maintaining realism.

The situation is what I call a "squishy" argument: you offer locality without realism in some situations (as I described in my previous post), while trying to offer realism without locality in other situations. I claim that for any Sampling Percentage: a Bell Inequality cannot be violated (enforcing realism); and theta cannot figure as a variable in the Bias() function for Alice or Bob alone (enforcing locality).
 
  • #25
Can I intervene - the question is a good one, but the debate between you two is lost in the stratosphere. To my mind "photon entanglement" should be viewed as correlated waves.

Wikipedia has a mention of 'fair sampling' under http://en.wikipedia.org/wiki/Bell's_inequalities, saying it "limits the range of local theories to those which conceive of the light field as corpuscular. The assumption excludes a large family of local realist theories."

The Wiki article goes on to say Clauser and Horne[9] recognized that testing Bell's inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):

a light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.

Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.

The experiment was performed by Freedman and Clauser[13], who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman-Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement:

In the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer.

This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known as stochastic resonance[15]). One cannot conclude that this is the only local-realist alternative to Quantum Optics...
 
  • #26
Max Wallis said:
Can I intervene - the question is a good one, but the debate between you two is lost in the stratosphere. To my mind "photon entanglement" should be viewed as correlated waves.
I would say that discussions like this are bound to end in the stratosphere, as there are not many experimental facts about detectability.
It seems that detectability is not an object of interest in QM.

About waves: I think it's more or less clear that you can't get a consistent picture of QM from the corpuscular viewpoint. But the waves are quantized, so the corpuscular viewpoint can provide some means of analyzing the situation, provided of course you do not stretch it too far.
 
  • #27
DrChinese said:
The issue is: what would be the result for the intersection of 2 local spaces? The results should be consistent with: Alice's outcome is independent of the nature of the measurement at Bob; and vice versa. Yes, you have that with your formula. But not while simultaneously maintaining realism.
Well, I do not have that in my formula, as I didn't provide any means of constructing the two local spaces. But I see no reason to doubt that it is possible.

DrChinese said:
I claim that for any Sampling Percentage: a Bell Inequality cannot be violated (enforcing realism)
There are LR models that violate Bell inequalities with unfair sampling.
Thompson's Chaotic Ball was one. There are some papers by Adenier. And there are others.
I do not say that these models reflect the real situation, but your claim is clearly invalidated by these models.
 
  • #28
Max Wallis said:
To my mind "photon entanglement" should be viewed as correlated waves.
After thinking it over a bit, it seems that one can test how appropriate the corpuscular view is in the context of an EPR experiment.
For the test one can use experiment #3 from the opening post, but the data should be viewed from a different perspective. When the detector is manipulated so as to increase or decrease the singles detection rate, this should be compared with the increase/decrease of coincidences.
On the corpuscular view, if we double the singles detection rate at both detectors, the coincidence count should increase four times, i.e. the coincidence rate per single should double. A deviation from this would indicate that the corpuscular view is not completely appropriate.
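The expected scaling under the corpuscular view is just the product of two independent detection probabilities. A minimal sketch in Python (`eta`, the per-arm detection probability, is my own label):

```python
def expected_rates(n_pairs, eta):
    # With independent per-arm detection probability eta, a corpuscular
    # picture predicts: singles ~ n*eta per arm, coincidences ~ n*eta^2.
    singles = n_pairs * eta
    coincidences = n_pairs * eta * eta
    return singles, coincidences

s1, c1 = expected_rates(1_000_000, 0.10)
s2, c2 = expected_rates(1_000_000, 0.20)
# Doubling the singles rate quadruples the coincidence count,
# i.e. doubles the coincidences per single.
```

Any departure from this eta^2 scaling in the proposed experiment would be the deviation spoken of above.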
 
  • #29
The way I see it, with your hypothesis you are just moving the correlations from the observed photon counts to your detection efficiency.

When we perform a Bell experiment we see correlations - that is a fact. Now, since you want to claim that this is due to unfair sampling, and that the experiment would see no correlations if all the photons were detected, you have to introduce the correlation into your sampling efficiency to explain the observations. But the only result is that now the sampling efficiency is correlated instead of the photon counts, so why do you think that is an improvement?
 
Last edited:
  • #30
zonde said:
There are LR models that violate Bell inequalities with unfair sampling.
Thompson's Chaotic Ball was one. There are some papers by Adenier. And there are others.
I do not say that these models reflect the real situation, but your claim is clearly invalidated by these models.

NO, these are models that CLAIM to violate Bell with LR. They don't. That is what we are discussing here. Put forth the model and defend it. I am not going to argue with an empty claim (as these papers are NOT generally accepted science).
 
  • #31
PTM19 said:
The way I see it, with your hypothesis you are just moving the correlations from the observed photon counts to your detection efficiency.

When we perform a Bell experiment we see correlations - that is a fact. Now, since you want to claim that this is due to unfair sampling, and that the experiment would see no correlations if all the photons were detected, you have to introduce the correlation into your sampling efficiency to explain the observations. But the only result is that now the sampling efficiency is correlated instead of the photon counts, so why do you think that is an improvement?
Entanglement non-locality is traditionally evaluated by the level of Bell inequality violation.
The motivation behind this model is to show that decoherence of entanglement leads to local realism just as well. So hidden variables do not need to be straightforward properties that correspond one-to-one with observables, but might be a more subtle matter.
I do not say that this model is much of an improvement if viewed strictly, but I think it can provide a somewhat different perspective on the problem of entanglement non-locality.
 
  • #32
DrChinese said:
NO, these are models that CLAIM to violate Bell with LR. They don't. That is what we are discussing here. Put forth the model and defend it. I am not going to argue with an empty claim (as these papers are NOT generally accepted science).
Then I can say the same - I am not going to argue with an empty claim (such as: "I claim that for any Sampling Percentage: a Bell Inequality cannot be violated (enforcing realism)").
 
  • #33
zonde said:
Then I can say the same - I am not going to argue with an empty claim (such as: "I claim that for any Sampling Percentage: a Bell Inequality cannot be violated (enforcing realism)").

Hey, I'll be glad to discuss *your* claims - empty or not - with you. No issue there. And I'll be glad to discuss it with the author as well. That is what we are here for, to discuss.

But I won't discuss claims of cold fusion or perpetual motion machines or Bell disproofs that involve one person using another person's speculative article as the source. There are a number of authors out there that think they have "disproven" Bell - amazingly with completely different lines of thinking - but it is not possible to effectively debate THOSE papers because I will dispute one or more of their steps toward their conclusion. For example, I completely dispute 't Hooft's thinking on superdeterminism because it uses arguments that are not generally accepted science. So how can we debate that if you consider it reliable?

If we are going to discuss fair sampling, let's take what you assert and discuss this. I don't mind you using others' ideas IF you will defend them without reference to them being accepted science (unless of course they are). And we can debate anything I assert as well in opposition, all's fair. The point is: it is reasonable for me to cite accepted science while it is not reasonable for you to cite unaccepted science. And vice versa.
 
  • #34
Well, it took some time to come up with a model that can produce local samples for Bob and Alice giving a sinusoidal graph (so as to "fill" my claims with something tangible).

The justification behind this model is related to Malus law.
If we have a sample of photons that are polarized along a certain axis, then the size of the subsample (intensity) after passing a polarizer follows Malus law: I'=I*cos^2(theta).
So I hypothesize that all photons in the sample have the same hidden variable - polarization - but there is another hidden variable that determines the interval of angles where a photon will be filtered out by the polarizer or will pass through.
Obviously photons in the sample have different values of that other hidden variable (I will call it the static phase difference with respect to the context wave, or shortly "phase").
Photon "phase" values then have a certain distribution, described by the function abs(sin(2*theta)).
The next thing is how this "phase" is related for two entangled photons. Here I just hypothesize that the "phase" vectors are orthogonal for an entangled pair.

Now the model is (pol - polarization HV; ph - "phase" HV; alpha - polarizer angle):

Probability that there is a photon with the given values of the hidden variables:
abs(sin(2*ph))

Polarization of the photon, i.e. whether it will pass the polarizer or not (+ sign means it will pass):
sign(sin(alpha + pol)^2 - cos(ph)^2)
This function actually determines whether the polarizer angle falls in the photon's "passing" interval or in its "absorbing" interval, so it can be described with intervals without using sine and cosine functions.

Detection (+ sign means it will be detected):
sign(cos(ph)^2 - K) where K=sin(Pi/8)^2
Again this determines whether "ph" falls in a certain interval and so can be described without the cosine function.
Without the reduced detection the graph is like 1+cos^2(theta), so in order to consider this model I assumed some mechanism of detection that reduces this quasi-decoherence.

For Bob the formulas are the same, but the value of "ph" differs by Pi/4 (instead of Pi/2, because we get two periods for polarization as we rotate the polarizer full circle, and the "phase" too is adjusted to that periodicity in the formulas).

To test this model one has to generate a set of evenly distributed random values for "pol" and "ph" over the period 0 - 2*Pi and then plug them into the formulas (with the Pi/4 difference between Alice's "ph" and Bob's "ph", but the same "pol" value). If the probability from the first formula is calculated but not turned into "1" or "0" according to that probability, then of course the probabilities from Alice and Bob can be multiplied.

With this model the coincidence count compared to the singles count is about 8.5%. Based on the assumption that the polarizer filters out half of the photons, the calculated detection efficiency would be 17%.
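The test recipe in the last two paragraphs can be sketched as a Monte Carlo in Python. This is my reading of the formulas: photon existence is handled by rejection sampling instead of multiplying probabilities, and all names are mine; whether the output actually reproduces the claimed cos^2 curve and 8.5% ratio has to be judged from the run itself.

```python
import math
import random

random.seed(1)
K = math.sin(math.pi / 8) ** 2   # detection threshold from the model

def passes(alpha, pol, ph):
    # + sign: the polarizer angle falls in the photon's "passing" interval
    return math.sin(alpha + pol) ** 2 - math.cos(ph) ** 2 > 0

def detected(ph):
    # + sign: "ph" falls in the detectable interval
    return math.cos(ph) ** 2 - K > 0

def run(theta, n=100_000):
    singles_a = singles_b = coincidences = 0
    for _ in range(n):
        pol = random.uniform(0, 2 * math.pi)   # shared polarization HV
        ph = random.uniform(0, 2 * math.pi)    # Alice's "phase" HV
        if random.random() > abs(math.sin(2 * ph)):
            continue   # no photon pair for these hidden-variable values
        ph_b = ph + math.pi / 4   # Bob's "phase" differs by Pi/4
        a = passes(0.0, pol, ph) and detected(ph)
        b = passes(theta, pol, ph_b) and detected(ph_b)
        singles_a += a
        singles_b += b
        coincidences += a and b
    return singles_a, singles_b, coincidences

sa, sb, cc = run(math.pi / 8)   # e.g. 22.5 deg between the polarizers
```

Sweeping `theta` over 0..Pi/2 and plotting `cc` against it would show whether the model's coincidence curve is in fact sinusoidal.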
 
  • #35
zonde said:
Well, it took some time to come up with a model that can produce local samples for Bob and Alice giving a sinusoidal graph (so as to "fill" my claims with something tangible).

The justification behind this model is related to Malus law.
If we have a sample of photons that are polarized along a certain axis, then the size of the subsample (intensity) after passing a polarizer follows Malus law: I'=I*cos^2(theta).
So I hypothesize that all photons in the sample have the same hidden variable - polarization - but there is another hidden variable that determines the interval of angles where a photon will be filtered out by the polarizer or will pass through.
Obviously photons in the sample have different values of that other hidden variable (I will call it the static phase difference with respect to the context wave, or shortly "phase").
Photon "phase" values then have a certain distribution, described by the function abs(sin(2*theta)).
The next thing is how this "phase" is related for two entangled photons. Here I just hypothesize that the "phase" vectors are orthogonal for an entangled pair.

Now the model is (pol - polarization HV; ph - "phase" HV; alpha - polarizer angle):

Probability that there is a photon with the given values of the hidden variables:
abs(sin(2*ph))

Polarization of the photon, i.e. whether it will pass the polarizer or not (+ sign means it will pass):
sign(sin(alpha + pol)^2 - cos(ph)^2)
This function actually determines whether the polarizer angle falls in the photon's "passing" interval or in its "absorbing" interval, so it can be described with intervals without using sine and cosine functions.

Detection (+ sign means it will be detected):
sign(cos(ph)^2 - K) where K=sin(Pi/8)^2
Again this determines whether "ph" falls in a certain interval and so can be described without the cosine function.
Without the reduced detection the graph is like 1+cos^2(theta), so in order to consider this model I assumed some mechanism of detection that reduces this quasi-decoherence.

For Bob the formulas are the same, but the value of "ph" differs by Pi/4 (instead of Pi/2, because we get two periods for polarization as we rotate the polarizer full circle, and the "phase" too is adjusted to that periodicity in the formulas).

To test this model one has to generate a set of evenly distributed random values for "pol" and "ph" over the period 0 - 2*Pi and then plug them into the formulas (with the Pi/4 difference between Alice's "ph" and Bob's "ph", but the same "pol" value). If the probability from the first formula is calculated but not turned into "1" or "0" according to that probability, then of course the probabilities from Alice and Bob can be multiplied.

With this model the coincidence count compared to the singles count is about 8.5%. Based on the assumption that the polarizer filters out half of the photons, the calculated detection efficiency would be 17%.

Fine, an example for us to work with. A few questions about the terminology:

1. Do unentangled photons have POL, PH hidden variables too? If so, this provides additional testable constraints. You imply the answer is yes, but I want to be sure.

2. I get POL. But you say that PH (phase) is "distributed" according to the function abs(sin(2theta)). What is theta here?

3. Then you say that the probability a photon has a given value of the hidden variables is abs(sin(2*ph)). Can you give an example? The indicated hidden variable is PH and I do not believe this function sums to 100% across a suitable range (as it would need to).
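On question 3, the normalization can be checked numerically: integrated over 0..2*Pi, the weight abs(sin(2*ph)) comes to 4, not 1, so as stated it is not a normalized probability density. A quick sketch in Python:

```python
import math

n = 100_000
width = 2 * math.pi / n
# Midpoint-rule integral of abs(sin(2*ph)) over 0..2*Pi
total = sum(abs(math.sin(2 * (i + 0.5) * width)) * width for i in range(n))
# total comes out near 4.0, so the weights would need dividing by 4
# before they could be read as probabilities.
```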
 
