Arguments in favour of the fair sampling assumption

  • Context: Graduate
  • Thread starter: gespex
  • Tags: Sampling

Discussion Overview

The discussion revolves around the fair sampling assumption in the context of Bell's theorem and related inequalities, such as CH and CHSH. Participants explore the implications of classical hidden variable theories and their ability to match quantum mechanical predictions, questioning the necessity of the no enhancement assumption and the implications of local versus non-local theories.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation

Main Points Raised

  • Some participants propose that classical hidden variable theories could match the coincidence rates predicted by quantum mechanics, raising questions about the validity of the no enhancement assumption.
  • Others argue that formulating a local hidden variable theory that produces consistent results across varying detector angles is more complex than it appears, emphasizing the importance of relative angles in the detection probabilities.
  • A participant mentions a specific paper that claims to provide a local hidden variable model consistent with quantum predictions, prompting questions about why collapse is still assumed in quantum mechanics.
  • Concerns are raised about the implications of detector efficiency and whether missed detections could be inherent to the measurement process, questioning how to accurately assess detector performance.
  • One participant shares their experience simulating a hidden variable theory that aligns with CHSH and CH74 outcomes, seeking validation of their approach and understanding of the underlying assumptions.

Areas of Agreement / Disagreement

Participants express varying opinions on the validity of classical hidden variable theories and the necessity of the fair sampling assumption. There is no consensus on these points, and the discussion remains unresolved regarding the implications of local versus non-local theories and the interpretation of experimental results.

Contextual Notes

Participants note limitations in current understanding, particularly regarding the assumptions underlying detector efficiency and the challenges in measuring particles that cannot be detected. The discussion highlights the complexity of reconciling classical theories with quantum mechanical predictions without reaching definitive conclusions.

gespex
Hi all,

I'm no expert in quantum mechanics by any means, but I've been quite interested in, and done quite a bit of research on, Bell's theorem and related inequalities such as CH and CHSH. The derivations all look perfectly sound, except that they all contain the "no enhancement assumption", some even in the form of the stronger "fair sampling assumption".
Now it seems quite doable to formulate a theory that contains regular, classical hidden variables that match CH and CHSH in all correlations (i.e. the coincidence rate between detected photons with the polarizers either set at specific angles or removed). Simply by assuming there exist hidden variables that determine the chance that a particle is detected, it seems possible to match the coincidence rates that quantum mechanics predicts.
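
To pin down what such a model has to deliver (a standard local hidden variable factorization, written in my own notation rather than gespex's): each pair carries a hidden variable λ with distribution ρ(λ), and each wing decides whether its photon is detected from λ and its own polarizer angle alone, so the coincidence detection rate is

$$C(a,b) = \int d\lambda \,\rho(\lambda)\,\eta_A(\lambda,a)\,\eta_B(\lambda,b).$$

The question is then whether ρ, η_A and η_B (together with local outcome rules) can be chosen so that the correlations among the detected pairs reproduce the quantum prediction, e.g. E(a,b) = cos 2(a - b) for polarization-entangled photons, even though neither no-enhancement nor fair sampling holds.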

I have two questions regarding this:
1. Do such hidden variable theories exist that are completely classical and match the coincidence rates predicted by QM?
2. Regardless of whether such a theory exists, why is this "no enhancement assumption" assumed to be true? What are the arguments in favour of it?


Thanks in advance,
Gespex
 
I don't like bumping - but isn't this a fair question? I've found a paper that seems to correspond with the QM predictions here:
http://arxiv.org/pdf/quant-ph/9905018v1.pdf

I know it's "only" arxiv, but the maths look sound...
 
gespex said:
Now it seems quite doable to formulate a theory that contains regular, classical hidden variables that match CH and CHSH in all correlations ... Simply by assuming there exist hidden variables that determine the chance that a particle is detected [at various angles]

It's harder than it sounds, because what matters is the relative angle between the two detectors. The hidden variable theory would have to produce the same result with A at zero degrees and B at 120 degrees as with A at 10 degrees and B at 130 degrees; and it would have to produce the same result with A at zero degrees and B at zero degrees as it would with A at 10 degrees and B at 10 degrees... and if the theory is to be local the probability of detection at A must be written as a function of just the hidden variables and the setting of A; that is, it must be the same for all the cases in which A is set to ten degrees or zero degrees or any other value.
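
A minimal sketch in Python of the structural constraint being described here (my own toy rule, not anyone's specific proposal): each side's outcome is a function of the shared hidden variable and its own setting only, and with every emitted pair counted such a model cannot push CHSH above 2, whereas quantum mechanics predicts 2*sqrt(2) ~ 2.83 at the standard angles. This particular rule happens to sit right at the classical bound:

Code:
import math
import random

def outcome_A(lam, a):
    # Side A may look only at the shared hidden variable lam and its own setting a.
    return 1 if math.cos(2 * (a - lam)) >= 0 else -1

def outcome_B(lam, b):
    # Side B likewise sees only lam and its own setting b.
    return 1 if math.cos(2 * (b - lam)) >= 0 else -1

def E(a, b, n=200_000):
    # Correlation with every emitted pair counted (no missed detections).
    total = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)  # shared hidden polarization angle
        total += outcome_A(lam, a) * outcome_B(lam, b)
    return total / n

a1, a2 = 0.0, math.radians(45)
b1, b2 = math.radians(22.5), math.radians(67.5)
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"CHSH S = {S:.3f}")  # ~2.0 here; no local model of this form exceeds 2
                            # when all pairs count, while QM gives 2*sqrt(2) ~ 2.83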
 
Nugatory said:
It's harder than it sounds, because what matters is the relative angle between the two detectors. The hidden variable theory would have to produce the same result with A at zero degrees and B at 120 degrees as with A at 10 degrees and B at 130 degrees; and it would have to produce the same result with A at zero degrees and B at zero degrees as it would with A at 10 degrees and B at 10 degrees... and if the theory is to be local the probability of detection at A must be written as a function of just the hidden variables and the setting of A; that is, it must be the same for all the cases in which A is set to ten degrees or zero degrees or any other value.

Thanks for your reply!
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to have such a local hidden variable model, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?
 
While you're waiting for a response to your question, I'd just like to point out that a forum search for "Gisin" (use the "Search this forum" link at the top right of the thread list) turns up a number of hits, some of which probably discuss this paper.
 
gespex said:
Thanks for your reply!
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to have such a local hidden variable model, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?

You must understand that if you are to reproduce the QM results, you need something somewhere to be sensing the global context or otherwise be leaving some kind of trace. If it is a global context, then it is not local. If there is some other trace, then it is experimentally falsifiable.

In this case, the point is that the likelihood of detection depends on the relationship between the hidden angle and the angle being measured. Were that the case, then the total intensity of light emerging from a polarizing beam splitter would vary with the input angle. That doesn't happen, ergo it is falsified before you start.

After his Eq. (2):

"when a happens to be close to λ_A then the probability that an outcome is produced is larger than when a happens to be nearly orthogonal to λ_A."

There is always something like this hanging around (and of course it is not a prediction of QM). Has to be, because of Bell!
 
gespex said:
I know it's "only" arxiv, but the maths look sound...
[...]
Though the article I provided (http://arxiv.org/pdf/quant-ph/9905018v1.pdf) seems to have such a local hidden variable model, which completely agrees with QM's predictions.

So why is it still assumed that there is some sort of collapse, rather than that very model being true?

Some comments. First, it is not just on arXiv. It has been published in Physics Letters A, Volume 260, Issue 5, 20 September 1999, Pages 323–327 (http://dx.doi.org/10.1016/S0375-9601(99)00519-8).

Second, I do not know what you are reading into the article, but Gisin is an opponent of local hidden variable theories. However, as this article dates back to 1999, one should rather read it as giving an estimate of the detector quantum efficiencies one would need in order to close the detection efficiency loophole in experiments on entanglement. Actually, that is rather clever: once he had outlined the importance of these loopholes, he went on to perform experiments closing them and published those in high-impact journals.
 
Thanks for your answers, guys! Really appreciated, but I have a few questions left.

@DrChinese:
But isn't λA completely random per particle? Yes, if it's close to the measured angle it has a greater chance to be measured (at one side, not at the other). However, the angle between λA and a is completely random. So given a large number of particles the chance of an outcome being produced is the same for every angle, being 50% for half of the entangled particles and 100% for the other half, so on average 75%. That's completely independent of the actual detected angle, right?

I actually wrote a program to simulate a similar model. I made one function for each part of the experiment, to prevent some small bug from accidentally letting one detector influence the other particle or detector. It reproduced (within very reasonable margins) the same outcomes as CHSH and CH74, and as this paper. The number of particles measured was constant for every measured angle.
(Unfortunately, I no longer have this program, but it seems to follow logically from this paper anyway)
Or am I missing something here?
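
For what it's worth, here is a minimal sketch in Python of the kind of simulation gespex describes. The acceptance rule (detection probability |cos 2(a - λ)| on one side only) is my own illustrative choice in the spirit of detection-loophole models, not necessarily the paper's exact construction or gespex's lost program. Each side decides its outcome and whether it fires from the shared hidden angle and its own setting alone; among detected pairs the correlation comes out as cos 2(a - b) and CHSH lands near 2*sqrt(2), while the detection rate is the same (about 2/pi) for every analyzer setting.

Code:
import math
import random

def side_A(lam, a):
    # Side A uses only the shared hidden angle lam and its own setting a;
    # it sometimes fails to register a count, with a lam-dependent probability.
    c = math.cos(2 * (a - lam))
    outcome = 1 if c >= 0 else -1
    detected = random.random() < abs(c)  # illustrative acceptance rule
    return outcome, detected

def side_B(lam, b):
    # Side B always registers a count; its outcome depends only on lam and b.
    return 1 if math.cos(2 * (b - lam)) >= 0 else -1

def E_detected(a, b, n=400_000):
    # Correlation and detection rate, computed only over pairs where A fired.
    corr = hits = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)  # shared hidden polarization angle
        A, ok = side_A(lam, a)
        if ok:
            hits += 1
            corr += A * side_B(lam, b)
    return corr / hits, hits / n

a1, a2 = 0.0, math.radians(45)
b1, b2 = math.radians(22.5), math.radians(67.5)
(E1, r1), (E2, _), (E3, _), (E4, _) = (E_detected(a1, b1), E_detected(a1, b2),
                                       E_detected(a2, b1), E_detected(a2, b2))
S = E1 - E2 + E3 + E4
print(f"E(a,b) ~ {E1:.3f}   (QM for these settings: cos 45 deg ~ 0.707)")
print(f"CHSH over detected pairs: S ~ {S:.3f}   (QM: 2*sqrt(2) ~ 2.83)")
print(f"A-side detection rate ~ {r1:.3f}, independent of the settings (~ 2/pi)")

A model like this violates fair sampling (the detected subset is not representative of the emitted pairs, even though the overall count rate never changes with the settings), which is exactly why that assumption matters in the CHSH and CH74 derivations.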

@Cthugha:
So have people done the experiment with a detector efficiency of >75% yet? And also, how can we be so certain that this missed detection isn't inherent to detecting it at all? In other words, how can we measure the real detector efficiency for particles that simply *can't* be measured? We can't put in another detector, as it will suffer from the same flaws, right? Can we tell some other way?
The only way I can think of to be certain is by knowing exactly how many particles should be measured. As it may be inherently impossible to measure this number, to tell for sure we must know exactly how many particles will be emitted by some source. But how can we know this for sure, if we have no way to actually test it? We may simply be wrong there...
Or is there some other method here?

Again, thanks for your replies, guys, I really appreciate it! I hope I'm not being ignorant here; as I said, I'm not an expert, just a programmer with some interest in QM.
 
gespex said:
So have people done the experiment with a detector efficiency of >75% yet?

Sure, this was done as early as 2001, using ions instead of photons, in David Wineland's group (Nature 409, 791-794 (15 February 2001)). For ions the detection efficiency is essentially 100%.

gespex said:
And also, how can we be so certain that this missed detection isn't inherent to detecting it at all? In other words, how can we measure the real detector efficiency for particles that simply *can't* be measured?

Well, if it cannot be detected at all, it does not exist. If you instead consider an unfair sampling scenario, you can rule that out by using efficient detectors. At high efficiencies, the predictions of standard entanglement and of unfair sampling models differ, and that difference is testable (as in the paper mentioned above).
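
To put a number on that crossover, here is a quick back-of-envelope check (my own, assuming the CH74 inequality, an ideal maximally entangled polarization state, equal efficiency eta on both sides, and no background): the inequality can only be violated once eta exceeds 2/(1+sqrt(2)) ~ 82.8%. With non-maximally entangled states the requirement drops to about 2/3 (Eberhard, 1993), which is why the exact figure one quotes depends on the scheme.

Code:
import math

def ch74(eta, a1, a2, b1, b2):
    # CH74 combination P(a,b) - P(a,b') + P(a',b) + P(a',b') - P(a') - P(b)
    # for ideal polarization-entangled pairs with detection efficiency eta per side.
    # Local hidden variable theories (without fair sampling) keep this <= 0.
    P = lambda x, y: eta**2 * 0.5 * math.cos(x - y) ** 2  # coincidence probability
    single = eta * 0.5                                     # singles probability
    return P(a1, b1) - P(a1, b2) + P(a2, b1) + P(a2, b2) - 2 * single

a1, a2, b1, b2 = 0.0, math.radians(45), math.radians(22.5), math.radians(67.5)
for eta in (0.70, 0.80, 0.83, 0.90, 1.00):
    val = ch74(eta, a1, a2, b1, b2)
    print(f"eta = {eta:.2f}: CH = {val:+.4f} -> {'violated' if val > 0 else 'no violation'}")

print("threshold efficiency:", 2 / (1 + math.sqrt(2)))  # ~0.828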

gespex said:
We can't put in another detector, as it will suffer from the same flaws, right? Can we tell some other way?

Well, you can obviously determine the total output within some time interval by placing many detectors or by using non-linear detectors.

gespex said:
The only way I can think of to be certain is by knowing exactly how many particles should be measured. As it may be inherently impossible to measure this number, to tell for sure we must know exactly how many particles will be emitted by some source.

I see no problem with measuring the total output power and the statistics of the light field in question.

gespex said:
But how can we know this for sure, if we have no way to actually test it? We may simply be wrong there...
Or is there some other method here?

Also, you can calibrate your detector and find out its quantum efficiency. This can be done by using down-converted light or by subjecting light to a non-linearity (such as second harmonic generation). You can also perform detector tomography (Nature Physics 5, 27-30 (2009)) to fully characterize your detector.
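
As an illustration of the down-conversion route, the idea behind the twin-photon (Klyshko) calibration fits in a few lines; the count rates below are made-up numbers purely for illustration, and a real calibration also corrects for optical losses, accidental coincidences and dark counts.

Code:
def klyshko_efficiency(coincidences, partner_singles, accidentals=0.0):
    # Twin-photon (Klyshko) calibration: in parametric down-conversion every
    # photon reaching detector B has a partner headed for detector A, so the
    # fraction of B's counts that also trigger A estimates A's efficiency.
    return (coincidences - accidentals) / partner_singles

# Hypothetical count rates, purely for illustration:
eta_A = klyshko_efficiency(coincidences=12_000, partner_singles=60_000)
print(f"estimated efficiency of detector A ~ {eta_A:.0%}")  # ~20%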
 
