The Efficiency Loophole: A Local Hidden Variables Theory?


Discussion Overview

The discussion revolves around the concept of local hidden variables theories in the context of quantum mechanics, specifically addressing the efficiency loophole and the detection loophole. Participants explore whether a local hidden variable theory can be formulated if certain hidden properties of particles affect their detectability, and how this relates to existing quantum mechanical predictions.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • Some participants propose that if an electron in an entangled pair has more than two plans for measurement outcomes, it may lead to a local hidden variable theory, though the specifics of how many plans are necessary remain unclear.
  • Others argue that the detection loophole implies a hidden property affecting whether an electron can be measured, questioning if a local hidden variable theory can be valid under these conditions.
  • A participant notes that if a detection-loophole explanation were correct, increasing detection efficiency should lead to results aligning more closely with local realism; yet experimental data continues to support quantum mechanics.
  • Some contributions highlight the ongoing efforts to improve photon detector efficiencies and the implications for closing the detection loophole in Bell-type experiments.
  • Concerns are raised about the validity of experiments that claim to close loopholes, particularly regarding the potential for measurement crosstalk and the necessity of falsifiable hypotheses in scientific methodology.

Areas of Agreement / Disagreement

Participants express differing views on the implications of the detection loophole and the feasibility of local hidden variable theories. There is no consensus on whether a local hidden variable theory can be consistently valid or how it relates to quantum mechanics.

Contextual Notes

Some discussions reference the need for improved detection methods and the complexities involved in ensuring that experimental setups can adequately test the hypotheses regarding local realism and quantum mechanics.

Who May Find This Useful

This discussion may be of interest to those studying quantum mechanics, particularly in the areas of entanglement, hidden variable theories, and experimental physics related to Bell's inequalities.

mach567
If we assume that an electron in an entangled pair has more than 2 plans (plans that determine whether the electron goes up or down through a magnet) to choose from, can we create a local hidden variable theory? If this is true, how many plans would an electron need to choose from for this to work?

thanks,
mach
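mach's "plans" are what the literature calls deterministic local strategies: each particle carries a pre-assigned outcome for every possible setting. For a CHSH-type test (two settings per side, outcomes ±1) these strategies can be enumerated by brute force, and no number of plans, nor any mixture of them, pushes the CHSH combination past 2. A minimal Python sketch (the setting count and names are illustrative, not from the thread):

```python
from itertools import product

# A deterministic local "plan" fixes an outcome (+1 or -1) in advance
# for each of the two possible magnet settings on that side.
plans = list(product([+1, -1], repeat=2))  # 4 plans per particle

def chsh(alice_plan, bob_plan):
    """CHSH combination E(a0,b0) + E(a0,b1) + E(a1,b0) - E(a1,b1);
    for one deterministic plan pair each E is just a product of outcomes."""
    a0, a1 = alice_plan
    b0, b1 = bob_plan
    return a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

# Scan all 16 plan pairs for the largest attainable |CHSH| value
best = max(abs(chsh(a, b)) for a, b in product(plans, plans))
print(best)  # 2 -- the local realistic (Bell) bound
```

Any probabilistic mixture over plans yields a convex combination of these 16 values, so it is also bounded by 2; that is why simply adding more plans cannot reproduce the quantum prediction of 2√2 ≈ 2.83.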
 
Your question is not clear to me. What does the efficiency loophole have to do with this? (I assume by efficiency loophole you mean the detection loophole.)

I might remind you that if there were a detection loophole, then QM would be wrong as to its predictions.
 
Hmmm maybe this is a better way to phrase the question. The detection loophole states that there could be a hidden property in an electron that determines if the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if this context is true? Can the theory be right all the time?
 
mach567 said:
Hmmm maybe this is a better way to phrase the question. The detection loophole states that there could be a hidden property in an electron that determines if the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if this context is true? Can the theory be right all the time?

There have been groups attempting to formulate things like this, to varying degrees of success. There really isn't much place to go with anything at this point, since the resulting theory would be at variance with QM in numerous essential ways.

The fact is that as you increase visibility (i.e. as the loophole shrinks because you can detect a larger %) you should get further away from the QM values and closer to the LR boundary. But that hasn't happened, instead the experimental values remain firmly in the QM fold. In fact, there are some tests that close this loophole:

Experimental violation of a Bell's inequality with efficient detection

"Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away. Einstein, Podolsky and Rosen used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories. Many experiments have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome ‘loopholes’ that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (9Be+ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's ‘signal’ is 2.25 ± 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called ‘detection’ loophole."
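For context on the quoted numbers: the local realistic maximum for this Bell "signal" is 2, while an ideal quantum singlet gives 2√2 ≈ 2.83; the measured 2.25 falls between the two because of experimental imperfections. A textbook-level check, assuming the singlet correlation E(a, b) = −cos(a − b) and the standard optimal CHSH angles (these values are textbook choices, not taken from the paper):

```python
import math

def E(a, b):
    """Quantum correlation for the spin-1/2 singlet at analyzer angles a, b."""
    return -math.cos(a - b)

# Standard optimal CHSH settings (radians)
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))
print(S)  # ~2.828, i.e. 2*sqrt(2): the quantum maximum, above the bound of 2
```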
 
Thanks for the reply, DrChinese. You have been a great help! I am relatively new here, but already I love this forum. Not only is everyone here knowledgeable...but they are willing to take the time out of their day and share that knowledge. That is exactly how academics should be. Much appreciated!
 
mach567 said:
Hmmm maybe this is a better way to phrase the question. The detection loophole states that there could be a hidden property in an electron that determines if the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if this context is true? Can the theory be right all the time?
If a photon (not an electron, as there are no EPR-type experiments with electrons) has a context-independent property that determines its "detectability", it should still obey Bell inequalities.
To talk about some viable local hidden variable theory, "detectability" should be context dependent, i.e. it should result in unfair sampling.
 
Just in case people missed it, check out the article in the March 18, 2011 issue of Science (p. 1380). It's a very concise summary of the state of the art in Bell-type experiments and the drive towards closing the detection and locality loopholes in that type of experiment.

Zz.
 
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that allows one to dispense with the fair sampling assumption is actually the coincidence count rate to singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow one to reach higher coincidence count rates to singlet count rates without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.
 
zonde said:
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that allows one to dispense with the fair sampling assumption is actually the coincidence count rate to singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow one to reach higher coincidence count rates to singlet count rates without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.

I don't follow this. It is the detection of both members of a pair that we seek. Many times, only one of a pair is seen, or alternatively, the pair is not sufficiently coincident for us to pair them.
 
  • #10
zonde said:
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that allows one to dispense with the fair sampling assumption is actually the coincidence count rate to singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow one to reach higher coincidence count rates to singlet count rates without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.

You should read the article. It isn't just a matter of coming up with efficient photon detectors. That loophole can already be closed with Bell-type experiments that did not use photons.

Zz.
 
  • #11
Link to the article, requires a subscription:

http://www.sciencemag.org/content/331/6023/1380.short
 
  • #12
ZapperZ said:
You should read the article. It isn't just a matter of coming up with efficient photon detectors. That loophole can already be closed with Bell-type experiments that did not use photons.
I am not going to pay for some article that does not say anything new.
Judging from the excerpt that appears in your blog, the author is talking about this type of experiment:
http://arxiv.org/abs/0801.2184

Now let's assume that Zukowski (who is mentioned in that article) fails to observe a violation of Bell inequalities with a setup in two neighboring labs. There are a lot of things that can fail, right?
And then the question is: how do you know that the experiment was successful despite this? And that there simply isn't any "spooky action at a distance", apart from some quite realistic measurement crosstalk?

And if you (or Zukowski) do not have an answer to this question, then the scientific method goes out the window.
Because the scientific method requires that experiments be set up in such a way that they can falsify the tested hypothesis. So you have to have a success criterion independent of successful observation of the expected phenomenon.

But in the case of photon experiments, this coincidence count rate to singlet count rate is exactly such an independent success criterion. Therefore photon experiments are much better from the perspective of the scientific method, imho.

So if we talk about these ion experiments, you can only silently hope that they will be successful, without loudly announcing them as a final test of local realism. They are not designed to be regarded as such.
 
  • #13
DrChinese said:
I don't follow this. It is the detection of both members of a pair that we seek. Many times, only one of a pair is seen, or alternatively, the pair is not sufficiently coincident for us to pair them.
I do not understand your question.
We assume that the source always produces photons in pairs. So if we observe only a singlet, then the other photon from the pair has been lost along the way. If we want to test that this loss of photons is not somehow biased, we would want to vary (increase) the rate of paired photons versus unpaired photons. And that is the coincidence count rate to singlet count rate.
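To make the quantity concrete: under an idealized model where a source emits R pairs per second and each arm detects its photon independently with efficiency η, one arm registers singles at rate Rη while coincidences occur at rate Rη², so the coincidence-to-singles ratio estimates η directly. A minimal sketch (no background counts and equal arm efficiencies are assumptions, and "singles" here counts every click on one arm):

```python
def rates(R, eta):
    """Idealized pair source: R pairs per second, each arm detects its
    photon independently with efficiency eta (no backgrounds, equal arms)."""
    singles = R * eta              # click rate on one arm (paired or not)
    coincidences = R * eta ** 2    # rate at which both arms fire together
    return singles, coincidences

singles, coinc = rates(R=100_000, eta=0.5)
print(coinc / singles)  # 0.5 -- the ratio recovers the efficiency directly
```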
 
  • #14
zonde said:
Now let's assume that Zukowski (who is mentioned in that article) fails to observe a violation of Bell inequalities with a setup in two neighboring labs. There are a lot of things that can fail, right?

Wait, you're already prejudging and making such an assumption on the outcome of an experiment that has yet to be completed? And then you dare talk about the "scientific method"?

Again, I haven't seen ANY of your rebuttals to the Bell-type tests appearing in peer-reviewed publication. Or do you not consider such a publication as part of the "scientific method"?

I'm not sure why I even bother responding here...

Zz.
 
  • #15
zonde said:
I do not understand your question.
We assume that the source always produces photons in pairs. So if we observe only a singlet, then the other photon from the pair has been lost along the way. If we want to test that this loss of photons is not somehow biased, we would want to vary (increase) the rate of paired photons versus unpaired photons. And that is the coincidence count rate to singlet count rate.

Singlet might not be the best term to use in this context, as it implies something else entirely. If you have a pair of entangled photons headed for Alice and Bob, and there is 50% detector efficiency, I would expect that we would get a ratio of 1 pair and 2 mismatches (what you call singlets) on the average. That's for every 4 pairs, since occasionally neither photon in a pair is detected.

Now, if efficiency goes to 90%, I would expect that we would get a ratio of about 4 pairs to 1 mismatch on the average. I believe that is high enough to get past the detection (fair sampling) loophole. That is somewhat dependent on the actual results though.
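These ratios follow from assuming each photon is detected independently with probability η; a quick check of the arithmetic:

```python
def pair_stats(eta):
    """Probabilities per emitted pair, assuming each photon is detected
    independently with probability eta."""
    both = eta * eta               # both detected: a usable pair
    one = 2 * eta * (1 - eta)      # exactly one detected: a mismatch
    neither = (1 - eta) ** 2       # neither detected: pair unseen
    return both, one, neither

# eta = 50%: per 4 emitted pairs, expect 1 pair, 2 mismatches, 1 unseen
print([4 * p for p in pair_stats(0.5)])  # [1.0, 2.0, 1.0]

# eta = 90%: pairs outnumber mismatches 0.81 : 0.18, i.e. 4.5 to 1
both, one, _ = pair_stats(0.9)
print(both / one)  # ~4.5, close to the "about 4 to 1" estimate above
```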
 
  • #16
ZapperZ said:
I'm not sure why I even bother responding here... Zz.
From your PF blog:
" ... PF is not like any other forum. Not only do we like to convey to you the knowledge of science, but we would also like to give you an idea of the workings of science."

You've cited in this thread an article from Science magazine, a great source for articles and insights regarding the workings of science.
(By the way, your blog, and webpage, is interesting and informative, and a great resource.)

The problem isn't the science surrounding Bell's theorem, it's the language surrounding the interpretation1 of Bell's theorem. It's murky, and the Science article adds to, rather than clarifies, the murkiness. The domain of science is sensory experience, and that domain can't be extended by reifying conceptions of the reality underlying instrumental behavior and then comparing formalisms to those reifications. We're either comparing competing formalisms to each other or to instrumental behavior. There isn't any underlying reality in our sensory purview to compare either instrumental behavior or formalism to.

1 QM comparison with experiment, i.e., the science surrounding Bell's theorem, is a straightforward sensory comparison, and QM has passed those tests so far. Identification and effective widespread communication of logical interpretational loopholes of Bell's theorem is an ongoing process that's followed a more circuitous route.
 
  • #17
zonde said:
Now let's assume that Zukowski (who is mentioned in that article) fails to observe a violation of Bell inequalities with a setup in two neighboring labs. There are a lot of things that can fail, right?
And then the question is: how do you know that the experiment was successful despite this?
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.

zonde said:
And that there simply isn't any "spooky action at a distance" apart from some quite realistic measurement crosstalk?
You can't.

zonde said:
And if you (or Zukowski) do not have answer for this question then scientific method is out of the window.
Scientific method has to do with sensory phenomena, not with underlying reality.

zonde said:
Because scientific method requires that experiments should be set up in such a way that they can falsify tested hypothesis.
And scientific hypotheses always and only have to do with sensory phenomena. Formalism compared to instrumental behavior.

zonde said:
So you have to have success criteria independent from successful observation of expected phenomenon.
If the interpretational language isn't properly clarified, then yes that's possible. There's no problem with the science re Bell's theorem. Only its interpretation.

zonde said:
So if we talk about these ion experiments you can only silently hope that they will be successful without loudly announcing it as final test of local realism. They are not designed to be regarded as such.
Localism and realism refer to formal constraints, no more and no less, which can be scientifically tested.
 
  • #18
DrChinese said:
Singlet might not be the best term to use in this context, as it implies something else entirely. If you have a pair of entangled photons headed for Alice and Bob, and there is 50% detector efficiency, I would expect that we would get a ratio of 1 pair and 2 mismatches (what you call singlets) on the average. That's for every 4 pairs, since occasionally neither photon in a pair is detected.
Maybe singlet is not the best term. So we can say single detections.

DrChinese said:
Now, if efficiency goes to 90%, I would expect that we would get a ratio of about 4 pairs to 1 mismatch on the average. I believe that is high enough to get past the detection (fair sampling) loophole. That is somewhat dependent on the actual results though.
We can form a null hypothesis like this: an increase in efficiency (coincident detection rate to single detection rate) does not affect the visibility of correlations.
To test this hypothesis we do not need very high efficiencies. We just have to make considerable variations in efficiency. Say, if we usually have a coincident-to-single detection rate around 10%, we can try to raise it to 20% and test this null hypothesis by comparing the two efficiencies in a controlled experiment.

That way you don't have to wait for the technology for that ultimate test, only to find out that, say, improvements in some other technology are required as well. Not to mention the possibility that this null hypothesis might fail at the present level of technology.
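This null hypothesis is easy to simulate under the fair sampling assumption itself: if detection is independent of the hidden state and the settings, the correlation estimated from coincidences alone should not depend on the efficiency. A hedged Monte Carlo sketch (the cos 2θ photon correlation and independent detection are modeling assumptions, not data):

```python
import math
import random

def corr_from_coincidences(eta, theta, n=200_000, seed=1):
    """Simulate n photon pairs whose outcomes have quantum correlation
    E = cos(2*theta), detect each photon independently with probability
    eta (the fair sampling assumption), and estimate E from coincidences."""
    rng = random.Random(seed)
    p_same = 0.5 * (1 + math.cos(2 * theta))  # P(Alice's outcome == Bob's)
    coinc, agree = 0, 0
    for _ in range(n):
        a = 1 if rng.random() < 0.5 else -1
        b = a if rng.random() < p_same else -a
        if rng.random() < eta and rng.random() < eta:  # both photons detected
            coinc += 1
            agree += (a == b)
    return (2 * agree - coinc) / coinc  # estimate of E(theta)

low = corr_from_coincidences(eta=0.10, theta=0.3)
high = corr_from_coincidences(eta=0.20, theta=0.3)
# Both estimates sit near cos(0.6) ~ 0.825 regardless of efficiency:
# under fair sampling, zonde's null hypothesis holds by construction.
print(low, high)
```

A context-dependent detection model (unfair sampling) would break this invariance, which is exactly what varying the efficiency in a real experiment would probe.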
 
  • #19
ThomasT said:
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.
You mean an established formalism independent of the hypothesis to be tested?
And how do you know when you should doubt the execution of the experiment? You need some criterion independent of the hypothesis to be tested. And that is exactly what I was saying.

ThomasT said:
You can't.
You can demonstrate it to be superfluous.

ThomasT said:
Scientific method has to do with sensory phenomena, not with underlying reality.
You lost me here.
For me it seems that you are pushing your line completely out of context.
 
  • #20
zonde said:
You mean an established formalism independent of the hypothesis to be tested?
The formalism is the hypothesis to be tested.

zonde said:
And how do you know when you should doubt the execution of the experiment?
When results differ from predictions -- it could mean that the experimental preparation and execution isn't fully in accordance with the formalism, or it could mean that the formalism is flawed in some other way.

zonde said:
You need some criterion independent of the hypothesis to be tested.
The hypothesis is the formalism. There isn't any criterion independent of that that's being tested.

zonde said:
You can demonstrate it to be superfluous.
Here we're talking about interpretations associated with the formalism. And yes, those can be demonstrated to be superfluous to the question of whether a particular formalism is compatible with a particular experimental design and preparation -- as is the case with the conventional interpretation of Bell's theorem.

ThomasT said:
Scientific method has to do with sensory phenomena, not with underlying reality.
zonde said:
You lost me here.
Why would that lose you? Is there some domain other than our sensory experience that science applies to?

ThomasT said:
For me it seems that you are pushing your line completely out of context.
The science is experiments testing formalisms. Sure, one can infer, speculate and interpret based on some associated conception of an underlying reality. But that isn't the science. It's the philosophy associated with the science, which is, even though it might be used as a guide to building mathematical models which can be tested, superfluous to the science precisely because we have no direct sensory access to a reality underlying instrumental behavior.

Edit: Looking at the standard qm formalism, it's evident that it isn't about classical objects in classical space. Comparing that formalism with the LR formalism, the disparity between the two has become clear (nonseparability vs separability), and it's also clear that that disparity has nothing to do with classical objects in classical space. Hence, the experimental success of the qm formalism and lack thereof the LR formalism tells us nothing about what does or doesn't exist in the underlying reality.
 
  • #21
A quote from 'Beyond Measure':
In the case of the efficiency loophole, we could choose to reject the assumption that the small proportion of photon pairs detected represents a fair sample of the total and argue instead that the experiments are biased in favour of those photon pairs that deliver results in accordance with the quantum-theory predictions. We would have to suppose that, whilst the sub-ensemble of detected photon pairs violates the generalised Bell's inequality, the total ensemble does not. A local hidden-variables theory which, because of data rejection, predicts the same measurement outcomes as quantum theory was first devised by Philip Pearle in 1970. In a more recent model, Nicolas Gisin and B. Gisin described a local hidden-variable theory in which the variables themselves determined the efficiency of the detectors. The theory explained the measured (quantum) correlations whilst at the same time remaining true to Bell's inequality.
 
  • #22
ZapperZ said:
Again, I haven't seen ANY of your rebuttals to the Bell-type tests appearing in peer-reviewed publication.

Peer review can always be biased. Not every kind of work gets published.
 
  • #23
StevieTNZ said:
Peer review can always be biased. Not every kind of work gets published.

How would YOU know?

Zz.
 
  • #24
StevieTNZ said:
A quote from 'Beyond Measure':

All of the purported models exploiting the fair sampling loophole have severe issues themselves. Keep in mind that there must exist some function which causes the bias such that the "true" correlation rate (presumably linear) is hidden and only the QM expectation value is seen. That function ends up having a very strange shape and gets stranger still as detection efficiency rises. This in turn leads to physical predictions which become progressively more ad hoc.

Rather than quoting the existence of some model, perhaps you would care to cite an actual model that is currently on the table (i.e. not already refuted). I mean, there are already experiments in which the fair sampling loophole has been closed. So in many ways this discussion is moot. If all events are detected and yield results past the Bell Inequality, what is the point of saying "maybe there is a fair sampling loophole"?

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html
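There is also a standard quantitative version of this point: for a symmetric CHSH test with no background counts, using coincidences only, the local realistic bound inflates from 2 to 4/η − 2 at detection efficiency η (a Garg-Mermin-style result), so the quantum value 2√2 can beat it only when η exceeds 2/(1 + √2) ≈ 82.8%. A quick check of those numbers:

```python
import math

def lhv_bound(eta):
    """Local realistic CHSH bound on coincidence-only data at per-arm
    efficiency eta (symmetric arms, no background -- standard assumptions)."""
    return 4 / eta - 2

eta_crit = 2 / (1 + math.sqrt(2))  # threshold efficiency, ~0.8284
qm = 2 * math.sqrt(2)              # quantum CHSH maximum

print(round(eta_crit, 4))          # 0.8284
print(lhv_bound(0.90) < qm)        # True: at 90% the loophole can be closed
print(lhv_bound(0.70) < qm)        # False: at 70% a LHV model can mimic QM
```

At η = η_crit the inflated bound equals 2√2 exactly, which is why experiments below roughly 83% efficiency must invoke the fair sampling assumption.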
 
  • #25
ZapperZ said:
How would YOU know?

Zz.

It is a plausible explanation.
 
  • #26
StevieTNZ said:
It is a plausible explanation.

Not to me it isn't. Thus my question on how would YOU know?

Zz.
 
  • #27
ThomasT said:
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.
ThomasT said:
The formalism is the hypothesis to be tested.
So you say that an experiment is successful if it agrees with the hypothesis to be tested.

That definitely is not the scientific method.
The scientific method requires that an experiment can falsify the hypothesis to be tested.
So we should have three possible outcomes of an experiment:
1. The experiment is successful and the results agree with the prediction derived from the hypothesis.
2. The experiment is successful and the results disagree with the prediction derived from the hypothesis.
3. The experiment is unsuccessful. In this case we can try to improve the experimental setup and try again.

ThomasT said:
The science is experiments testing formalisms. Sure, one can infer, speculate and interpret based on some associated conception of an underlying reality. But that isn't the science. It's the philosophy associated with the science, which is, even though it might be used as a guide to building mathematical models which can be tested, superfluous to the science precisely because we have no direct sensory access to a reality underlying instrumental behavior.
In Wikipedia the scientific method (http://en.wikipedia.org/wiki/Scientific_method) is briefly formulated this way:
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.

As I understand it, you are saying that science is limited to point 4 from this list.
But points 1, 2 and 3 are part of the method. And in many cases point 2 is about making some speculation about how one can model the underlying reality.
 
  • #28
ON THE SCIENTIFIC METHOD

zonde said:
So you say that an experiment is successful if it agrees with the hypothesis to be tested.

That definitely is not the scientific method.
The scientific method requires that an experiment can falsify the hypothesis to be tested.
So we should have three possible outcomes of an experiment:
1. The experiment is successful and the results agree with the prediction derived from the hypothesis.
2. The experiment is successful and the results disagree with the prediction derived from the hypothesis.
3. The experiment is unsuccessful. In this case we can try to improve the experimental setup and try again.
I agree that your exposition is essentially correct. Mine was incomplete, and I apologize.

I want to emphasize that experiments are testing formalisms, and that the formalisms can't, scientifically, be definitively associated with any conception of a reality that's beyond our sensory experience. So, I'll propose a slight rewrite of your essentially correct exposition, after a brief consideration of Bell and Bell tests.

Bell compared two competing formalisms, standard qm and LR-supplemented/interpreted standard qm, and proved that they're incompatible. An experimental test of Bell's theorem entails the construction of an inequality based on the specific design and preparation of the test. It provides a quantitative measure of the compatibility of each of the competing formalisms with that experiment, as well as between the competing formalisms for that experiment.

Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

So, we might rewrite your exposition as:

Scientific method requires that an experiment can falsify the hypothesis to be tested.
So we should have three possible outcomes of an experiment:
1. The experiment is not flawed and the results agree with the formal hypothesis.
2. The experiment is not flawed and the results disagree with the formal hypothesis.
3. The experiment is flawed because the formal hypothesis is based on assumptions which haven't been scientifically demonstrated to hold for that experiment, or for some other reason.

zonde said:
In Wikipedia the scientific method is briefly formulated this way:
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.

As I understand it, you are saying that science is limited to point 4 from this list.
Yes, what I'm calling the strictly science part of the scientific method (to differentiate it from conjecture and logic) is limited to point 4, the sensory comparison of formalism with results.

zonde said:
But points 1., 2. and 3. are part of the method. And in many cases point 2. is about making some speculation how one can model underlying reality.
Yes, points 1., 2., and 3. are part of the scientific method. Thanks for clarifying.

---------------------------------ON WHY BELL'S THEOREM AND BELL TESTS PROVE NOTHING ABOUT A REALITY BEYOND OUR SENSORY EXPERIENCE

Even if Bell test loopholes are closed, the experiments will not inform us that the correlations can't be due to relationships traced to local common causes, and/or that nature can't be local -- because 1) the domain of science is limited to our sensory experience, 2) the only thing that the experiments might inform us, definitively, about is that a particular formalism is incompatible with a particular experimental design and preparation, and 3) the salient features of the qm treatment of entanglement not only aren't at odds with, but stem from the applicability of the classical conservation laws and Malus' Law.

So the only thing that Bell tests can ever be said to show is that the formal separability of Bell LR is incompatible with the formal nonseparability of standard qm vis-à-vis the design and preparation nonseparability of Bell tests.

The key point, and what the conventional literature obfuscates, is that the formal incompatibility doesn't preclude an informal classical understanding/explanation for entanglement correlations based on principles which hold in the 3D space and time of our sensory experience.

Experimental tests (related to Bell's logical demonstration of the incompatibility between a Bell LR modified expectation value formalism and the standard qm formalism) allow us to say only that it remains an open question as to whether the reality beyond our sensory experience is local or nonlocal. And since our sensory experience accords with an exclusively local reality, then we retain the assumption that nature is local.
 
  • #29
ThomasT said:
ON THE SCIENTIFIC METHOD

<SNIP>

Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

<SNIP>

I've not yet studied your exposition in depth, BUT (thus far), I believe the above comments are unnecessary -- even misleading -- in ANY discussion of Bell's Theorem.

I've emphasized "misleading" because, in my experience, and according to my studies: They provide an invalid loop-hole through which too many "local realists" escape -- or seek to -- thereby avoiding the need for critical study.

NB: Not that the "imperfections" don't exist: BUT that the results will not change to any significant extent.

Thus -- whatever your case -- removal of these invalid (IMHO) diversionary loop-holes may strengthen it.

Just my quick 2c. for now.
 
  • #30
Gordon Watson said:
I've not yet studied your exposition in depth, BUT (thus far), I believe the above comments are unnecessary -- even misleading -- in ANY discussion of Bell's Theorem.

I've emphasized "misleading" because, in my experience, and according to my studies: They provide an invalid loop-hole through which too many "local realists" escape -- or seek to -- thereby avoiding the need for critical study.

NB: Not that the "imperfections" don't exist: BUT that the results will not change to any significant extent.

Thus -- whatever your case -- removal of these invalid (IMHO) diversionary loop-holes may strengthen it.

Just my quick 2c. for now.
I agree that the results won't change to any significant extent. QM will be affirmed, and Bell LR will be ruled out. However, as long as an inequality pertaining to an experiment is based on an assumption not verified in that experiment, then the experiment isn't definitive. This is why the applied scientists are working toward producing an unarguably loophole free optical Bell test.
 
