The Efficiency Loophole: A Local Hidden Variables Theory?

mach567
If we assume that an electron in an entangled pair has more than 2 plans (plans that determine whether the electron goes up or down through a magnet) to choose from, can we create a local hidden variable theory? If so, how many plans would an electron need to choose from for this to work?

thanks,
mach
 
Your question is not clear to me. What does the efficiency loophole have to do with this? (I assume by efficiency loophole you mean the detection loophole.)

I might remind you that if there were a detection loophole, then QM would be wrong as to its predictions.
 
Hmmm, maybe this is a better way to phrase the question. The detection loophole says there could be a hidden property in an electron that determines whether the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if that is the case? Can the theory be right all the time?
 
mach567 said:
Hmmm, maybe this is a better way to phrase the question. The detection loophole says there could be a hidden property in an electron that determines whether the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if that is the case? Can the theory be right all the time?

There have been groups attempting to formulate things like this, to varying degrees of success. There really isn't much place to go with anything at this point, since the resulting theory would be at variance with QM in numerous essential ways.

The fact is that as visibility increases (i.e., as the loophole shrinks because you can detect a larger percentage of events), a fair-sampling explanation predicts the measured values should move away from the QM predictions and toward the LR boundary. That hasn't happened; instead, the experimental values remain firmly in the QM fold. In fact, there are some tests that close this loophole:

Experimental violation of a Bell's inequality with efficient detection

"Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away [1]. Einstein, Podolsky and Rosen [2] used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories [1, 3, 4, 5]. Many experiments [1, 6–15] have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome ‘loopholes’ that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (⁹Be⁺ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's ‘signal’ is 2.25 ± 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called ‘detection’ loophole."
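To put those numbers in context: the "signal" quoted above is the CHSH quantity S, for which local realistic theories satisfy S ≤ 2 while QM predicts up to 2√2 ≈ 2.83. A minimal sketch of where those bounds come from, assuming the standard singlet correlation E(a,b) = −cos(a−b) and the usual optimal angle choices (the variable names here are mine, not from the paper):

```python
import math

def E(a, b):
    # QM correlation for a spin singlet measured along angles a and b (radians)
    return -math.cos(a - b)

# Standard CHSH angle choices that maximize the quantum value
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(S)        # 2*sqrt(2) ≈ 2.828, the Tsirelson bound
print(S > 2)    # exceeds the local realistic bound of 2
```

The measured 2.25 sits between the LR bound of 2 and this quantum maximum, which is why it counts as a violation.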
 
Thanks for the reply, DrChinese. You have been a great help! I am relatively new here, but I already love this forum. Not only is everyone here knowledgeable, but they are willing to take time out of their day to share that knowledge. That is exactly how academics should be. Much appreciated!
 
mach567 said:
Hmmm, maybe this is a better way to phrase the question. The detection loophole says there could be a hidden property in an electron that determines whether the electron can even be measured by our current equipment (please correct me if I'm wrong). Can there be a local hidden variables theory if that is the case? Can the theory be right all the time?
If a photon (not an electron, as there are no EPR-type experiments with electrons) has a context-independent property that determines its "detectability", it should still obey the Bell inequalities.
For a viable local hidden variable theory, "detectability" should be context dependent, i.e., it should result in unfair sampling.
 
Just in case people missed it, check out the article in the March 18, 2011 issue of Science (p. 1380). It's a very concise summary of the state of the art in Bell-type experiments and the drive toward closing the detection and locality loopholes in that type of experiment.

Zz.
 
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that actually allows one to dispense with the fair sampling assumption is the ratio of the coincidence count rate to the singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow higher coincidence-to-singlet count rate ratios without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.
 
zonde said:
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that actually allows one to dispense with the fair sampling assumption is the ratio of the coincidence count rate to the singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow higher coincidence-to-singlet count rate ratios without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.

I don't follow this. It is the detection of both members of a pair that we seek. Many times only one member of a pair is seen, or alternately, the pair is not sufficiently coincident for us to match them.
 
  • #10
zonde said:
Hmm, people are trying to come up with more efficient photon detectors.
But the parameter that actually allows one to dispense with the fair sampling assumption is the ratio of the coincidence count rate to the singlet count rate.

Photon detector efficiencies have improved over the years, and it would be interesting to see whether more efficient detectors really allow higher coincidence-to-singlet count rate ratios without a reduction in correlation visibility.
Without such a tendency, it might turn out that more efficient photon detectors still don't give the desired result.

You should read the article. It isn't just a matter of coming up with efficient photon detectors. That loophole can already be closed with Bell-type experiments that did not use photons.

Zz.
 
  • #11
Link to the article, requires a subscription:

http://www.sciencemag.org/content/331/6023/1380.short
 
  • #12
ZapperZ said:
You should read the article. It isn't just a matter of coming up with efficient photon detectors. That loophole can already be closed with Bell-type experiments that did not use photons.
I am not going to pay for an article that does not say anything new.
Judging from the excerpt that appears in your blog, the author is talking about this type of experiment:
http://arxiv.org/abs/0801.2184

Now let's assume that Zukowski (who is mentioned in that article) fails to observe a violation of Bell inequalities with a setup in two neighboring labs. There are a lot of things that can fail, right?
And then the question is: how do you know that the experiment was successful despite this? And that there simply isn't any "spooky action at a distance" apart from some quite realistic measurement crosstalk?

And if you (or Zukowski) don't have an answer to this question, then the scientific method goes out the window.
Because the scientific method requires that experiments be set up in such a way that they can falsify the hypothesis being tested. So you have to have a success criterion independent of the successful observation of the expected phenomenon.

But in the case of photon experiments, this coincidence count rate to singlet count rate is exactly such an independent success criterion. Therefore photon experiments are far better from the perspective of the scientific method, imho.

So if we talk about these ion experiments, you can only silently hope that they will be successful, without loudly announcing them as the final test of local realism. They are not designed to be regarded as such.
 
Last edited by a moderator:
  • #13
DrChinese said:
I don't follow this. It is the detection of both members of a pair that we seek. Many times only one member of a pair is seen, or alternately, the pair is not sufficiently coincident for us to match them.
I do not understand your question.
We assume that the source always produces photons in pairs. So if we observe only a singlet, the other photon from the pair was lost along the way. If we want to test that this loss of photons is not somehow biased, we would want to vary (increase) the rate of paired photons versus unpaired photons. And that is the coincidence count rate to singlet count rate.
 
  • #14
zonde said:
Now let's assume that Zukowski (who is mentioned in that article) fails to observe a violation of Bell inequalities with a setup in two neighboring labs. There are a lot of things that can fail, right?

Wait, you're already prejudging and making such an assumption on the outcome of an experiment that has yet to be completed? And then you dare talk about the "scientific method"?

Again, I haven't seen ANY of your rebuttals to the Bell-type tests appearing in peer-reviewed publication. Or do you not consider such a publication as part of the "scientific method"?

I'm not sure why I even bother responding here...

Zz.
 
  • #15
zonde said:
I do not understand your question.
We assume that the source always produces photons in pairs. So if we observe only a singlet, the other photon from the pair was lost along the way. If we want to test that this loss of photons is not somehow biased, we would want to vary (increase) the rate of paired photons versus unpaired photons. And that is the coincidence count rate to singlet count rate.

Singlet might not be the best term to use in this context, as it implies something else entirely. If you have a pair of entangled photons headed for Alice and Bob, and there is 50% detector efficiency, I would expect that we would get a ratio of 1 pair and 2 mismatches (what you call singlets) on the average. That's for every 4 pairs, since occasionally neither photon in a pair is detected.

Now, if efficiency goes to 90%, I would expect that we would get a ratio of about 4 pairs to 1 mismatch on the average. I believe that is high enough to get past the detection (fair sampling) loophole. That is somewhat dependent on the actual results though.
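DrChinese's ratios follow from simple binomial counting. A quick sketch, under the assumption that the two detectors fire independently, each with efficiency η (`rates` is a hypothetical helper I've made up for illustration):

```python
import random

def rates(eta, n_pairs=200_000, seed=1):
    """Count coincidences vs. lone detections, assuming each detector
    independently registers its photon with probability eta."""
    rng = random.Random(seed)
    coinc = singles = 0
    for _ in range(n_pairs):
        a = rng.random() < eta          # Alice's detector fires
        b = rng.random() < eta          # Bob's detector fires
        if a and b:
            coinc += 1
        elif a or b:
            singles += 1
    return coinc, singles

for eta in (0.5, 0.9):
    c, s = rates(eta)
    # Analytic ratio: eta**2 / (2*eta*(1-eta)) = eta / (2*(1-eta))
    print(f"eta={eta}: {c} pairs, {s} lone detections, ratio = {c / s:.2f}")
```

At η = 0.5 the ratio is 1 pair per 2 mismatches, and at η = 0.9 it is about 4.5, matching the "about 4 pairs to 1 mismatch" estimate above.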
 
  • #16
ZapperZ said:
I'm not sure why I even bother responding here... Zz.
From your PF blog:
" ... PF is not like any other forum. Not only do we like to convey to you the knowledge of science, but we would also like to give you an idea of the workings of science."

You've cited in this thread an article from Science magazine, a great source for articles and insights regarding the workings of science.
(By the way, your blog, and webpage, is interesting and informative, and a great resource.)

The problem isn't the science surrounding Bell's theorem; it's the language surrounding the interpretation1 of Bell's theorem. It's murky, and the Science article adds to, rather than clarifies, the murkiness. The domain of science is sensory experience, and that domain can't be extended by reifying conceptions of the reality underlying instrumental behavior and then comparing formalisms to those reifications. We're either comparing competing formalisms to each other or to instrumental behavior. There isn't any underlying reality in our sensory purview to compare either instrumental behavior or formalism to.

1 QM comparison with experiment, ie., the science surrounding Bell's theorem, is a straightforward sensory comparison, and QM has passed those tests so far. Identification and effective widespread communication of logical interpretational loopholes of Bell's theorem is an ongoing process that's followed a more circuitous route.
 
Last edited:
  • #17
zonde said:
Now let's assume that Zukowski (who is mentioned in that article) fails to observe violation of Bell inequalities with setup in two neighboring labs. There are a lot of things that can fail, right?
And then the question is: How do you know that experiment was successful despite this?
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.

zonde said:
And that there simply isn't any "spooky action at a distance" apart from some quite realistic measurement crosstalk?
You can't.

zonde said:
And if you (or Zukowski) do not have answer for this question then scientific method is out of the window.
Scientific method has to do with sensory phenomena, not with underlying reality.

zonde said:
Because scientific method requires that experiments should be set up in such a way that they can falsify tested hypothesis.
And scientific hypotheses always and only have to do with sensory phenomena. Formalism compared to instrumental behavior.

zonde said:
So you have to have success criteria independent from successful observation of expected phenomenon.
If the interpretational language isn't properly clarified, then yes, that's possible. There's no problem with the science re Bell's theorem, only with its interpretation.

zonde said:
So if we talk about these ion experiments you can only silently hope that they will be successful without loudly announcing it as final test of local realism. They are not designed to be regarded as such.
Localism and realism refer to formal constraints, no more and no less, which can be scientifically tested.
 
Last edited:
  • #18
DrChinese said:
Singlet might not be the best term to use in this context, as it implies something else entirely. If you have a pair of entangled photons headed for Alice and Bob, and there is 50% detector efficiency, I would expect that we would get a ratio of 1 pair and 2 mismatches (what you call singlets) on the average. That's for every 4 pairs, since occasionally neither photon in a pair is detected.
Maybe singlet is not the best term. So we can say single detections.

DrChinese said:
Now, if efficiency goes to 90%, I would expect that we would get a ratio of about 4 pairs to 1 mismatch on the average. I believe that is high enough to get past the detection (fair sampling) loophole. That is somewhat dependent on the actual results though.
We can form a null hypothesis like this: an increase in efficiency (the coincident detection rate relative to the single detection rate) does not affect the visibility of the correlations.
To test this hypothesis we do not need very high efficiencies. We just have to make considerable variations in efficiency. Say, if we usually have a coincident-to-single detection rate around 10%, we can try to raise it to 20% and test the null hypothesis by comparing the two efficiencies in a controlled experiment.

That way you don't have to wait for the technology for that ultimate test, only to find out that, say, improvements in some other technology are required as well. Not to mention the possibility that this null hypothesis might fail at the present level of technology.
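The proposed null hypothesis is easy to state in code. Here is a hedged toy simulation (my own construction, not a model from the literature): if detection is fair, i.e. independent of the pair's outcomes, then the visibility estimated from coincidences should not depend on the efficiency:

```python
import random

def visibility(eta, v0=0.95, n_pairs=300_000, seed=2):
    """Estimate correlation visibility from coincidences, assuming fair
    sampling: whether a photon is detected is independent of the outcomes."""
    rng = random.Random(seed)
    same = diff = 0
    for _ in range(n_pairs):
        anti = rng.random() < (1 + v0) / 2             # anticorrelated pair
        if rng.random() < eta and rng.random() < eta:  # both photons detected
            if anti:
                diff += 1
            else:
                same += 1
    return (diff - same) / (diff + same)

# Under fair sampling the estimate stays near v0 = 0.95 at both efficiencies;
# a detection probability that depended on a hidden variable could break this.
print(visibility(0.10), visibility(0.20))
```

A real controlled experiment of the kind described above would look for exactly this invariance: visibility unchanged as the efficiency is varied.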
 
  • #19
ThomasT said:
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.
You mean an established formalism independent of the hypothesis to be tested?
And how do you know when you should doubt the execution of the experiment? You need some criterion independent of the hypothesis being tested. And that is exactly what I was saying.

ThomasT said:
You can't.
You can demonstrate it to be superfluous.

ThomasT said:
Scientific method has to do with sensory phenomena, not with underlying reality.
You lost me here.
It seems to me that you are pushing your line completely out of context.
 
  • #20
zonde said:
You mean an established formalism independent of the hypothesis to be tested?
The formalism is the hypothesis to be tested.

zonde said:
And how do you know when you should doubt the execution of the experiment?
When results differ from predictions -- it could mean that the experimental preparation and execution isn't fully in accordance with the formalism, or it could mean that the formalism is flawed in some other way.

zonde said:
You need some criterion independent of the hypothesis being tested.
The hypothesis is the formalism. There isn't any criterion independent of that that's being tested.

zonde said:
You can demonstrate it to be superfluous.
Here we're talking about interpretations associated with the formalism. And yes, those can be demonstrated to be superfluous to the question of whether a particular formalism is compatible with a particular experimental design and preparation -- as is the case with the conventional interpretation of Bell's theorem.

ThomasT said:
Scientific method has to do with sensory phenomena, not with underlying reality.
zonde said:
You lost me here.
Why would that lose you? Is there some domain other than our sensory experience that science applies to?

ThomasT said:
It seems to me that you are pushing your line completely out of context.
The science is experiments testing formalisms. Sure, one can infer, speculate and interpret based on some associated conception of an underlying reality. But that isn't the science. It's the philosophy associated with the science, which is, even though it might be used as a guide to building mathematical models which can be tested, superfluous to the science precisely because we have no direct sensory access to a reality underlying instrumental behavior.

Edit: Looking at the standard qm formalism, it's evident that it isn't about classical objects in classical space. Comparing that formalism with the LR formalism, the disparity between the two has become clear (nonseparability vs separability), and it's also clear that that disparity has nothing to do with classical objects in classical space. Hence, the experimental success of the qm formalism and lack thereof the LR formalism tells us nothing about what does or doesn't exist in the underlying reality.
 
Last edited:
  • #21
A quote from 'Beyond Measure':
In the case of the efficiency loophole, we could choose to reject the assumption that the small proportion of photon pairs detected represents a fair sample of the total, and argue instead that the experiments are biased in favour of those photon pairs that deliver results in accordance with the quantum-theory predictions. We would have to suppose that, whilst the sub-ensemble of detected photon pairs violates the generalised Bell's inequality, the total ensemble does not. A local hidden-variables theory which, because of data rejection, predicts the same measurement outcomes as quantum theory was first devised by Philip Pearle in 1970. In a more recent model, Nicolas Gisin and B. Gisin described a local hidden-variable theory in which the variables themselves determined the efficiency of the detectors. The theory explained the measured (quantum) correlations whilst at the same time remaining true to Bell's inequality.
 
  • #22
ZapperZ said:
Again, I haven't seen ANY of your rebuttals to the Bell-type tests appearing in peer-reviewed publication.

Peer review can always be biased. Not all work gets published.
 
  • #23
StevieTNZ said:
Peer review can always be biased. Not all work gets published.

How would YOU know?

Zz.
 
  • #24
StevieTNZ said:
A quote from 'Beyond Measure':

All of the purported models exploiting the fair sampling loophole have severe issues themselves. Keep in mind that there must exist some function which causes the bias such that the "true" correlation rate (presumably linear) is hidden and only the QM expectation value is seen. That function ends up having a very strange shape and gets stranger still as detection efficiency rises. This in turn leads to physical predictions which become progressively more ad hoc.

Rather than quoting the existence of some model, perhaps you would care to cite an actual model that is currently on the table (i.e. not already refuted). I mean, there are already experiments in which the fair sampling loophole has been closed. So in many ways this discussion is moot. If all events are detected and yield results past the Bell Inequality, what is the point of saying "maybe there is a fair sampling loophole"?

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html
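The "strange shape" of the required bias can be glimpsed numerically. As a rough illustration (taking the straight-line correlation often attributed to simple local models as the baseline, an assumption for illustration rather than any specific published model), the gap that an unfair-sampling mechanism would have to manufacture varies nonlinearly with the analyzer angle:

```python
import math

def gap(deg):
    """Difference between the QM polarization correlation cos(2*theta)
    and a straight-line local-realistic correlation at angle deg."""
    theta = math.radians(deg)
    return math.cos(2 * theta) - (1 - 4 * theta / math.pi)

for deg in (0, 11.25, 22.5, 33.75, 45):
    print(f"{deg:6.2f} deg: gap = {gap(deg):+.3f}")
# The gap vanishes at 0 and 45 degrees and peaks near 22.5 degrees,
# so any sampling bias would have to be angle dependent in just this way.
```

Any fair-sampling model must reproduce this angle-dependent gap exactly, which is why such models become progressively more ad hoc as efficiency rises.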
 
  • #25
ZapperZ said:
How would YOU know?

Zz.

It is a plausible explanation.
 
  • #26
StevieTNZ said:
It is a plausible explanation.

Not to me it isn't. Thus my question on how would YOU know?

Zz.
 
  • #27
ThomasT said:
It's successful if it agrees with the formalism. If there's some doubt about the execution of the experiment, then you replicate the experiment.
ThomasT said:
The formalism is the hypothesis to be tested.
So you are saying that an experiment is successful if it agrees with the hypothesis being tested.

That definitely is not the scientific method.
The scientific method requires that an experiment be able to falsify the hypothesis being tested.
So we should have three possible outcomes of an experiment:
1. The experiment is successful and the results agree with the prediction derived from the hypothesis.
2. The experiment is successful and the results disagree with the prediction derived from the hypothesis.
3. The experiment is unsuccessful. In this case we can try to improve the experimental setup and try again.

ThomasT said:
The science is experiments testing formalisms. Sure, one can infer, speculate and interpret based on some associated conception of an underlying reality. But that isn't the science. It's the philosophy associated with the science, which is, even though it might be used as a guide to building mathematical models which can be tested, superfluous to the science precisely because we have no direct sensory access to a reality underlying instrumental behavior.
In Wikipedia, the scientific method (http://en.wikipedia.org/wiki/Scientific_method) is briefly formulated this way:
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.

As I understand it, you are saying that science is limited to point 4 of this list.
But points 1, 2 and 3 are part of the method. And in many cases point 2 is about speculating how one can model the underlying reality.
 
Last edited by a moderator:
  • #28
ON THE SCIENTIFIC METHOD

zonde said:
So you are saying that an experiment is successful if it agrees with the hypothesis being tested.

That definitely is not the scientific method.
The scientific method requires that an experiment be able to falsify the hypothesis being tested.
So we should have three possible outcomes of an experiment:
1. The experiment is successful and the results agree with the prediction derived from the hypothesis.
2. The experiment is successful and the results disagree with the prediction derived from the hypothesis.
3. The experiment is unsuccessful. In this case we can try to improve the experimental setup and try again.
I agree that your exposition is essentially correct. Mine was incomplete, and I apologize.

I want to emphasize that experiments are testing formalisms, and that the formalisms can't, scientifically, be definitively associated with any conception of a reality that's beyond our sensory experience. So, I'll propose a slight rewrite of your essentially correct exposition, after a brief consideration of Bell and Bell tests.

Bell compared two competing formalisms, standard qm and LR-supplemented/interpreted standard qm, and proved that they're incompatible. An experimental test of Bell's theorem entails the construction of an inequality based on the specific design and preparation of the test. It provides a quantitative measure of the compatibility of each of the competing formalisms with that experiment, as well as between the competing formalisms for that experiment.

Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

So, we might rewrite your exposition as:

Scientific method requires that experiment can falsify hypothesis to be tested.
So we should have three possible outcomes of experiment:
1. Experiment is not flawed and results agree with formal hypothesis.
2. Experiment is not flawed and results disagree with formal hypothesis.
3. Experiment is flawed because formal hypothesis is based on assumptions which haven't been scientifically demonstrated to hold for that experiment, or for some other reason.

zonde said:
In Wikipedia, the scientific method is briefly formulated this way:
1. Use your experience: Consider the problem and try to make sense of it. Look for previous explanations. If this is a new problem to you, then move to step 2.
2. Form a conjecture: When nothing else is yet known, try to state an explanation, to someone else, or to your notebook.
3. Deduce a prediction from that explanation: If you assume 2 is true, what consequences follow?
4. Test: Look for the opposite of each consequence in order to disprove 2. It is a logical error to seek 3 directly as proof of 2. This error is called affirming the consequent.

As I understand it, you are saying that science is limited to point 4 of this list.
Yes, what I'm calling the strictly science part of the scientific method (to differentiate it from conjecture and logic) is limited to point 4, the sensory comparison of formalism with results.

zonde said:
But points 1, 2 and 3 are part of the method. And in many cases point 2 is about speculating how one can model the underlying reality.
Yes, points 1., 2., and 3. are part of the scientific method. Thanks for clarifying.

---------------------------------ON WHY BELL'S THEOREM AND BELL TESTS PROVE NOTHING ABOUT A REALITY BEYOND OUR SENSORY EXPERIENCE

Even if Bell test loopholes are closed, the experiments will not inform us that the correlations can't be due to relationships traced to local common causes, and/or that nature can't be local -- because 1) the domain of science is limited to our sensory experience, 2) the only thing that the experiments might inform us, definitively, about is that a particular formalism is incompatible with a particular experimental design and preparation, and 3) the salient features of the qm treatment of entanglement not only aren't at odds with, but stem from the applicability of the classical conservation laws and Malus' Law.

So the only thing that Bell tests can ever be said to show is that the formal separability of Bell LR is incompatible with the formal nonseparability of standard qm vis-à-vis the design and preparation nonseparability of Bell tests.

The key point, and what the conventional literature obfuscates, is that the formal incompatibility doesn't preclude an informal classical understanding/explanation for entanglement correlations based on principles which hold in the 3D space and time of our sensory experience.

Experimental tests (related to Bell's logical demonstration of the incompatibility between a Bell LR modified expectation value formalism and the standard qm formalism) allow us to say only that it remains an open question as to whether the reality beyond our sensory experience is local or nonlocal. And since our sensory experience accords with an exclusively local reality, then we retain the assumption that nature is local.
 
Last edited:
  • #29
ThomasT said:
ON THE SCIENTIFIC METHOD

<SNIP>

Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

<SNIP>

I've not yet studied your exposition in depth, BUT (thus far), I believe the above comments are unnecessary -- even misleading -- in ANY discussion of Bell's Theorem.

I've emphasized "misleading" because, in my experience, and according to my studies: They provide an invalid loop-hole through which too many "local realists" escape -- or seek to -- thereby avoiding the need for critical study.

NB: Not that the "imperfections" don't exist: BUT that the results will not change to any significant extent.

Thus -- whatever your case -- removal of these invalid (IMHO) diversionary loop-holes may strengthen it.

Just my quick 2c. for now.
 
  • #30
Gordon Watson said:
I've not yet studied your exposition in depth, BUT (thus far), I believe the above comments are unnecessary -- even misleading -- in ANY discussion of Bell's Theorem.

I've emphasized "misleading" because, in my experience, and according to my studies: They provide an invalid loop-hole through which too many "local realists" escape -- or seek to -- thereby avoiding the need for critical study.

NB: Not that the "imperfections" don't exist: BUT that the results will not change to any significant extent.

Thus -- whatever your case -- removal of these invalid (IMHO) diversionary loop-holes may strengthen it.

Just my quick 2c. for now.
I agree that the results won't change to any significant extent. QM will be affirmed, and Bell LR will be ruled out. However, as long as an inequality pertaining to an experiment is based on an assumption not verified in that experiment, then the experiment isn't definitive. This is why the applied scientists are working toward producing an unarguably loophole free optical Bell test.
 
  • #31
ThomasT said:
ON WHY BELL'S THEOREM AND BELL TESTS PROVE NOTHING ABOUT A REALITY BEYOND OUR SENSORY EXPERIENCE

Even if Bell test loopholes are closed, the experiments will not inform us that the correlations can't be due to relationships traced to local common causes, and/or that nature can't be local -- because 1) the domain of science is limited to our sensory experience
"Limited to our sensory experience" is an ambiguous phrase. We can certainly have models of what reality is like apart from our sensory experience, and then show with theoretical analysis that they imply certain constraints on what could be seen by our sensory experience (i.e. use a model to make predictions about experimental results); if these constraints are violated, that proves that the particular model is ruled out as a correct description of reality. Again go back to the theoretical meaning I gave to "local realism" in posts #72 and #83 of Gordon Watson's other now-locked thread:
1. The complete set of physical facts about any region of spacetime can be broken down into a set of local facts about the value of variables at each point in that region (like the value of the electric and magnetic field vectors at each point in classical electromagnetism)

2. The local facts about any given point P in spacetime are only causally influenced by facts about points in the past light cone of P, meaning if you already know the complete information about all points in some spacelike cross-section of the past light cone, additional knowledge about points at a spacelike separation from P cannot alter your prediction about what happens at P itself (your prediction may be a probabilistic one if the laws of physics are non-deterministic).
Keep in mind that 1) doesn't forbid you from talking about "facts" that involve an extended region of spacetime, it just says that these facts must be possible to deduce as a function of all the local facts in that region. For example, in classical electromagnetism we can talk about the magnetic flux through an extended 2D surface of arbitrary size, this is not itself a local quantity, but the total flux is simply a function of all the local magnetic vectors at each point on the surface, that's the sort of thing I meant when I said in 1) that all physical facts "can be broken down into a set of local facts". Similarly in certain Bell inequalities one considers the expectation values for the product of the two results (each one represented as either +1 or -1), obviously this product is not itself a local fact, but it's a trivial function of the two local facts about the result each experimenter got.
A version of Bell's proof can be used to show that any theoretical model satisfying the above conditions will obey Bell inequalities in appropriately-designed experiments, so if our sensory experience shows that experiments with this design actually violate Bell inequalities, that shows that no theoretical model of this type can be a correct description of reality. Do you disagree?
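The step from conditions 1) and 2) to a Bell inequality can be made concrete with a minimal sketch (a toy check, not anything from the experiments discussed): under local realism, averaging over hidden states is a convex mixture of deterministic outcome assignments, so it suffices to enumerate all 16 of them.

```python
from itertools import product

# CHSH quantity S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Under 1) and 2), each hidden state fixes Alice's outcomes A(a), A(a')
# and Bob's outcomes B(b), B(b') in {+1,-1}, with neither side depending
# on the other's setting.  Averaging over hidden states is a weighted
# mixture of these deterministic assignments, so the maximum over all
# 16 of them bounds any local realistic model.
best = max(
    Aa * Bb + Aa * Bb2 + Aa2 * Bb - Aa2 * Bb2
    for Aa, Aa2, Bb, Bb2 in product([+1, -1], repeat=4)
)
print(best)  # → 2, the CHSH bound; QM reaches 2*sqrt(2) ≈ 2.83
```

No mixture over hidden states can exceed the best deterministic assignment, which is exactly why a measured S above 2 rules out every model of this type.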
ThomasT said:
2) the only thing that the experiments might inform us, definitively, about is that a particular formalism is incompatible with a particular experimental design and preparation
I would say a particular formalism can be incompatible with particular experimental results, but I don't know what it would mean to say it's incompatible with a "particular design and preparation". Can you give an example? Certainly there's no reason that the experimental design of Bell's experiment couldn't be replicated in a universe whose laws satisfied 1. and 2. above, it's just that in this universe the results would satisfy the relevant Bell inequalities rather than violating them. Again, tell me if you disagree about this.
ThomasT said:
3) the salient features of the qm treatment of entanglement not only aren't at odds with, but stem from the applicability of the classical conservation laws and Malus' Law.
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.
 
  • #32
ThomasT said:
However, as long as an inequality pertaining to an experiment rests on an assumption not verified in that experiment, the experiment isn't definitive. This is why experimentalists are working toward producing an unarguably loophole-free optical Bell test.

There are no "definitive" experiments; all experiments uses hardware which does not work with 100% efficiency or precision.

However, nobody is trying to disprove, say, SR based on that. The motivation of the LR guys is a mystery to me.

In any case, what efficiency level should be reached so there won't be any room for LR?
 
  • #33
JesseM said:
"Limited to our sensory experience" is an ambiguous phrase.
Sensory experience includes mathematical constructs and sensory instrumental output.

JesseM said:
We can certainly have models of what reality is like apart from our sensory experience, and then show with theoretical analysis that they imply certain constraints on what could be seen by our sensory experience (i.e. use a model to make predictions about experimental results); ...
The mathematical formalism makes predictions about sensory instrumental output. The comparison is between formalism and experimental design and preparation. There's no underlying reality in our sensory purview.

JesseM said:
... if these constraints are violated, that proves that the particular model is ruled out as a correct description of reality.
No, as long as the experiment is unflawed, it proves that the formalism is ruled out as a correct description of the experimental design and preparation to which it's being applied.

JesseM said:
A version of Bell's proof can be used to show that any theoretical model satisfying the above conditions will obey Bell inequalities in appropriately-designed experiments, so if our sensory experience shows that experiments with this design actually violate Bell inequalities, that shows that no theoretical model of this type can be a correct description of reality. Do you disagree?
Yes. No theoretical model of any type can ever be said to be a correct description of a reality beyond our sensory apprehension.

JesseM said:
I would say a particular formalism can be incompatible with particular experimental results, but I don't know what it would mean to say it's incompatible with a "particular design and preparation". Can you give an example? Certainly there's no reason that the experimental design of Bell's experiment couldn't be replicated in a universe whose laws satisfied 1. and 2. above, it's just that in this universe the results would satisfy the relevant Bell inequalities rather than violating them. Again, tell me if you disagree about this.
If a formalism gives incorrect results, then, obviously, the formalism is in contradiction with some feature of the design and/or preparation (including the execution) of the experiment.

JesseM said:
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.
QM preserves the classical Malus' and conservation laws. QM's nonseparability wrt entanglement is acausal. The assumption of classical locality isn't contradicted by QM. But that assumption can't be explicitly denoted in the entanglement formalism.

The constraints imposed by Bell LR are the constraints of a particular formalism. The salient feature of that formalism is incompatible with the salient feature of the design of Bell (entanglement) tests -- the nonseparability of the parameter determining coincidental detection, and the irrelevance of that parameter to individual detection.

Inequalities can therefore be constructed which the Bell LR formalism will satisfy, but which QM won't.

And none of that tells us anything about the reality beyond our sensory experience.

The correct interpretation of Bell's theorem and Bell tests has been obfuscated in the conventional literature. Everybody, including me, would like to be able to reify the mathematical constructs and say something definitive about the underlying reality. But science doesn't allow us to do that. We don't know that nature contains nonlocality. We don't know that it doesn't. The de facto scientific assumption is that nature is local.
 
  • #34
Dmitry67 said:
There are no "definitive" experiments; all experiments uses hardware which does not work with 100% efficiency or precision.
Ok.

Dmitry67 said:
The motivation of the LR guys is a mystery to me.
The motivation of people concerned with the interpretation of Bell's theorem and Bell tests is to show that the conventional interpretation (that nature can't be local) is wrong.

Dmitry67 said:
In any case, what efficiency level should be reached so there won't be any room for LR?
I don't know. But the quest for a loophole-free test skirts the key issues in correctly interpreting Bell's theorem.
It's only important as long as the language surrounding the interpretation stays muddy.
My aim is to clarify that language, disregard the extraneous stuff, and ascertain what can be said about the meaning of Bell's theorem.
 
  • #35
ThomasT said:
The mathematical formalism makes predictions about sensory instrumental output. The comparison is between formalism and experimental design and preparation. There's no underlying reality in our sensory purview.
No, but we can posit an underlying reality and see what sort of predictions it gives about sensory experience. Do you disagree that we can form a model of an underlying reality?

As a thought-experiment, imagine that we somehow knew that the simulation argument was correct and that we were actually simulated beings living in a vast simulated universe. We might then be interested in knowing the basic program that the simulation is using to get later states from earlier states, and the rules of this program would constitute the "underlying reality" for us. And by observing the results of various experiments we could certainly infer certain things about the underlying program.
ThomasT said:
No, as long as the experiment is unflawed, it proves that the formalism is ruled out as a correct description of the experimental design and preparation to which it's being applied.
Huh? The formalism doesn't describe the "experimental design and preparation" at all, it is only used to predict the results of the experiment. You could imagine running an experiment with the same design in universes with different underlying laws, in each case getting a different result.
ThomasT said:
Yes. No theoretical model of any type can ever be said to be a correct description of a reality beyond our sensory apprehension.
No theoretical model can be definitively shown to be correct as long as it might be possible that there could be other models which make identical predictions about experimental results, but some models may be shown to be incorrect based on experimental results.
ThomasT said:
If a formalism gives incorrect results, then, obviously, the formalism is in contradiction with some feature of the design and/or preparation (including the execution) of the experiment.
Again you are making zero sense, how does the formalism giving incorrect predictions about results have anything whatsoever to do with the "design and/or preparation" of the experiment? If the design of my experiment is that I simultaneously drop two balls of the same shape but different masses off the leaning tower of Pisa, and I am using a theory of gravity that says the more massive ball should hit the ground first, then nothing about my formalism need differ from what was actually done (i.e. the formalism describes two balls of different masses being dropped simultaneously, and that's exactly what was done in real life), but the results will still differ from what was predicted by the formalism (both will actually hit the ground at the same time).
ThomasT said:
QM preserves the classical Malus' and conservation laws.
Yes, and so does classical electromagnetism. But in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment, so obviously you were talking nonsense when you said that conservation laws and Malus' law alone were enough to explain violation of Bell inequalities.
ThomasT said:
The assumption of classical locality isn't contradicted by QM.
Maybe not locality alone, but we were talking about local realism, a classical theory of the type described by my 1) and 2), such as classical electromagnetism. The assumption of classical local realism is contradicted by QM.
ThomasT said:
And none of that tells us anything about the reality beyond our sensory experience.
Sure it does, it tells us that the underlying theory doesn't satisfy my 1) and 2), which would both be true in a broad class of classical theories including classical electromagnetism.
ThomasT said:
Everybody, including me, would like to be able to reify the mathematical constructs and say something definitive about the underlying reality. But science doesn't allow us to do that.
Science certainly allows us to falsify plenty of claims about the underlying reality, even if we can't show that any given model of the underlying reality is the unique correct one.
 
  • #36
JesseM said:
The formalism doesn't describe the "experimental design and preparation" at all, it is only used to predict the results of the experiment.
'Describe' was a poor choice of words on my part. However, unlike your 'two balls' example, the salient feature of the design of Bell tests is intimately related to the salient feature of LR and QM entanglement formalisms.

Realizing that will allow you to understand why QM and Bell LR entanglement formalisms are incompatible, and why Bell LR predictions must necessarily be skewed, and why nature can be local while at the same time Bell LR is ruled out.

JesseM said:
The assumption of classical local realism is contradicted by QM.
Neither classical realism nor classical locality is contradicted by QM. What is contradicted by QM, and experimental design, is the parameter separability required by the Bell LR entanglement formalism.
 
  • #37
ThomasT said:
'Describe' was a poor choice of words on my part. However, unlike your 'two balls' example, the salient feature of the design of Bell tests is intimately related to the salient feature of LR and QM entanglement formalisms.
I don't know what you mean by "intimately related". Certainly the Bell experiments are designed to test different ideas about LR and entanglement, but then you could also say that the experiment with the balls is designed to test different ideas about gravity and the relation of mass to rate of acceleration. The point is that the design itself doesn't assume a priori that any of the various competing assumptions is true.
ThomasT said:
Neither classical realism nor classical locality is contradicted by QM.
But the combination of the two is. Do you see anything in my 1) and 2) that goes beyond "classical realism + classical locality"? If so please identify the specific sentence(s) in my statement of 1) and 2) that you think don't follow from these classical assumptions.
ThomasT said:
What is contradicted by QM, and experimental design, is the parameter separability required by the Bell LR entanglement formalism.
What do you mean by "parameter separability"? Are you referring to the idea that we could "screen off" the correlation between the two outcomes by incorporating information about local hidden and/or non-hidden variables in the region of one experiment? (so if Alice measures on axis c and Bob measures on axis b, then while P(c+|b+) may differ from P(c+), if lambda represents the state of some set of local variables in Alice's region, then P(c+|b+, lambda) = P(c+|lambda)) If that is what you mean, this can be derived as a direct consequence of my 1) and 2), it isn't a separate assumption.
 
  • #38
JesseM said:
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.

I'd be glad if you could explain this part in more detail.

The photon cannot be divided in QM.
But in your explanation, the electromagnetic wave packet can be divided?
(And you mean that more than 50% of the energy of the original wave packet can be detected?)

The Bell inequality is based on the fact that the photon is either transmitted or reflected (+ or -).
There are only two patterns for the photon. Right?

According to Malus' law, the transmit amplitude is cos(theta).
Here we change the assumption from "more than 50%" to 60%.
So for the electromagnetic wave packet, there are three patterns:
transmitted (1), or reflected (2), where the packet is detected because it has enough amplitude; and
(3) when the wave packet is divided almost equally at the polarizer (for example 55% + 45%), it cannot be detected (< 60%).

When there are three patterns, can the Bell inequality still be used correctly?
(The result of (3) will be ignored and won't be used in the statistics.)
 
  • #39
ThomasT said:
The motivation of people concerned with the interpretation of Bell's theorem and Bell tests is to show that the conventional interpretation (that nature can't be local) is wrong.

Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion about alternatives to GR (even the choice between GR and Einstein-Cartan gravity is discussed very little).
The only similar thing that comes to my mind is the camp of MOND guys...
 
  • #40
ytuab said:
But in your explanation, the electromagnetic wave packet can be divided ?
Yes, in classical electromagnetism if you have a polarized electromagnetic wave, which might be created by sending a non-polarized wave through a polarizer, then if this wave encounters another polarizer at an angle theta relative to the first polarizer, a fraction of the wave proportional to cos^2(theta) will make it through while a fraction proportional to sin^2(theta) will be deflected; this is Malus' law.
ytuab said:
The Bell inequality is based on the fact that the photon is either transmitted or reflected (+ or -).
There are only two patterns for the photon. Right?
The derivation of the Bell inequality doesn't require any assumptions about unmeasured facts like whether the thing that sets off the detector is a "photon" or something else, it just requires that on each trial the detector(s) at a given location can register one of two possible results, labeled + and -. If you look at the diagram of the setup of the CHSH inequality test below, you can see that after "something" passes through a given polarizer like the one labeled "a", it should set off either the D+ detector (indicating the "something" passed through the polarizer) or the D- detector (indicating it was reflected by the polarizer). What's important is that you don't have trials where both detectors go off, which in the case of wave packets in classical electromagnetism could be ensured by making it so the detectors only went off if the energy they received was more than 50% of the energy of the original electromagnetic wave packet.

[Diagram: two-channel CHSH Bell test setup]

ytuab said:
According to Malus' law, the transmit amplitude is cos(theta).
Malus' law is normally understood as a classical one, where cos^2 (theta) is not the "transmission amplitude" but rather the fraction of the energy of the original incident polarized wave that makes it through the second polarizer. Of course in QM if you have a bunch of photons of the same frequency, they all have the same energy, so the fraction of energy that makes it through is the same as the fraction of photons that make it through.
ytuab said:
Here we change the assumption from "more than 50%" to 60%.
So for the electromagnetic wave packet, there are three patterns:
transmitted (1), or reflected (2), where the packet is detected because it has enough amplitude; and
(3) when the wave packet is divided almost equally at the polarizer (for example 55% + 45%), it cannot be detected (< 60%).

When there are three patterns, can the Bell inequality still be used correctly?
(The result of (3) will be ignored and won't be used in the statistics.)
In his original proof Bell assumed every photon was either determined to make it through or be deflected, which is why I chose a cutoff of 50% in the classical case so this would still be true. But there are variant Bell inequalities which deal with the possibility that some photons will simply fail to be detected, see the equation here which is meant to deal with the "detector inefficiency loophole"
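The quantitative side of the detector-efficiency loophole can be sketched numerically. Treat the exact expression below as an assumption of this sketch: with symmetric detection efficiency eta and no fair-sampling assumption, one commonly cited form of the CHSH bound for local models (due to Garg and Mermin) weakens from S <= 2 to S <= 4/eta - 2.

```python
import math

def local_bound(eta):
    """CHSH value a local model can reach by exploiting non-detections,
    under the assumed bound S <= 4/eta - 2 for symmetric efficiency eta."""
    return 4 / eta - 2

# QM's maximum is 2*sqrt(2), so a loophole-free CHSH violation needs
#     2*sqrt(2) > 4/eta - 2   =>   eta > 2 / (1 + sqrt(2))
eta_crit = 2 / (1 + math.sqrt(2))
print(round(eta_crit, 4))  # → 0.8284: detectors must exceed ~83% efficiency

# At the threshold, the weakened local bound exactly meets the QM value:
assert abs(local_bound(eta_crit) - 2 * math.sqrt(2)) < 1e-9
```

This is why the photon experiments of the time, with efficiencies well below ~83%, needed the fair-sampling assumption, while the ion experiments mentioned below did not.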
 
  • #41
JesseM said:
I don't know what you mean by "intimately related".
Bell LR = separable formalism. QM = nonseparable formalism. Bell tests = nonseparable design.

This is what I notice.
Bell tests are, presumably, measuring an underlying, nonseparable parameter (the entanglement relationship) that doesn't determine individual detection, and that doesn't vary from pair to pair. Yet Bell LR requires that this be expressed in terms of functions which determine individual detection and which vary from pair to pair as λ varies. So I reason that if this is sufficient to skew the predictions away from what one would expect via optics principles, then that's, effectively, why Bell LR is incompatible with Bell tests. The problem of course is that this separability is a necessary component of an explicitly LR formalism. This is why I wrote a while back in another thread that diehard LR formalists face a sort of Catch-22 dilemma.

The QM treatment on the other hand is entirely in accord with a classical optics understanding of the correlations.

Prior to Bell there was no reason to suppose that the correlations were not ultimately due to the joint measurement of a locally produced relationship, ie., local common cause. But with the introduction of the LR requirement of formal separability things became less clear.

In my line of thinking the LR requirement of formal separability is an artificial one -- an artifact of the formal requirement of explicit localism with explicit realism which is simply at odds with the design of Bell tests, and therefore unrelated to considerations of locality in nature.
 
  • #42
Dmitry67 said:
Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion about alternatives to GR (even the choice between GR and Einstein-Cartan gravity is discussed very little).
The only similar thing that comes to my mind is the camp of MOND guys...
Constructing alternatives to GR would seem to be a lot more difficult than sorting out the meaning of Bell's theorem.
 
  • #43
JesseM said:
Yes, in classical electromagnetism if you have a polarized electromagnetic wave, which might be created by sending a non-polarized wave through a polarizer, then if this wave encounters another polarizer at an angle theta relative to the first polarizer, a fraction of the wave proportional to cos^2(theta) will make it through while a fraction proportional to sin^2(theta) will be deflected; this is Malus' law.

The derivation of the Bell inequality doesn't require any assumptions about unmeasured facts like whether the thing that sets off the detector is a "photon" or something else, it just requires that on each trial the detector(s) at a given location can register one of two possible results, labeled + and -. If you look at the diagram of the setup of the CHSH inequality test below, you can see that after "something" passes through a given polarizer like the one labeled "a", it should set off either the D+ detector (indicating the "something" passed through the polarizer) or the D- detector (indicating it was reflected by the polarizer). What's important is that you don't have trials where both detectors go off, which in the case of wave packets in classical electromagnetism could be ensured by making it so the detectors only went off if the energy they received was more than 50% of the energy of the original electromagnetic wave packet.

[Diagram: two-channel CHSH Bell test setup]


Malus' law is normally understood as a classical one, where cos^2 (theta) is not the "transmission amplitude" but rather the fraction of the energy of the original incident polarized wave that makes it through the second polarizer. Of course in QM if you have a bunch of photons of the same frequency, they all have the same energy, so the fraction of energy that makes it through is the same as the fraction of photons that make it through.

In his original proof Bell assumed every photon was either determined to make it through or be deflected, which is why I chose a cutoff of 50% in the classical case so this would still be true. But there are variant Bell inequalities which deal with the possibility that some photons will simply fail to be detected, see the equation here which is meant to deal with the "detector inefficiency loophole"

Thanks for the reply.
So it seems that the meaning of the electromagnetic wave packet you describe is almost the same as that of the photon. Right?
(more than 50% --- two patterns, pass or reflect)

Sorry. When I saw the words "electromagnetic wave packet" in your text, I thought there was something peculiar to the electromagnetic wave (something different from a photon) in your text. But it's almost the same?

For example,
the light intensity that passes through the filter is given by

I = I_0 cos^2(theta)

where I_0 is the initial intensity, and theta is the angle between the light's initial polarization direction and the axis of the polarizer.

Suppose that when this transmitted (or reflected) intensity I is above some threshold, the detector can recognize it as one photon. (For example, > 60%.)
Your classical electromagnetic wave seems to be different from this meaning?

And in the wiki you cite, the detection efficiency of the photon in the actual experiment is lower than what is needed. Right?
 
  • #44
ThomasT said:
Bell LR = separable formalism.
What does "separable formalism" mean? You have a habit of not answering direct questions I ask you, which is frustrating. In my previous post I asked about the meaning of the similar phrase "parameter separability":
Are you referring to the idea that we could "screen off" the correlation between the two outcomes by incorporating information about local hidden and/or non-hidden variables in the region of one experiment? (so if Alice measures on axis c and Bob measures on axis b, then while P(c+|b+) may differ from P(c+), if lambda represents the state of some set of local variables in Alice's region, then P(c+|b+, lambda) = P(c+|lambda))
Can you please tell me if by "separable formalism" you just mean this idea that we can find local variables lambda in Alice's region that screen off the correlation between Alice's result with setting c and Bob's result with setting b, i.e. P(c+|b+, lambda) = P(c+|lambda)?
ThomasT said:
QM = nonseparable formalism. Bell tests = nonseparable design.
I don't know what you mean by "nonseparable design". Do you agree that just as the "dropping balls from the leaning tower of Pisa" experiment has a design that would allow it to be performed in both a universe with our law of gravity and a universe where more massive objects fell faster, similarly the Bell tests have a design that would allow them to be performed both in our universe apparently governed by QM, and in a universe governed by laws which satisfied my 1) and 2) such as the laws of classical electromagnetism? If you do agree with this, and you also agree with my previous notion that "separable formalism" refers to the possibility of screening off correlations between spacelike separated events, then do you also agree that a universe with laws that satisfy 1) and 2) would be one where it would be possible to screen off correlations between separated events, and thus in this universe the exact same Bell tests could be accurately described using separable formalism?
ThomasT said:
Bell tests are, presumably, measuring an underlying, nonseparable parameter (the entanglement relationship) that doesn't determine individual detection, and that doesn't vary from pair to pair.
You can't make assumptions about the "underlying" reality before running the test, the whole point of the test is to see whether the behavior of entangled electrons is consistent with the idea that the laws of physics are local realistic ones, in which case all probabilities would be "separable" in the sense I discussed above of P(c+|b+, lambda) = P(c+|lambda). If you disagree that this notion of separability automatically follows from the assumption of local realism, please address this question from my previous post:
Do you see anything in my 1) and 2) that goes beyond "classical realism + classical locality"? If so please identify the specific sentence(s) in my statement of 1) and 2) that you think don't follow from these classical assumptions.
If you agree that my 1) and 2) are equivalent to "local realism" but don't see how 1) and 2) automatically entail P(c+|b+, lambda) = P(c+|lambda), I can show you that too, just ask.
ThomasT said:
Yet Bell LR requires that this be expressed in terms of functions which determine individual detection and which vary from pair to pair as λ varies. So I reason that if this is sufficient to skew the predictions away from what one would expect via optics principles
Arrrrrgh you just repeat the same silly claims while completely ignoring the criticisms made...you can't derive Bell inequality violations from "optics principles", I already made that point very clear by repeatedly pointing out that the Bell experiment could be performed in a universe governed by the laws of classical electromagnetism and that in this universe the Bell inequalities would be satisfied. If you have some doubt about this then explain it, but don't just blithely repeat the same claims and pretend the criticisms were never raised.
 
  • #45
ytuab said:
Thanks for the reply.
So it seems that the meaning of the electromagnetic wave packet you describe is almost the same as that of the photon. Right?
(more than 50% --- two patterns, pass or reflect)
Yes, almost the same, with the important difference that a photon is always measured to have either passed through or been reflected by a polarizer (though before measurement its wavefunction might split), whereas an electromagnetic wave or wave packet can be split by a polarizer, with some of the energy of the wave passing through and some being reflected.
ytuab said:
For example,
the light intensity that passes through the filter is given by

I = I_0 cos^2(theta)

where I_0 is the initial intensity, and theta is the angle between the light's initial polarization direction and the axis of the polarizer.

Suppose that when this transmitted (or reflected) intensity I is above some threshold, the detector can recognize it as one photon. (For example, > 60%.)
Your classical electromagnetic wave seems to be different from this meaning?
Well, there are no "photons" in classical electromagnetism, classical electromagnetic waves are infinitely divisible. But I imagined that the detectors were specifically designed to only go off if they received a wave packet with at least 50% of the energy of the original wave packet sent by the source, so that the classical experiment would replicate the same features as the quantum experiment (i.e. you'd always have either detector D+ or D- go off, never both).
ytuab said:
And in the wiki you cite, the detection efficiency of the photon in the actual experiment is lower than what is needed. Right?
Yes, although there have been some experiments involving ions rather than photons that did close the detector efficiency loophole (though they didn't simultaneously close the locality loophole), see here and here (pdf file). And there are a number of papers that predict it will soon be possible to perform experiments which close both the detector efficiency loophole and the locality loophole simultaneously, see here and here.
 
  • #46
Dmitry67 said:
Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion about alternatives to GR (even GR vs. Cartan GR is discussed very little).
The only similar thing that comes to my mind is the camp of MOND guys...

The claim of non-locality is the main (or best-known) remaining riddle that appears to have a direct and huge consequence for our perception of the universe, including ourselves. A variant of GR doesn't pretend to have any such impact; it's just the same thing, slightly different - rather boring in comparison. :-p
 
  • #47
OK, JesseM.
So I want to return to your earlier opinion that in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment.
(This is the reason why I asked what your electromagnetic wave packet means.)

In the photoelectric effect, the light frequency is related to the energy, and the light intensity is related to the number of emitted photoelectrons.
(This means that we can suppose a light intensity Q is required for one emitted photoelectron, 2Q for two emitted photoelectrons, ...)
So we can suppose this minimum intensity Q is more than 60% of the wave packet's intensity.
(Because in the example of the electromagnetic wave, the intensity is related to the events at the polarizer according to Malus' law.)

JesseM said:
Well, there are no "photons" in classical electromagnetism, classical electromagnetic waves are infinitely divisible. But I imagined that the detectors were specifically designed to only go off if they received a wave packet with at least 50% of the energy of the original wave packet sent by the source, so that the classical experiment would replicate the same features as the quantum experiment (i.e. you'd always have either detector D+ or D- go off, never both).

I agree with you about this point.
And of course, as you say, the case where we detect two photons (D+ and D-) at the same polarizer is meaningless (= the total number detected becomes more than 2 photons (3 or 4 photons)).
The cases that I want to talk about are those with two or fewer photons (at the A and B detectors).

The light intensity that passes through the filter is

I = I_0 \cos^2 \theta

So the remaining reflection intensity is

I = I_0 \sin^2 \theta

(Of course, a little loss exists.)

As I said, there are three patterns: pass (1) and reflect (2), which are detected because of their sufficient intensity (> Q).
And when the light (intensity) is divided almost equally at the polarizer (55% + 45%, for example, for angles near 45 degrees in the above equations), neither the pass detector nor the reflect detector can register it as a photon (3).

When two photons (A and B) with parallel polarization axes hit two filters set to the same angle (the angle difference between the two filters \alpha = 0),
do the results (pass or reflect) of the two photons always become the same (\cos^2 \alpha = \cos^2 0 = 1)?
Because when photon A (or B) passes filter A (or B), it always has a polarization axis close enough to the axis of filter A (or B) to reach the intensity detection threshold (> Q) of the detector.
In the case of equally divided light at the polarizer, as I said above, neither the pass nor the reflect light intensity can reach the detection threshold of the detector.
This case will be ignored, but it is very important as an underlying reality.
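This "third pattern" is easy to quantify numerically. A sketch (my own, taking the example threshold of 60% as the assumed value of Q):

```python
import math

Q = 0.6  # detection threshold as a fraction of the packet energy (example value)

def outcome(theta, q=Q):
    """Return 'pass', 'reflect', or 'none' for a packet at angle theta.

    The transmitted energy fraction is cos^2(theta) (Malus' law), the
    reflected fraction is sin^2(theta). A detector registers only if
    its share exceeds the threshold q.
    """
    t = math.cos(theta) ** 2
    if t > q:
        return "pass"
    if 1.0 - t > q:
        return "reflect"
    return "none"  # near-equal split: neither detector reaches the threshold

# Fraction of uniformly distributed angles that give no detection at all
n = 100000
undetected = sum(outcome(k * math.pi / n) == "none" for k in range(n)) / n
print(f"undetected fraction: {undetected:.3f}")
```

With Q = 0.6 this comes out to roughly 13% of angles, the band around 45 degrees where the split is too even for either detector.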

Sorry. I want to talk about the two-photon case (not the case of ions ...).
Because the ion case uses very artificial conditions such as a Paul trap and a pulsed laser.
(If these artificial manipulations didn't exist, the Be+ ion excitation could not occur, which is required for the entanglement condition.)
 
Last edited:
  • #48
ytuab said:
OK. JesseM.
So I want to return to your earlier opinion that in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment.
(This is the reason why I asked what your electromagnetic wave packet means.)

In the photoelectric effect, the light frequency is related to the energy, and the light intensity is related to the number of emitted photoelectrons.
The photoelectric effect wouldn't work the same way in classical electromagnetism... do you want to discuss what's true in QM, or what would be true of experiments in a purely classical universe? Also, are you actually trying to dispute my claim that "in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment"? If not, I don't really understand what the point of your discussion of the classical case is supposed to be.
ytuab said:
(This means that we can suppose the light intensity Q is required for one emitted photoelectron.
What's a "photoelectron"? Are you just talking about my idea of using wave packets in classical electromagnetism?
ytuab said:
2Q is needed for two emitted photoelectrons ...)
So we can suppose this minimum intensity Q is equal to more than 60% intensity of the wave packet.
Why 60%? My thought experiment was that the threshold would be 50%, so that for example the D- detector would go off if it received > 50% of the energy of the original wave packet (assume the source always sends out wave packets with a fixed energy), while the D+ detector would go off if it received ≥ 50% of the energy of the original wave packet. In this way it is guaranteed that for every wave packet sent by the source, one and only one of the two detectors will be triggered. Again, my point was to come up with a thought-experiment that replicates all the features of Bell's experiment (except the final results) in a classical universe, please let me know if you agree or disagree that this is possible to do.
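To check the claim that such a classical setup cannot violate a Bell inequality, here is a small Monte Carlo sketch (my own construction: each packet pair shares a hidden polarization angle, each wing applies the 50% threshold rule, and the standard CHSH angles are used):

```python
import math
import random

def wing(setting, lam):
    """One measurement wing: a packet polarized at angle lam meets a
    polarizer at `setting`. Returns +1 if the transmitted share
    (Malus' law) reaches 50% of the energy, else -1 (the reflected
    share then exceeds 50%)."""
    return 1 if math.cos(lam - setting) ** 2 >= 0.5 else -1

def correlation(a, b, trials=200000, rng=random.Random(42)):
    """Monte Carlo estimate of E(a, b) for the local model above."""
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, math.pi)  # shared hidden polarization
        total += wing(a, lam) * wing(b, lam)
    return total / trials

# CHSH combination at the standard angles 0, 22.5, 45, 67.5 degrees
a, ap = 0.0, math.pi / 4
b, bp = math.pi / 8, 3 * math.pi / 8
S = (correlation(a, b) - correlation(a, bp)
     + correlation(ap, b) + correlation(ap, bp))
print(f"S = {S:.3f}")
```

The estimate lands at S close to 2, the local realist bound, whereas quantum mechanics predicts S = 2*sqrt(2) (about 2.83) at these angles; no assignment of local hidden values can reach that.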
ytuab said:
Sorry. I want to talk about the two photons case (not the case of ions ...).
Because the ion case uses the very artificial condition such as Paul trap and pulse laser.
(If these artificial manipulations don't exist, the Be+ ion excitation can not occurr, which is required for entanglement condition. )
What does "artificial" mean, and how is it relevant to Bell? In a local realist universe, as long as the experiment has all the basic features Bell outlined, you are guaranteed to satisfy Bell inequalities regardless of other conditions, "artificial" or not. Of course the ion experiments do lack one of the features of Bell's thought-experiment since the two measurements are not actually carried out at a spacelike separation, but unless you posit local realistic laws that specifically exploit the "locality loophole" you won't be able to explain these results with local realistic laws.
 
  • #49
ThomasT said:
I agree that your exposition is essentially correct. Mine was incomplete, and I apologize.
I am glad we cleared up that disagreement.

ThomasT said:
I want to emphasize that experiments are testing formalisms, and that the formalisms can't, scientifically, be definitively associated with any conception of a reality that's beyond our sensory experience.
I changed the emphasis in your statement. And I agree that one can't conclusively associate a formalism with a conception of reality. But as we test this association more and more, from different sides and with different methods, we acquire more certainty in it.

The other side is that the construction of experiments relies on previously tested formalism (that has hopefully been tested thoroughly). Even what you perceive as your own visual sense is actually a model of reality constructed by your brain. You don't "see" the light, you "see" the interpretation of that light constructed by your brain. For example, you can't "see" your blind spot (http://en.wikipedia.org/wiki/Blind_spot_%28vision%29) directly. And we are relying on that interpretation. We test it with other senses and we acquire strong confidence in it.

ThomasT said:
Bell compared two competing formalisms, standard qm and LR-supplemented/interpreted standard qm, and proved that they're incompatible. An experimental test of Bell's theorem entails the construction of an inequality based on the specific design and preparation of the test. It provides a quantitative measure of the compatibility of each of the competing formalisms with that experiment, as well as between the competing formalisms for that experiment.
No, Bell provides a quantitative measure for local realism only. For QM there is only the qualitative, non-falsifiable prediction that it can violate LR inequalities.
And that is one of the problems - all these experiments try to test local realism, but they don't test falsifiable predictions of QM. However, they are presented as scientific tests of QM.
And that is just sick.

ThomasT said:
Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

So, we might rewrite your exposition as:

Scientific method requires that experiment can falsify hypothesis to be tested.
So we should have three possible outcomes of experiment:
1. Experiment is not flawed and results agree with formal hypothesis.
2. Experiment is not flawed and results disagree with formal hypothesis.
3. Experiment is flawed because formal hypothesis is based on assumptions which haven't been scientifically demonstrated to hold for that experiment, or for some other reason.
A prediction is made for a certain hypothesis that uses a certain assumption. Then this particular hypothesis is falsified by experiment.
We can make a different hypothesis without that assumption. That will require a different experiment with additional requirements.

But Bell experiments try to falsify a hypothesis even before it has been made.
What for?
Because otherwise the mainstream theory looks crappy? And everybody will look for alternatives?
 
Last edited by a moderator:
  • #50
Dmitry67 said:
There are no "definitive" experiments; all experiments use hardware which does not work with 100% efficiency or precision.

However, nobody is trying to disprove, say, SR based on that. The motivation of the LR guys is a mystery to me.
Who is trying to disprove QM based on efficiency loophole?

Dmitry67 said:
In any case, what efficiency level should be reached so there won't be any room for LR?
To reach something you have to move in that direction.
Do you know of any photon Bell experiment that tests different efficiency levels?
Detection efficiency (the ratio of the coincident detection rate to the single detection rate) is very often not reported at all in papers about Bell experiments.
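For what it's worth, when the raw counts are reported, a per-arm efficiency can be estimated from them (a hedged sketch of the standard coincidence-to-singles estimate; the function name and the numbers are illustrative):

```python
def heralding_efficiency(coincidences, singles_a, singles_b):
    """Estimate each arm's detection efficiency from raw counts:
    arm A's efficiency is the fraction of B's detections that were
    also seen in coincidence, and vice versa."""
    eta_a = coincidences / singles_b
    eta_b = coincidences / singles_a
    return eta_a, eta_b

# Illustrative counts, not from any real experiment
eta_a, eta_b = heralding_efficiency(coincidences=9000,
                                    singles_a=100000, singles_b=90000)
print(eta_a, eta_b)
```

With these made-up counts the estimate is 10% and 9%, far below what is needed to close the detection loophole without a fair-sampling assumption.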
 