The Efficiency Loophole: A Local Hidden Variables Theory?

The discussion centers on the possibility of creating a local hidden variable theory for electrons in entangled pairs, particularly in relation to the detection loophole. Participants debate whether a hidden property could determine an electron's detectability, questioning if such a theory could consistently align with quantum mechanics. Despite attempts to formulate local hidden variable theories, experimental results consistently support quantum mechanics and violate local realism. Improved photon detector efficiencies are highlighted as a means to close the detection loophole, yet concerns remain about the validity of experiments and their ability to falsify hypotheses. Overall, the conversation emphasizes ongoing challenges in reconciling local realism with quantum predictions.
  • #31
ThomasT said:
ON WHY BELL'S THEOREM AND BELL TESTS PROVE NOTHING ABOUT A REALITY BEYOND OUR SENSORY EXPERIENCE

Even if Bell test loopholes are closed, the experiments will not inform us that the correlations can't be due to relationships traced to local common causes, and/or that nature can't be local -- because 1) the domain of science is limited to our sensory experience
"Limited to our sensory experience" is an ambiguous phrase. We can certainly have models of what reality is like apart from our sensory experience, and then show with theoretical analysis that they imply certain constraints on what could be seen by our sensory experience (i.e. use a model to make predictions about experimental results); if these constraints are violated, that proves that the particular model is ruled out as a correct description of reality. Again go back to the theoretical meaning I gave to "local realism" in posts #72 and #83 of Gordon Watson's other now-locked thread:
1. The complete set of physical facts about any region of spacetime can be broken down into a set of local facts about the value of variables at each point in that region (like the value of the electric and magnetic field vectors at each point in classical electromagnetism).

2. The local facts about any given point P in spacetime are only causally influenced by facts about points in the past light cone of P, meaning if you already know the complete information about all points in some spacelike cross-section of the past light cone, additional knowledge about points at a spacelike separation from P cannot alter your prediction about what happens at P itself (your prediction may be a probabilistic one if the laws of physics are non-deterministic).
Keep in mind that 1) doesn't forbid you from talking about "facts" that involve an extended region of spacetime, it just says that these facts must be possible to deduce as a function of all the local facts in that region. For example, in classical electromagnetism we can talk about the magnetic flux through an extended 2D surface of arbitrary size, this is not itself a local quantity, but the total flux is simply a function of all the local magnetic vectors at each point on the surface, that's the sort of thing I meant when I said in 1) that all physical facts "can be broken down into a set of local facts". Similarly in certain Bell inequalities one considers the expectation values for the product of the two results (each one represented as either +1 or -1), obviously this product is not itself a local fact, but it's a trivial function of the two local facts about the result each experimenter got.
A version of Bell's proof can be used to show that any theoretical model satisfying the above conditions will obey Bell inequalities in appropriately-designed experiments, so if our sensory experience shows that experiments with this design actually violate Bell inequalities, that shows that no theoretical model of this type can be a correct description of reality. Do you disagree?
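A minimal sketch of the kind of check this claim rests on (illustrative Python, not part of the original post): every deterministic local assignment of ±1 outcomes to the two settings on each side keeps the CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b') within 2, and since any local hidden variable model is a probabilistic mixture of such assignments, its expectation values obey the same bound.

# Exhaustively check that every deterministic local strategy satisfies |S| <= 2.
from itertools import product

max_S = 0
for A_a, A_ap, B_b, B_bp in product([+1, -1], repeat=4):
    # A_a, A_ap: Alice's predetermined outcomes for settings a, a'
    # B_b, B_bp: Bob's predetermined outcomes for settings b, b'
    S = A_a * B_b - A_a * B_bp + A_ap * B_b + A_ap * B_bp
    max_S = max(max_S, abs(S))

print(max_S)  # prints 2: no deterministic strategy, hence no mixture of them, exceeds 2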
ThomasT said:
2) the only thing that the experiments might inform us, definitively, about is that a particular formalism is incompatible with a particular experimental design and preparation
I would say a particular formalism can be incompatible with particular experimental results, but I don't know what it would mean to say it's incompatible with a "particular design and preparation". Can you give an example? Certainly there's no reason that the experimental design of Bell's experiment couldn't be replicated in a universe whose laws satisfied 1. and 2. above, it's just that in this universe the results would satisfy the relevant Bell inequalities rather than violating them. Again, tell me if you disagree about this.
ThomasT said:
3) the salient features of the qm treatment of entanglement not only aren't at odds with, but stem from the applicability of the classical conservation laws and Malus' Law.
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.
 
  • #32
ThomasT said:
However, as long as an inequality pertaining to an experiment is based on an assumption not verified in that experiment, then the experiment isn't definitive. This is why the applied scientists are working toward producing an unarguably loophole free optical Bell test.

There are no "definitive" experiments; all experiments uses hardware which does not work with 100% efficiency or precision.

However, nobody is trying to disprove, say, SR based on that. The motivation of LR guys is a mystery for me.

In any case, what efficiency level should be reached so there won't be any room for LR?
 
  • #33
JesseM said:
"Limited to our sensory experience" is an ambiguous phrase.
Sensory experience includes mathematical constructs and sensory instrumental output.

JesseM said:
We can certainly have models of what reality is like apart from our sensory experience, and then show with theoretical analysis that they imply certain constraints on what could be seen by our sensory experience (i.e. use a model to make predictions about experimental results); ...
The mathematical formalism makes predictions about sensory instrumental output. The comparison is between formalism and experimental design and preparation. There's no underlying reality in our sensory purview.

JesseM said:
... if these constraints are violated, that proves that the particular model is ruled out as a correct description of reality.
No, as long as the experiment is unflawed, it proves that the formalism is ruled out as a correct description of the experimental design and preparation to which it's being applied.

JesseM said:
A version of Bell's proof can be used to show that any theoretical model satisfying the above conditions will obey Bell inequalities in appropriately-designed experiments, so if our sensory experience shows that experiments with this design actually violate Bell inequalities, that shows that no theoretical model of this type can be a correct description of reality. Do you disagree?
Yes. No theoretical model of any type can ever be said to be a correct description of a reality beyond our sensory apprehension.

JesseM said:
I would say a particular formalism can be incompatible with particular experimental results, but I don't know what it would mean to say it's incompatible with a "particular design and preparation". Can you give an example? Certainly there's no reason that the experimental design of Bell's experiment couldn't be replicated in a universe whose laws satisfied 1. and 2. above, it's just that in this universe the results would satisfy the relevant Bell inequalities rather than violating them. Again, tell me if you disagree about this.
If a formalism gives incorrect results, then, obviously, the formalism is in contradiction with some feature of the design and/or preparation (including the execution) of the experiment.

JesseM said:
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.
QM preserves the classical Malus' and conservation laws. QM's nonseparability wrt entanglement is acausal. The assumption of classical locality isn't contradicted by QM. But that assumption can't be explicitly denoted in the entanglement formalism.

The constraints imposed by Bell LR are the constraints of a particular formalism. The salient feature of that formalism is incompatible with the salient feature of the design of Bell (entanglement) tests -- the nonseparability of the parameter determining coincidental detection, and the irrelevance of that parameter to individual detection.

Inequalities can therefore be constructed which the Bell LR formalism will satisfy, but which QM won't.

And none of that tells us anything about the reality beyond our sensory experience.

The correct interpretation of Bell's theorem and Bell tests has been obfuscated in the conventional literature. Everybody, including me, would like to be able to reify the mathematical constructs and say something definitive about the underlying reality. But science doesn't allow us to do that. We don't know that nature contains nonlocality. We don't know that it doesn't. The de facto scientific assumption is that nature is local.
 
  • #34
Dmitry67 said:
There are no "definitive" experiments; all experiments uses hardware which does not work with 100% efficiency or precision.
Ok.

Dmitry67 said:
The motivation of LR guys is a mystery for me.
The motivation of people concerned with the interpretation of Bell's theorem and Bell tests is to show that the conventional interpretation (that nature can't be local) is wrong.

Dmitry67 said:
In any case, what efficiency level should be reached so there won't be any room for LR?
I don't know. But the quest for a loophole free test skirts the key issues in correctly interpreting Bell's theorem.
It's only important as long as the language surrounding the interpretation stays muddy.
My aim is to clarify that language, disregard the extraneous stuff, and ascertain what can be said about the meaning of Bell's theorem.
 
  • #35
ThomasT said:
The mathematical formalism makes predictions about sensory instrumental output. The comparison is between formalism and experimental design and preparation. There's no underlying reality in our sensory purview.
No, but we can posit an underlying reality and see what sort of predictions it gives about sensory experience. Do you disagree that we can form a model of an underlying reality?

As a thought-experiment, imagine that we somehow knew that the simulation argument was correct and that we were actually simulated beings living in a vast simulated universe. We might then be interested in knowing the basic program that the simulation is using to get later states from earlier states, and the rules of this program would constitute the "underlying reality" for us. And by observing the results of various experiments we could certainly infer certain things about the underlying program.
ThomasT said:
No, as long as the experiment is unflawed, it proves that the formalism is ruled out as a correct description of the experimental design and preparation to which it's being applied.
Huh? The formalism doesn't describe the "experimental design and preparation" at all, it is only used to predict the results of the experiment. You could imagine running an experiment with the same design in universes with different underlying laws, in each case getting a different result.
ThomasT said:
Yes. No theoretical model of any type can ever be said to be a correct description of a reality beyond our sensory apprehension.
No theoretical model can be definitively shown to be correct as long as it might be possible that there could be other models which make identical predictions about experimental results, but some models may be shown to be incorrect based on experimental results.
ThomasT said:
If a formalism gives incorrect results, then, obviously, the formalism is in contradiction with some feature of the design and/or preparation (including the execution) of the experiment.
Again you are making zero sense, how does the formalism giving incorrect predictions about results have anything whatsoever to do with the "design and/or preparation" of the experiment? If the design of my experiment is that I simultaneously drop two balls of the same shape but different masses off the leaning tower of Pisa, and I am using a theory of gravity that says the more massive ball should hit the ground first, then nothing about my formalism need differ from what was actually done (i.e. the formalism describes two balls of different masses being dropped simultaneously, and that's exactly what was done in real life), but the results will still differ from what was predicted by the formalism (both will actually hit the ground at the same time).
ThomasT said:
QM preserves the classical Malus' and conservation laws.
Yes, and so does classical electromagnetism. But in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment, so obviously you were talking nonsense when you said that conservation laws and Malus' law alone were enough to explain violation of Bell inequalities.
ThomasT said:
The assumption of classical locality isn't contradicted by QM.
Maybe not locality alone, but we were talking about local realism, a classical theory of the type described by my 1) and 2), such as classical electromagnetism. The assumption of classical local realism is contradicted by QM.
ThomasT said:
And none of that tells us anything about the reality beyond our sensory experience.
Sure it does, it tells us that the underlying theory doesn't satisfy my 1) and 2), which would both be true in a broad class of classical theories including classical electromagnetism.
ThomasT said:
Everybody, including me, would like to be able to reify the mathematical constructs and say something definitive about the underlying reality. But science doesn't allow us to do that.
Science certainly allows us to falsify plenty of claims about the underlying reality, even if we can't show that any given model of the underlying reality is the unique correct one.
 
  • #36
JesseM said:
The formalism doesn't describe the "experimental design and preparation" at all, it is only used to predict the results of the experiment.
'Describe' was a poor choice of words on my part. However, unlike your 'two balls' example, the salient feature of the design of Bell tests is intimately related to the salient feature of LR and QM entanglement formalisms.

Realizing that will allow you to understand why QM and Bell LR entanglement formalisms are incompatible, and why Bell LR predictions must necessarily be skewed, and why nature can be local while at the same time Bell LR is ruled out.

JesseM said:
The assumption of classical local realism is contradicted by QM.
Neither classical realism nor classical locality is contradicted by QM. What is contradicted by QM, and experimental design, is the parameter separability required by the Bell LR entanglement formalism.
 
  • #37
ThomasT said:
'Describe' was a poor choice of words on my part. However, unlike your 'two balls' example, the salient feature of the design of Bell tests is intimately related to the salient feature of LR and QM entanglement formalisms.
I don't know what you mean by "intimately related". Certainly the Bell experiments are designed to test different ideas about LR and entanglement, but then you could also say that the experiment with the balls is designed to test different ideas about gravity and the relation of mass to rate of acceleration. The point is that the design itself doesn't assume a priori that any of the various competing assumptions is true.
ThomasT said:
Neither classical realism nor classical locality is contradicted by QM.
But the combination of the two is. Do you see anything in my 1) and 2) that goes beyond "classical realism + classical locality"? If so please identify the specific sentence(s) in my statement of 1) and 2) that you think don't follow from these classical assumptions.
ThomasT said:
What is contradicted by QM, and experimental design, is the parameter separability required by the Bell LR entanglement formalism.
What do you mean by "parameter separability"? Are you referring to the idea that we could "screen off" the correlation between the two outcomes by incorporating information about local hidden and/or non-hidden variables in the region of one experiment? (so if Alice measures on axis c and Bob measures on axis b, then while P(c+|b+) may differ from P(c+), if lambda represents the state of some set of local variables in Alice's region, then P(c+|b+, lambda) = P(c+|lambda)) If that is what you mean, this can be derived as a direct consequence of my 1) and 2), it isn't a separate assumption.
 
  • #38
JesseM said:
"Stem from" sounds like weasel words to me, there's certainly no way you could derive a violation of Bell inequalities in a universe governed by local realist laws that included conservation laws and Malus' law, such as Maxwell's laws of electromagnetism. You could perform a Bell experiment in such a universe (using wave packets in place of photons I suppose, and detectors only set to go off if they received more than 50% the energy of the original wave packet so you'd never have a situation where a detector registered the packet going through the polarizer but another detector registered the packet being reflected from the same polarizer), and you would find that all Bell inequalities were satisfied.

I'd be glad if you could explain this part in more detail.

A photon cannot be split in QM.
But in your explanation, the electromagnetic wave packet can be divided?
(And you mean that more than 50% of the energy of the original wave packet can be detected?)

The Bell inequality is based on the fact that the photon is transmitted or reflected (+ or -).
There are only two patterns for the photon, right?

According to Malus' law, the transmit amplitude is \cos \theta.
Here we change the assumption of more than 50% into 60%.
So for the electromagnetic wave packet there are three patterns:
(1) transmitted, or (2) reflected, and detected because the amplitude is large enough.
(3) When the wave packet is divided almost equally at the polarizer (for example 55% + 45%), it cannot be detected (< 60%).

When there are three patterns, can the Bell inequality be used correctly?
(The result of (3) will be ignored and won't be used in the statistics.)
 
  • #39
ThomasT said:
The motivation of people concerned with the interpretation of Bell's theorem and Bell tests is to show that the conventional interpretation (that nature can't be local) is wrong.

Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion of alternatives to GR (even Einstein-Cartan GR is discussed very little).
The only similar thing which comes to my mind is a camp of MOND guys...
 
  • #40
ytuab said:
But in your explanation, the electromagnetic wave packet can be divided ?
Yes, in classical electromagnetism if you have a polarized electromagnetic wave, which might be created by sending a non-polarized wave through a polarizer, then if this wave encounters another polarizer at an angle Theta relative to the first polarizer, a fraction of the wave proportional to cos^2(theta) will make it through while a fraction proportional to sin^2(theta) will be deflected, this is Malus' law.
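To put rough numbers on that (illustrative only): at \theta = 30^\circ the second polarizer transmits \cos^2 30^\circ = 0.75 of the energy and reflects \sin^2 30^\circ = 0.25 of it, while at \theta = 45^\circ the split is exactly 50/50, which is the boundary case for the 50% detection threshold mentioned below.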
ytuab said:
The Bell inequality is based on the fact that the photon is transmitted or reflected (+ or -).
There are only two patterns for the photon, right?
The derivation of the Bell inequality doesn't require any assumptions about unmeasured facts like whether the thing that sets off the detector is a "photon" or something else, it just requires that on each trial the detector(s) at a given location can register one of two possible results, labeled + and -. If you look at the diagram of the setup of the CHSH inequality test below, you can see that after "something" passes through a given polarizer like the one labeled "a", it should set off either the D+ detector (indicating the "something" passed through the polarizer) or the D- detector (indicating it was reflected by the polarizer). What's important is that you don't have trials where both detectors go off, which in the case of wave packets in classical electromagnetism could be ensured by making it so the detectors only went off if the energy they received was more than 50% the energy of the original electromagnetic wave packet.

[Diagram: two-channel CHSH Bell test setup, with polarizers a and b each followed by D+ and D- detectors]

ytuab said:
According to Malus' law, the transmit amplitude is \cos \theta.
Malus' law is normally understood as a classical one, where cos^2 (theta) is not the "transmission amplitude" but rather the fraction of the energy of the original incident polarized wave that makes it through the second polarizer. Of course in QM if you have a bunch of photons of the same frequency, they all have the same energy, so the fraction of energy that makes it through is the same as the fraction of photons that make it through.
ytuab said:
Here we change the assumption of more than 50% into 60%.
So for the electromagnetic wave packet there are three patterns:
(1) transmitted, or (2) reflected, and detected because the amplitude is large enough.
(3) When the wave packet is divided almost equally at the polarizer (for example 55% + 45%), it cannot be detected (< 60%).

When there are three patterns, can the Bell inequality be used correctly?
(The result of (3) will be ignored and won't be used in the statistics.)
In his original proof Bell assumed every photon was either determined to make it through or be deflected, which is why I chose a cutoff of 50% in the classical case so this would still be true. But there are variant Bell inequalities which deal with the possibility that some photons will simply fail to be detected, see the equation here which is meant to deal with the "detector inefficiency loophole"
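As a rough guide to the numbers behind that loophole (added for context, using the commonly quoted efficiency-corrected bound rather than anything stated in this thread): if the overall detection efficiency is \eta and no fair-sampling assumption is made, the local hidden variable bound on the CHSH quantity weakens from 2 to

S \le \frac{4}{\eta} - 2

so the quantum value 2\sqrt{2} only becomes incompatible with local realism when 2\sqrt{2} > 4/\eta - 2, i.e. \eta > 2/(1 + \sqrt{2}) ≈ 83% for a maximally entangled state; Eberhard's analysis with non-maximally entangled states lowers the requirement to about 2/3.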
 
  • #41
JesseM said:
I don't know what you mean by "intimately related".
Bell LR = separable formalism. QM = nonseparable formalism. Bell tests = nonseparable design.

This is what I notice.
Bell tests are, presumably, measuring an underlying, nonseparable parameter (the entanglement relationship) that doesn't determine individual detection, and that doesn't vary from pair to pair. Yet Bell LR requires that this be expressed in terms of functions which determine individual detection and which vary from pair to pair as λ varies. So I reason that if this is sufficient to skew the predictions away from what one would expect via optics principles, then that's, effectively, why Bell LR is incompatible with Bell tests. The problem of course is that this separability is a necessary component of an explicitly LR formalism. This is why I wrote a while back in another thread that diehard LR formalists face a sort of Catch-22 dilemma.

The QM treatment on the other hand is entirely in accord with a classical optics understanding of the correlations.

Prior to Bell there was no reason to suppose that the correlations were not ultimately due to the joint measurement of a locally produced relationship, ie., local common cause. But with the introduction of the LR requirement of formal separability things became less clear.

In my line of thinking the LR requirement of formal separability is an artificial one -- an artifact of the formal requirement of explicit localism with explicit realism which is simply at odds with the design of Bell tests, and therefore unrelated to considerations of locality in nature.
 
  • #42
Dmitry67 said:
Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion of alternatives to GR (even Einstein-Cartan GR is discussed very little).
The only similar thing which comes to my mind is a camp of MOND guys...
Constructing alternatives to GR would seem to be a lot more difficult than sorting out the meaning of Bell's theorem.
 
  • #43
JesseM said:
Yes, in classical electromagnetism if you have a polarized electromagnetic wave, which might be created by sending a non-polarized wave through a polarizer, then if this wave encounters another polarizer at an angle Theta relative to the first polarizer, a fraction of the wave proportional to cos^2(theta) will make it through while a fraction proportional to sin^2(theta) will be deflected, this is Malus' law.

The derivation of the Bell inequality doesn't require any assumptions about unmeasured facts like whether the thing that sets off the detector is a "photon" or something else, it just requires that on each trial the detector(s) at a given location can register one of two possible results, labeled + and -. If you look at the diagram of the setup of the CHSH inequality test below, you can see that after "something" passes through a given polarizer like the one labeled "a", it should set off either the D+ detector (indicating the "something" passed through the polarizer) or the D- detector (indicating it was reflected by the polarizer). What's important is that you don't have trials where both detectors go off, which in the case of wave packets in classical electromagnetism could be ensured by making it so the detectors only went off if the energy they received was more than 50% the energy of the original electromagnetic wave packet.

[Diagram: two-channel CHSH Bell test setup, with polarizers a and b each followed by D+ and D- detectors]


Malus' law is normally understood as a classical one, where cos^2 (theta) is not the "transmission amplitude" but rather the fraction of the energy of the original incident polarized wave that makes it through the second polarizer. Of course in QM if you have a bunch of photons of the same frequency, they all have the same energy, so the fraction of energy that makes it through is the same as the fraction of photons that make it through.

In his original proof Bell assumed every photon was either determined to make it through or be deflected, which is why I chose a cutoff of 50% in the classical case so this would still be true. But there are variant Bell inequalities which deal with the possibility that some photons will simply fail to be detected, see the equation here which is meant to deal with the "detector inefficiency loophole"

Thanks for the reply.
So it seems that the meaning of the electromagnetic wave packet you mention is almost the same as that of the photon, right?
(more than 50% --- two patterns of pass or reflect)

Sorry, when I saw the words "electromagnetic wave packet" in your text, I thought there was something peculiar to the electromagnetic wave (which is different from a photon) in your text. But it's almost the same?

For example,
the light intensity that passes through the filter is given by

I = I_0 \cos^2 \theta

where I_0 is the initial intensity, and \theta is the angle between the light's initial polarization direction and the axis of the polarizer.

Suppose that when this transmitted (or reflected) intensity I is above some threshold, the detector can recognize it as one photon. (For example, > 60%)
Your classical electromagnetic wave seems to be different from this meaning?

And in the wiki article you quote, the detection efficiency of the photon in the actual experiment is lower than what is needed, right?
 
  • #44
ThomasT said:
Bell LR = separable formalism.
What does "separable formalism" mean? You have a habit of not answering direct questions I ask you, which is frustrating. In my previous post I asked about the meaning of the similar phrase "parameter separability":
Are you referring to the idea that we could "screen off" the correlation between the two outcomes by incorporating information about local hidden and/or non-hidden variables in the region of one experiment? (so if Alice measures on axis c and Bob measures on axis b, then while P(c+|b+) may differ from P(c+), if lambda represents the state of some set of local variables in Alice's region, then P(c+|b+, lambda) = P(c+|lambda))
Can you please tell me if by "separable formalism" you just mean this idea that we can find local variables lambda in Alice's region that screen off the correlation between Alice's result with setting c and Bob's result with setting b, i.e. P(c+|b+, lambda) = P(c+|lambda)?
ThomasT said:
QM = nonseparable formalism. Bell tests = nonseparable design.
I don't know what you mean by "nonseparable design". Do you agree that just as the "dropping balls from the leaning tower of Pisa" experiment has a design that would allow it to be performed in both a universe with our law of gravity and a universe where more massive objects fell faster, similarly the Bell tests have a design that would allow them to be performed both in our universe apparently governed by QM, and in a universe governed by laws which satisfied my 1) and 2) such as the laws of classical electromagnetism? If you do agree with this, and you also agree with my previous notion that "separable formalism" refers to the possibility of screening off correlations between spacelike separated events, then do you also agree that a universe with laws that satisfy 1) and 2) would be one where it would be possible to screen off correlations between separated events, and thus in this universe the exact same Bell tests could be accurately described using separable formalism?
ThomasT said:
Bell tests are, presumably, measuring an underlying, nonseparable parameter (the entanglement relationship) that doesn't determine individual detection, and that doesn't vary from pair to pair.
You can't make assumptions about the "underlying" reality before running the test, the whole point of the test is to see whether the behavior of entangled electrons is consistent with the idea that the laws of physics are local realistic ones, in which case all probabilities would be "separable" in the sense I discussed above of P(c+|b+, lambda) = P(c+|lambda). If you disagree that this notion of separability automatically follows from the assumption of local realism, please address this question from my previous post:
Do you see anything in my 1) and 2) that goes beyond "classical realism + classical locality"? If so please identify the specific sentence(s) in my statement of 1) and 2) that you think don't follow from these classical assumptions.
If you agree that my 1) and 2) are equivalent to "local realism" but don't see how 1) and 2) automatically entail P(c+|b+, lambda) = P(c+|lambda), I can show you that too, just ask.
ThomasT said:
Yet Bell LR requires that this be expressed in terms of functions which determine individual detection and which vary from pair to pair as λ varies. So I reason that if this is sufficient to skew the predictions away from what one would expect via optics principles
Arrrrrgh you just repeat the same silly claims while completely ignoring the criticisms made...you can't derive Bell inequality violations from "optics principles", I already made that point very clear by repeatedly pointing out that the Bell experiment could be performed in a universe governed by the laws of classical electromagnetism and that in this universe the Bell inequalities would be satisfied. If you have some doubt about this then explain it, but don't just blithely repeat the same claims and pretend the criticisms were never raised.
 
  • #45
ytuab said:
Thanks for the reply.
So it seems that the meaning of the electromagnetic wave packet you mention is almost the same as that of the photon, right?
(more than 50% --- two patterns of pass or reflect)
Yes, almost the same, with the important difference that a photon is always measured to have either passed through or been reflected by a polarizer (though before measurement its wavefunction might split), whereas an electromagnetic wave or wave packet can be split by a polarizer, with some of the energy of the wave passing through and some being reflected.
ytuab said:
For example,
the light intensity that passes through the filter is given by

I = I_0 \cos^2 \theta

where I_0 is the initial intensity, and \theta is the angle between the light's initial polarization direction and the axis of the polarizer.

Suppose that when this transmitted (or reflected) intensity I is above some threshold, the detector can recognize it as one photon. (For example, > 60%)
Your classical electromagnetic wave seems to be different from this meaning?
Well, there are no "photons" in classical electromagnetism, classical electromagnetic waves are infinitely divisible. But I imagined that the detectors were specifically designed to only go off if they received a wave packet with at least 50% of the energy of the original wave packet sent by the source, so that the classical experiment would replicate the same features as the quantum experiment (i.e. you'd always have either detector D+ or D- go off, never both).
ytuab said:
And in the wiki article you quote, the detection efficiency of the photon in the actual experiment is lower than what is needed, right?
Yes, although there have been some experiments involving ions rather than photons that did close the detector efficiency loophole (though they didn't simultaneously close the locality loophole), see here and here (pdf file). And there are a number of papers that predict it will soon be possible to perform experiments which close both the detector efficiency loophole and the locality loophole simultaneously, see here and here.
 
  • #46
Dmitry67 said:
Yes, but why do they choose that particular target?
There are so many things one can try to falsify.
For example, there is very little discussion of alternatives to GR (even Einstein-Cartan GR is discussed very little).
The only similar thing which comes to my mind is a camp of MOND guys...

The claim of non-locality is the main (or best-known) remaining riddle that seems to have direct and huge consequences for our perception of the universe, including ourselves. A variant of GR doesn't pretend to have any such impact; it's just the same but slightly different, rather boring in comparison. :-p
 
  • #47
OK, JesseM.
So I want to return to your earlier opinion that in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment.
(This is the reason why I asked what your electromagnetic wave packet means.)

In the photoelectric effect, the light frequency is related to the energy, and the light intensity is related to the number of emitted photoelectrons.
(This means that we can suppose the light intensity Q is required for one emitted photoelectron. 2Q is needed for two emitted photoelectrons ...)
So we can suppose this minimum intensity Q is equal to more than 60% intensity of the wave packet.
(Because if you use the example of the electromagnetic wave, the intensity is related to the events at the polarizer according to Malus' law.)

JesseM said:
Well, there are no "photons" in classical electromagnetism, classical electromagnetic waves are infinitely divisible. But I imagined that the detectors were specifically designed to only go off if they received a wave packet with at least 50% of the energy of the original wave packet sent by the source, so that the classical experiment would replicate the same features as the quantum experiment (i.e. you'd always have either detector D+ or D- go off, never both).

I agree with you about this point.
And of course, as you say, the case where we detect two photons (D+ and D-) at the same polarizer is meaningless (= the total number detected becomes more than 2 photons (3 or 4 photons)).
The cases that I want to talk about are those of two or fewer photons (at the A and B detectors).

The light intensity that passes through the filter is

I = I_0 \cos^2 \theta

So the remaining reflection intensity is

I = I_0 \sin^2 \theta

(Of course, a little loss exists.)

As I said, there are three patterns (pass (1) and reflect (2), where they are detected because of their sufficient intensity (> Q)).
And when the light (intensity) is divided almost equally at the polarizer (55% + 45%, for example, in the case of nearly 45 degrees in the above equations), neither the pass nor the reflect detector can detect it as a photon (3).

When the two photons (A and B) with parallel polarization axes bump into filters set at the same angle (the angle difference between the two filters \alpha = 0),
do the results (pass or reflect) of the two photons always become the same (\cos^2 \alpha = \cos^2 0 = 1)?
Because when photon A (or B) passes filter A (or B), photon A (or B) always has a polarization axis near enough to the axis of filter A (or B) to reach the intensity detection threshold (> Q) of the detector.
In the case of light divided equally at the polarizer, as I said above, neither the passed nor the reflected light intensity can reach the detection threshold of the detector.
This case will be ignored, but it is very important as an underlying reality.

Sorry, I want to talk about the two-photon case (not the case of ions ...).
Because the ion case uses very artificial conditions such as a Paul trap and a pulsed laser.
(If these artificial manipulations don't exist, the Be+ ion excitation cannot occur, which is required for the entanglement condition.)
 
  • #48
ytuab said:
OK, JesseM.
So I want to return to your earlier opinion that in classical electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment.
(This is the reason why I asked what your electromagnetic wave packet means.)

In the photoelectric effect, the light frequency is related to the energy, and the light intensity is related to the number of emitted photoelectrons.
The photoelectric effect wouldn't work the same way in classical electromagnetism...do you want to discuss what's true in QM, or what would be true of experiments in a purely classical universe? Also, are you actually trying to dispute my claim that "in classicel electromagnetism there would be no violation of Bell inequalities in a Bell-type experiment"? If not, I don't really understand what the point of your discussion of the classical case is supposed to be.
ytuab said:
(This means that we can suppose the light intensity Q is required for one emitted photoelectron.
What's a "photoelectron"? Are you just talking about my idea of using wave packets in classical electromagnetism?
ytuab said:
2Q is needed for two emitted photoelectrons ...)
So we can suppose this minimum intensity Q is equal to more than 60% intensity of the wave packet.
Why 60%? My thought experiment was that the threshold would be 50%, so that for example the D- detector would go off if it received > 50% of the energy of the original wave packet (assume the source always sends out wave packets with a fixed energy), while the D+ detector would go off if it received ≥ 50% of the energy of the original wave packet. In this way it is guaranteed that for every wave packet sent by the source, one and only one of the two detectors will be triggered. Again, my point was to come up with a thought-experiment that replicates all the features of Bell's experiment (except the final results) in a classical universe, please let me know if you agree or disagree that this is possible to do.
ytuab said:
Sorry, I want to talk about the two-photon case (not the case of ions ...).
Because the ion case uses very artificial conditions such as a Paul trap and a pulsed laser.
(If these artificial manipulations don't exist, the Be+ ion excitation cannot occur, which is required for the entanglement condition.)
What does "artificial" mean, and how is it relevant to Bell? In a local realist universe, as long as the experiment has all the basic features Bell outlined, you are guaranteed to satisfy Bell inequalities regardless of other conditions, "artificial" or not. Of course the ion experiments do lack one of the features of Bell's thought-experiment since the two measurements are not actually carried out at a spacelike separation, but unless you posit local realistic laws that specifically exploit the "locality loophole" you won't be able to explain these results with local realistic laws.
 
  • #49
ThomasT said:
I agree that your exposition is essentially correct. Mine was incomplete, and I apologize.
I am glad we cleared that discord.

ThomasT said:
I want to emphasize that experiments are testing formalisms, and that the formalisms can't, scientifically, be definitively associated with any conception of a reality that's beyond our sensory experience.
I changed emphasis in your statement. And I agree that you can't conclusively associate formalism with conception of reality. But as we test more and more this association from different sides and with different methods we acquire more certainty in this association.

Another side is that the construction of experiments relies on previously tested formalism (that hopefully has been tested thoroughly). Even what you perceive as your own visual sense is actually a reality model constructed by your brain. You don't "see" the light, you "see" the interpretation of that light constructed by your brain. For example you can't "see" your blind spot (http://en.wikipedia.org/wiki/Blind_spot_%28vision%29) directly. And we are relying on that interpretation. We test it with other senses and we acquire strong confidence in that interpretation.

ThomasT said:
Bell compared two competing formalisms, standard qm and LR-supplemented/interpreted standard qm, and proved that they're incompatible. An experimental test of Bell's theorem entails the construction of an inequality based on the specific design and preparation of the test. It provides a quantitative measure of the compatibility of each of the competing formalisms with that experiment, as well as between the competing formalisms for that experiment.
No, Bell provides a quantitative measure for local realism only. For QM there is only the qualitative, non-falsifiable prediction that it can violate LR inequalities.
And that is one of the problems - all these experiments try to test local realism but they don't test falsifiable predictions of QM. However they are presented as scientific tests of QM.
And that is just sick.

ThomasT said:
Wrt a Bell experiment where the efficiency/detection loophole isn't closed (all of them, afaik), and the basis for adoption of the fair sampling or no enhancement assumptions isn't scientifically demonstrated in that experiment (all of them, afaik), then the experiment allows a possible flaw wrt the testing of the competing formalisms based on an inequality constructed on those assumptions.

So, we might rewrite your exposition as:

Scientific method requires that experiment can falsify hypothesis to be tested.
So we should have three possible outcomes of experiment:
1. Experiment is not flawed and results agree with formal hypothesis.
2. Experiment is not flawed and results disagree with formal hypothesis.
3. Experiment is flawed because formal hypothesis is based on assumptions which haven't been scientifically demonstrated to hold for that experiment, or for some other reason.
A prediction is made for a certain hypothesis that uses a certain assumption. Then this particular hypothesis is falsified with an experiment.
We can make a different hypothesis without that assumption. That will require a different experiment with additional requirements.

But Bell experiments try to falsify some hypothetical hypothesis even before it's made.
What for?
Because otherwise mainstream theory looks crappy? And everybody will look for alternatives?
 
  • #50
Dmitry67 said:
There are no "definitive" experiments; all experiments uses hardware which does not work with 100% efficiency or precision.

However, nobody is trying to disprove, say, SR based on that. The motivation of LR guys is a mystery for me.
Who is trying to disprove QM based on the efficiency loophole?

Dmitry67 said:
In any case, what efficiency level should be reached so there won't be any room for LR?
To reach something you have to move in that direction.
Do you know of any photon Bell experiment that tests different efficiency levels?
Detection efficiency (coincident detection rate to single detection rate) is very often not reported at all in papers about Bell experiments.
 
  • #51
zonde said:
And that is one of the problems - all these experiments try to test local realism but they don't test falsifiable predictions of QM. However they are presented as scientific tests of QM.
And that is just sick.

This is incorrect. They absolutely test a falsifiable prediction of QM as well! That prediction being the cos^2(theta) relationship. The EPR paper contemplated the idea that QM was not complete. Please recall that Bell says that if QM is incorrect, then the Bell Inequality is respected and the cos^2 relationship is wrong. In fact, there are local realistic models in which QM and LR yield different predictions for this relationship. In such, usually the LR model is linear.
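One concrete version of that difference (illustrative numbers, for the usual polarization-entangled state): QM predicts a correlation E(\theta) = \cos 2\theta as a function of the angle difference \theta between the analyzers, while the simple deterministic threshold model discussed earlier in the thread gives the linear form E(\theta) = 1 - 4\theta/\pi. The two agree at 0°, 45° and 90° but differ in between; at \theta = 22.5°, for instance, the linear model gives 0.5 while QM gives \cos 45° ≈ 0.71, and it is precisely these intermediate angles that a CHSH test probes.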
 
  • #52
DrChinese said:
This is incorrect. They absolutely test a falsifiable prediction of QM as well! That prediction being the cos^2(theta) relationship.
Yes, the cos^2(theta) relationship is a falsifiable prediction.
So can you give a reference to some experiment that does a scientific test of this relationship and which you would prefer as an example?
 
  • #53
zonde said:
Yes, the cos^2(theta) relationship is a falsifiable prediction.
So can you give a reference to some experiment that does a scientific test of this relationship and which you would prefer as an example?

One of many I could cite:

http://arxiv.org/abs/quant-ph/9810080
Violation of Bell's inequality under strict Einstein locality conditions (1998)
Authors: Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger

"Quantum theory predicts a sinusoidal dependence for the coincidence rate Cqm++(A , B ) ∝ sin2(B − A ) on the difference angle of the analyzer directions in Alice’s and Bob’s experiments. ... Thus, because the visibility of the perfect correlations in our experiment was about 97% we expect S to be not higher than 2.74 if alignment of all angles is perfect and all detectors are equally efficient. ... A typical observed value of the function S in such a measurement was S = 2.73±0.02 for 14700 coincidence events collected in 10 s. ... Our results confirm the quantum theoretical predictions..."

I would say the above description is fairly typical, and I did not include the portion in which local realistic predictions are calculated and then falsified. My point being that QM makes a specific prediction different than LR. The QM prediction would be falsified if the LR value was seen - or in fact if any other value than the QM prediction was seen. So QM is tested.
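To spell out the arithmetic behind the quoted numbers: the ideal quantum prediction for the CHSH quantity is S = 2\sqrt{2} ≈ 2.83, and a visibility of 97% scales this to roughly 0.97 × 2\sqrt{2} ≈ 2.74, which is why the authors expect S no higher than 2.74 and regard the measured 2.73 ± 0.02 as agreeing with quantum theory while comfortably exceeding the local realist bound of 2.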
 
  • #54
JesseM said:
What does "separable formalism" mean? You have a habit of not answering direct questions I ask you, which is frustrating. In my previous post I asked about the meaning of the similar phrase "parameter separability":

Can you please tell me if by "separable formalism" you just mean this idea that we can find local variables lambda in Alice's region that screen off the correlation between Alice's result with setting c and Bob's result with setting b, i.e. P(c+|b+, lambda) = P(c+|lambda)?

Not answering for ThomasT but just to chime in on what "parameter separability" means. Given an expression such as

ab + bc < ac

Separability allows me to rearrange the terms at will in the expression. I can factor out b on the LHS and treat each of the parameters as a standalone variable.

Note that this cannot be done if our parameters are not commutative. In other words, if the value of a when it occurs together with b is not the same as the value of a when it occurs with c, then we cannot factor at will. The parameters will not be separable either, and therefore each term in the inequality (ie "ab", "bc", "ac") is a single indivisible whole which must be treated as such.

What has this got to do with Bell?
Bell derives his inequality by making use of the ability to factorize the terms at will. This introduces a separability requirement. If you are in doubt about this, see his derivation starting at equation 14. He introduces a P(a,c) term which he subtracts from a P(a,b) term, and by factorization and rearrangement he obtains a P(b,c) term. The fact that the P(b,c) term pops out from the P(a,b) and P(a,c) terms affirms this point.

What has this got to do with QM?
P(a,b) from QM does not commute with P(b,c), nor with P(a,c). So off the bat we have a problem, before we can even do any QM calculations, as those terms will not be compatible with Bell's inequality.

What about the experiments?
P(a,b) from one run of the experiment does not commute with P(b,c) nor with P(a,c) from a different run of the experiment either. That is what QM has been telling us all along! For those whose concept of reality involves rigid pre-existing properties which are passively revealed in Bell-type experiments it will be difficult to see how this is possible. All you need is for the parameters being measured to be contextual. Which simply means that a pre-existing property of the particles combines with a property of the device to reveal the outcome of an experiment.

Yet some may exclaim that if the value of 'a' in combination with 'b' is different from the value of 'a' in combination with 'c', it means there is spooky action between "setting a" and "setting b". That is certainly the naive interpretation, since all that is required is for the process which produces the particle pairs to be non-stationary (http://en.wikipedia.org/wiki/Stationary_process).

Therefore Bell's theorem is misstated, in my opinion. It would be better stated as:

Non-commuting expectation values are not compatible with Bell's inequalities
Or
Non-separable expectation values are not compatible with Bell's inequalities
Or
You can not eat your cake and have it

Which would have been stating the obvious if not for all the noise surrounding Bell's theorem.
 
  • #55
billschnieder said:
Not answering for ThomasT but just to chime in on what "parameter separability" means. Given an expression such as

ab + bc < ac

Separability allows me to rearrange the terms at will in the expression. I can factor out b on the LHS and treat each of the parameters as a standalone variable.

Note that this cannot be done if our parameters are not commutative. In other words, if the value of a when it occurs together with b is not the same as the value of a when it occurs with c, then we cannot factor at will. The parameters will not be separable either, and therefore each term in the inequality (ie "ab", "bc", "ac") is a single indivisible whole which must be treated as such.

What has this got to do with Bell?
Bell derives his inequality by making use of the ability to factorize the terms at will. This introduces a separability requirement. If you are in doubt about this, see his derivation starting at equation 14. He introduces a P(a,c) term which he subtracts from a P(a,b) term, and by factorization and rearrangement he obtains a P(b,c) term. The fact that the P(b,c) term pops out from the P(a,b) and P(a,c) terms affirms this point.

What has this got to do with QM?
P(a,b) from QM does not commute with P(b,c), nor with P(a,c). So off the bat we have a problem, before we can even do any QM calculations, as those terms will not be compatible with Bell's inequality.
In that derivation Bell is not trying to show what's true in QM, he's showing what would necessarily be true in this experiment under a local realist theory (assuming the local realist theory meets the condition that when the experimenters both choose the same detector setting they are guaranteed to get opposite results, the condition expressed in equation 13), and then showing that this is incompatible with QM's predictions about the same experiment. His derivation in equations 14-15 in this paper is about what would be true in a local realist theory (of the type I discussed in post #31).
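For readers without the paper at hand, the steps around equations 14-15 (which assume perfect anticorrelation at equal settings) lead to Bell's original inequality, quoted here in its standard form rather than verbatim:

1 + P(b, c) \ge |P(a, b) - P(a, c)|

where P(a, b) is the local realist expectation value \int d\lambda \, \rho(\lambda) \, A(a, \lambda) B(b, \lambda) of the product of the two ±1 outcomes.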

That said I'm still not really clear on what you mean by "separability"--"Separability allows me to rearrange the terms at will in the expression" is a bit vague, and the relation of this to commuting/non-commuting is also unclear, the notion of commuting or not commuting is usually applied to measurement operators, not expectation values. Position x, momentum p and energy E don't all mutually commute, but if you are interested in the expectation values P(x), P(p) and P(E) for a single state vector, then if you had some expression like P(x)*P(p) + P(p)*P(E) < P(x)*P(E), you could certainly factor out P(p) from the left hand side, for any specific state vector the three expectation values will all just be real numbers with fixed values after all. Non-commuting would imply that if you took a state vector V and then applied the position operator resulting in a collapse to a position eigenvector Vx, then immediately applied the momentum operator to Vx and looked at the expectation value for momentum, this would be different than if you had first applied the momentum operator to V and then immediately applied the position operator.

So, if you had the following:

[P(x)*P(p) for a position measurement followed by a momentum measurement] + [P(p)*P(E) for a momentum measurement followed by an energy measurement] < [P(x)*P(E) for a position measurement followed by an energy measurement]

...then in that case the non-commutativity would mean you could no longer factor P(p) out of the left hand side because the expectation value for momentum would depend if it was measured first as in P(p)*P(E) or second as in P(x)*P(p). Not clear on how this relates to an inequality featuring expectation values for P(a,b), P(b,c) and P(a,c) though, might help if you wrote it out in the same explicit form as I did above.
 
  • #56
DrChinese said:
One of many I could cite:

http://arxiv.org/abs/quant-ph/9810080
Violation of Bell's inequality under strict Einstein locality conditions (1998)
Authors: Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger
Yes, very good experiment.

DrChinese said:
"Quantum theory predicts a sinusoidal dependence for the coincidence rate Cqm++(A , B ) ∝ sin2(B − A ) on the difference angle of the analyzer directions in Alice’s and Bob’s experiments. ... Thus, because the visibility of the perfect correlations in our experiment was about 97% we expect S to be not higher than 2.74 if alignment of all angles is perfect and all detectors are equally efficient. ... A typical observed value of the function S in such a measurement was S = 2.73±0.02 for 14700 coincidence events collected in 10 s. ... Our results confirm the quantum theoretical predictions..."

I would say the above description is fairly typical, and I did not include the portion in which local realistic predictions are calculated and then falsified. My point being that QM makes a specific prediction different than LR. The QM prediction would be falsified if the LR value was seen - or in fact if any other value than the QM prediction was seen. So QM is tested.
We were talking about testing of the cos^2(theta) relationship. So let's keep to that.
From the paper:
"A nonlinear χ^2 fit showed perfect agreement with the sine curve predicted by quantum theory."
That is about as far as it goes with respect to cos^2(theta) testing.

In order to consider this experiment a scientific test of the cos^2(theta) relationship, there should be some necessary condition for that test. Then if that necessary condition does not hold, we can say that the test falsified the prediction.

If the result of the cos^2(theta) test is formulated as "the experimenter's opinion is that the fit is good", we don't call this a scientific test, do we? The experimenter's opinion can't be this necessary condition if we talk about scientific tests.

So I say that only the Bell inequalities are tested scientifically in this experiment. But not the cos^2(theta) relationship.
 
  • #57
billschnieder said:
...

Separability allows me to rearrange the terms at will in the expression. I can factor out b on the LHS and treat each of the parameters as a standalone variable.

Note that this cannot be done if our parameters are not commutative. In other words, if the value of a when it occurs together with b is not the same as the value of a when it occurs with c, then we cannot factor at will. The parameters will not be separable either, and therefore each term in the inequality (ie "ab", "bc", "ac") is a single indivisible whole which must be treated as such.

What has this got to do with Bell?
Bell derives his inequality by making use of the ability to factorize the terms at will. This introduces a separability requirement. If you are in doubt about this, see his derivation starting at equation 14. He introduces a P(a,c) term which he subtracts from a P(a,b) term, and by factorization and rearrangement he obtains a P(b,c) term. The fact that the P(b,c) term pops out from the P(a,b) and P(a,c) terms affirms this point.

What has this got to do with QM?
P(a,b) from QM does not commute with P(b,c), nor with P(a,c). So off the bat we have a problem, before we can even do any QM calculations, as those terms will not be compatible with Bell's inequality.

What about the experiments?
P(a,b) from one run of the experiment does not commute with P(b,c) nor with P(a,c) from a different run of the experiment either. That is what QM has been telling us all along! For those whose concept of reality involves rigid pre-existing properties which are passively revealed in Bell-type experiments it will be difficult to see how this is possible. All you need is for the parameters being measured to be contextual. Which simply means that a pre-existing property of the particles combines with a property of the device to reveal the outcome of an experiment.

...

Wrong, as per usual. I will repeat what I have said numerous times before: your statements represent your personal theories about Bell, which are completely at variance with the scientific community at large. Other readers may not be aware that you are pushing your personal opinions and not good science.

It is the assertion of the Local Realist that there is no dependence of a measurement here on a result there (separability/locality), which essentially denies entanglement exists as a physical state. It really wouldn't matter in that statement whether QM says this or says that. Further, the Local Realist says that there exist values for unobserved measurement settings (realism) for a particle, regardless of whether measuring one setting commutes with the measurement of another. That's all you really need to get Bell's Theorem. Of course, you would also want to know the QM expectation value for comparative purposes, tying back to EPR.

If you think that non-commutativity is relevant in EPR setups (with entangled pairs which don't commute), then I would say you reject Local Realism prima facie. And that same conclusion follows from accepting QM as "complete". Like most Local Realists, you want to have your cake (LR) and eat it too (QM). Bell does not allow this.
 
  • #58
zonde said:
So I say that only the Bell inequalities are tested scientifically in this experiment. But not the cos^2(theta) relationship.

And I would say the authors of the paper would laugh their heads off if they read that. :biggrin:

Since Bell Inequalities come from that relationship. As do the perfect correlations that the paper mentions.
 
  • #59
DrChinese said:
And I would say the authors of the paper would laugh their heads off if they read that. :biggrin:

Since Bell Inequalities come from that relationship. As do the perfect correlations that the paper mentions.

"Perfect" correlations don't exist in scientific measurements - scientific measurements work with measurement errors. :-p
 
  • #60
DrChinese said:
[...] It is the assertion of the Local Realist that there is no dependence of a measurement here on a result there (separability/locality), which essentially denies entanglement exists as a physical state. [..]
Further, the Local Realist says that there exist values for unobserved measurement settings (realism) for a particle, regardless of whether measuring one setting commutes with the measurement of another. That's all you really need to get Bell's Theorem. [..]

Thanks for the summary, but those assertions are a little (too) extreme.
I would think that it is the assertion of the Local Realist that there is no magical dependence of a measurement here on a result there (separability/locality). Influences at a distance according to known or not yet known physical mechanisms are admitted. However I agree that that does essentially deny physical entanglement at a great distance. [..] Further, a Local Realist assumes that already before the measurement one or more unobserved particle variables exist that will affect the values that will be measured. And I suppose that Bell's theorem is meant to apply to such local realism.
 
