Spin difference between entangled and non-entangled

  • #101
atyy said:
I see. But perhaps the definition of "realism" is debated? I suspect Norsen would consider Many-Worlds to be realistic, whereas I think you would not?

Not sure about that (MWI). But it would not be fair to say the definition of "realism" is debated so much as it is distorted. Norsen's position is quite clear in his paper "Against 'Realism'":

http://arxiv.org/abs/quant-ph/0607057

His non-mainstream position is obvious from the extract:

"Carefully surveying several possible meanings, we argue that all of them are flawed in one way or another as attempts to point out a second premise (in addition to locality) on which the Bell inequalities rest... We thus suggest that the phrase `local realism' should be banned from future discussions of these issues, and urge physicists to revisit the foundational questions behind Bell's Theorem."

He and I have discussed this ad infinitum; in fact, our discussions may have spurred him to write that paper. :) Norsen is regarded as brilliant in the area, but has been harshly reviewed by Shimony and others. I would be happy to refute him any day of the week, as it is not that hard.
 
  • #102
DrChinese said:
Locality is all that this article indicates is at the root of Bell, which is a denial of the role of realism.

Well what is the role of "realism"? I know that the default catchphrase many authors use is that Bell's theorem rules out "local realism", but I've never seen a good explanation of what "realism" actually means in this context or why it's a necessary part of the argument. If it just means measurement outcomes in an experiment being predetermined, then that's not necessary as an assumption in order to derive Bell inequalities (Bell himself was quite explicit about this in later essays, starting at least as early as the mid 1970s).
That is consistent for Norsen (I am quite sure he wrote most of the historical part, as I am well familiar with his writing style). In his mind, violation of a Bell Inequality equates to a proof of non-locality. That view is generally rejected by the community in favor of one in which realism may alternatively (or additionally) be rejected. You will find few in the scientific community who advocate a realistic view of QM regardless of the locality issue.

I'm not aware of such a consensus, at least among people who actually do research on the topic. There's certainly some disagreement on the terminology and the finer points of what Bell's theorem is about, but as far as I'm aware, Norsen's expositions on Bell's theorem are well known and at least reasonably well regarded in the community. I'm certainly not aware of any overwhelming consensus that "realism", "determinism", "counterfactual definiteness", etc., is a necessary or important ingredient in Bell's theorem. For instance, the terminology that I'm most familiar with is that the correlations that satisfy Bell inequalities are just called the "local set" or the "local polytope".
 
  • #103
wle said:
Well what is the role of "realism"? I know that the default catchphrase many authors use is that Bell's theorem rules out "local realism", but I've never seen a good explanation of what "realism" actually means in this context or why it's a necessary part of the argument.

How many quotes from folks like Aspect and Zeilinger would it take to convince you that "local realism" is what is ruled out by Bell? EPR is all about realism (defined as simultaneous elements of reality there, as well as by Bell). Locality is an afterthought to EPR, as they assumed there would be no spooky action at a distance. As to why it is necessary to the Bell argument, simply look after Bell's (14) and you will see realism introduced as an assumption (let c be a unit vector...).

Honestly, I was asked my opinion and gave it. After hours of discussing this with Travis, I am not likely to change my opinion any more than he is likely to change his. If we want to continue this discussion, we should do it outside of this thread as I think we have strayed off target.
 
  • #104
DrChinese said:
How many quotes from folks like Aspect and Zeilinger would it take to convince you that "local realism" is what is ruled out by Bell? EPR is all about realism (defined as simultaneous elements of reality there, as well as by Bell). Locality is an afterthought to EPR, as they assumed there would be no spooky action at a distance. As to why it is necessary to the Bell argument, simply look after Bell's (14) and you will see realism introduced as an assumption (let c be a unit vector...).

Honestly, I was asked my opinion and gave it. After hours of discussing this with Travis, I am not likely to change my opinion any more than he is likely to change his. If we want to continue this discussion, we should do it outside of this thread as I think we have strayed off target.

This isn't about me personally convincing you or vice versa. If you want to hold the opinion that Bell's theorem rules out something called "local realism", that's one thing and it can be debated. It's certainly how a lot of physicists and textbooks would describe Bell's theorem. But if you're going to insist that this is how 99% of theorists working in the field today would explain Bell's theorem and Norsen represents a 1% anti-mainstream fringe stance, then that's not my impression based on my exposure to what's going on in the field. For instance, there was a review article published on the topic earlier this year [1] (which, incidentally, I'd recommend to anyone looking for a modern overview of the field) that hardly mentions realism at all.

[1] N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, "Bell nonlocality", Rev. Mod. Phys. 86, 419 (2014), arXiv:1303.2849 [quant-ph].
 
  • #105
wle said:
[1] N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, "Bell nonlocality", Rev. Mod. Phys. 86, 419 (2014), arXiv:1303.2849 [quant-ph].

You are correct that "realism" is not mentioned. This definitely follows Norsen's reasoning. I am surprised to see Cavalcanti in the list of authors, as he had recently written about "local realism" in the same vein as I. So you may be correct that the tide has changed.
 
  • #106
What exactly makes QM compatible with non-locality or non-realism? Is there an example of QM non-locality or non-realism before or other than Bell test experiments and inequalities?
 
  • #107
Alien8 said:
What exactly makes QM compatible with non-locality or non-realism? Is there an example of QM non-locality or non-realism before or other than Bell test experiments and inequalities?

Quantum systems are not always localized as point particles. Entangled pairs are but one example of that. That alone causes locality to be a suspect idea.

Quantum systems obey the HUP. That alone causes realism to be a suspect idea, since non-commuting observables do not seem to have definite values at all times.

This was known in 1935, but the full implications were not clear at that time.
 
  • #108
DrChinese said:
Quantum systems are not always localized as point particles. Entangled pairs are but one example of that. That alone causes locality to be a suspect idea.

I understand that the position of quantum particles is given as a probability function or an average in the QM equations, but what does that have to do with an interaction or connection between two particles over a distance? I see that entangled pairs are an example of non-locality, but can you name any other example?

Quantum systems obey the HUP. That alone causes realism to be a suspect idea, since non-commuting observables do not seem to have definite values at all times.

I can see a connection between the uncertainty principle and the idea that it might be due to reality actually being undefined, but I don't see how that can explain non-locality.
 
  • #109
DrChinese said:
You are correct that "realism" is not mentioned. This definitely follows Norsen's reasoning. I am surprised to see Cavalcanti in the list of authors, as he had recently written about "local realism" in the same vein as I. So you may be correct that the tide has changed.
[1] N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, "Bell nonlocality", Rev. Mod. Phys. 86, 419 (2014), arXiv:1303.2849 [quant-ph].

You are right that realism is not mentioned. But their definition of locality seems to apply to, and be interchangeable with, realism. From the top of page 3: "Let us formalize the idea of a local theory more precisely: the assumption of locality implies that we should be able to identify a set of past factors, described by some variables ##\lambda##, having a joint causal influence on both outcomes."
 
  • #110
Alien8 said:
I understand that the position of quantum particles is given as a probability function or an average in the QM equations, but what does that have to do with an interaction or connection between two particles over a distance? I see that entangled pairs are an example of non-locality, but can you name any other example?
It is impossible to assign a precise position to any particle, and it is rather difficult to define "locality" without talking about the positions of the particles involved.
 
  • #111
wle said:
Well what is the role of "realism"? I know that the default catchphrase many authors use is that Bell's theorem rules out "local realism", but I've never seen a good explanation of what "realism" actually means in this context or why it's a necessary part of the argument. If it just means measurement outcomes in an experiment being predetermined, then that's not necessary as an assumption in order to derive Bell inequalities (Bell himself was quite explicit about this in later essays, starting at least as early as the mid 1970s).
Actually, even Norsen himself argues in his paper that a particular notion of 'realism' is required for Bell's theorem; that is, the notion of "metaphysical realism" or the existence of an external world “out there” whose existence and identity is independent of anyone’s awareness:
So it should not be surprising that Bell’s Theorem (a specific instance of, among other things, using certain words with their ordinary meanings) rests on Metaphysical Realism. This manifests itself most clearly in Bell’s use of the symbol λ to refer to a (candidate theory’s) complete description of the state of the relevant physical system – a usage which obviously presupposes the real existence of the physical system possessing some particular set of features that are supposed to be described in the theory. Putting it negatively, without Metaphysical Realism, there can be no Bell’s theorem. Metaphysical Realism can (thus) be thought of as a premise that is needed in order to arrive at a Bell-type inequality.

And so it seems we may have finally discovered the meaning of the ‘realism’ in ‘local realism’. One cannot, as suggested earlier, derive a Bell-type inequality from the assumption of Locality alone; one needs in addition this particular Realism assumption. This therefore explains the ‘local realism’ terminology and explains precisely the nature of the two assumptions we are entitled to choose between in the face of the empirical violations of Bell’s inequality. On this interpretation, we must either reject Locality or reject Metaphysical Realism.
http://arxiv.org/pdf/quant-ph/0607057v2.pdf
 
  • #112
I think "metaphysical realism" is not what most people have in mind when they say "local realism". Most people mean counterfactual definiteness. Here is an explanation by Gill, but with a hint of why this may be a subtle issue: "Instead of assuming quantum mechanics and deriving counterfactual deniteness, Bell turned the EPR argument on its head. He assumes three principles which Einstein would have endorsed anyway, and uses them to get a contradiction with quantum mechanics; and the first is counterfactual deniteness. We must first agree that though, say, only A and B are actually measured in one particular run, still, in a mathematical sense, A' and B' also exist (or at least may be constructed) alongside of the other two; and moreover they may be thought to be located in space and time just where one would imagine." http://arxiv.org/abs/1207.5103

Bell's theorem only assumes that A' and B' "may be constructed". Then the question is whether one wants to go from "may be constructed" to terms like "exist", "counterfactual definiteness" and "realism".

Scarani makes a similar comment: "Therefore LV statistics can always be explained by a deterministic model. Of course, this does not mean that such an explanation must necessarily be adopted: your favorite explanation, as well as the “real” phenomenon, may not involve determinism. For instance, as we shall see soon, measurement on separable quantum states leads to LV statistics, but this does not make quantum theory deterministic (if that is your favorite explanation), nor forces us to believe that the physical phenomenon “out there” is deterministic." http://arxiv.org/abs/1303.3081
 
  • #113
atyy said:
Bell's theorem only assumes that A' and B' "may be constructed". Then the question is whether one wants to go from "may be constructed" to terms like "exist", "counterfactual definiteness" and "realism".

To me, the locality requirement is tied up with the requirement that A and B are separable.

The realism requirement is the requirement that there is a counterfactual C in addition to A and B which can be measured. You need that too for Bell, and it is introduced just after his (14): "Let c be a unit vector..." This assumption was originally made explicit in EPR, which says that it is not reasonable to require each element of reality to be predictable simultaneously. So you aren't REQUIRED to accept that, but if you do, that's your "realism". Bell built on that by picking his a/b/c and saying: a and b are separable, b and c are separable, and a and c are separable.

You can't get the Bell result without a counterfactual to go with the 2 you actually measure. And that part has nothing to do with locality.
 
  • #114
DrChinese said:
The realism requirement is the requirement that there is a counterfactual C in addition to A and B which can be measured.

Yes, I think everyone agrees that a violation of the Bell inequalities is incompatible with a theory that has a local deterministic explanation. I think everyone would also agree that a deterministic theory can be written in a counterfactual definite way.

DrChinese said:
So you aren't REQUIRED to accept that, but if you do, that's your "realism".

I think that is the question. For example, Bell's theorem can be used to rule out an unentangled state, but not everyone would be comfortable with saying that an unentangled state has to be real, because otherwise we can't apply Bell's theorem to it.

Maybe an analogy is that for a free Gaussian wave function, the results of experiments on position and momentum are consistent with particles that had definite position and momentum at all times. However, I am not comfortable from within Copenhagen saying that this means that the particles described by a free Gaussian wave function had real trajectories with definite position and momentum.
 
  • #115
atyy said:
For example, Bell's theorem can be used to rule out an unentangled state, but not everyone would be comfortable with saying that an unentangled state has to be real, because otherwise we can't apply Bell's theorem to it.

Maybe an analogy is that for a free Gaussian wave function, the results of experiments on position and momentum are consistent with particles that had definite position and momentum at all times. However, I am not comfortable from within Copenhagen saying that this means that the particles described by a free Gaussian wave function had real trajectories with definite position and momentum.

I am not asserting anything (and certainly not trajectories) is real or realistic outside of what can be predicted with certainty. I am simply saying that is one of the 2 key Bell assumptions: locality and realism (as I showed). As far as I am concerned, you could say both are contradicted by QM/Bell tests.
 
  • #116
DrChinese said:
I am not asserting anything (and certainly not trajectories) is real or realistic outside of what can be predicted with certainty. I am simply saying that is one of the 2 key Bell assumptions: locality and realism (as I showed). As far as I am concerned, you could say both are contradicted by QM/Bell tests.

How about if I have a pure state, say two unentangled photons of the same definite vertical polarization (0##^{\circ}##)? If both polarizers are set vertical (0##^{\circ}##) or horizontal (90##^{\circ}##), then each photon will definitely pass or not pass. But in a Bell test, the polarizer angles used may be 0##^{\circ}##, -45##^{\circ}##, and 22.5##^{\circ}##, so not all angles have results that are predicted with certainty. Would you consider this to be a state that is excluded by a Bell inequality violation?
 
  • #117
atyy said:
How about if I have a pure state, say two unentangled photons of the same definite vertical polarization (0##^{\circ}##)? If both polarizers are set vertical (0##^{\circ}##) or horizontal (90##^{\circ}##), then each photon will definitely pass or not pass. But in a Bell test, the polarizer angles used may be 0##^{\circ}##, -45##^{\circ}##, and 22.5##^{\circ}##, so not all angles have results that are predicted with certainty. Would you consider this to be a state that is excluded by a Bell inequality violation?

Not sure if we are on different sides of this or not. :)

EPR says something is real (an "element of reality") if it can be predicted with certainty without previously disturbing it. That definition is used by them as a building block to conclude QM is incomplete. Bell used that same idea, along with the assumption that the elements of reality not be required to be simultaneously predictable, to conclude that you could not match the QM predictions.

My own viewpoint is that reality is shaped by the nature of the observation, and therefore we do not exist in an objective reality. So going back to your question, the Bell Inequality does not apply because I don't assert the existence of a reality independent of observation. Any observation predictable with certainty is merely redundant. Everything else is up to chance.
 
  • #118
DrChinese said:
Not sure if we are on different sides of this or not. :)

EPR says something is real (an "element of reality") if it can be predicted with certainty without previously disturbing it. That definition is used by them as a building block to conclude QM is incomplete. Bell used that same idea, along with the assumption that the elements of reality not be required to be simultaneously predictable, to conclude that you could not match the QM predictions.

My own viewpoint is that reality is shaped by the nature of the observation, and therefore we do not exist in an objective reality. So going back to your question, the Bell Inequality does not apply because I don't assert the existence of a reality independent of observation. Any observation predictable with certainty is merely redundant. Everything else is up to chance.

I think we are on different "sides" of a circle. :D

Unlike you, I would say the Bell inequality applies, because the inequality holds as long as the counterfactuals exist "in principle" in the sense that they "can be constructed", even if they don't exist "in reality". So for the two unentangled photons with the same definite vertical polarization, I would say that they are excluded by a Bell violation, because the counterfactuals exist "in principle", even though they may not exist "in reality".

Maybe this is why the Brunner et al review doesn't use "local realism": because they wish to use the violation of the Bell inequalities to also certify things like entanglement, i.e., they want to be able to consider quantum states as the hidden variable ##\lambda##.
 
  • #119
atyy said:
How about if I have a pure state, say two unentangled photons of the same definite vertical polarization (0##^{\circ}##)? If both polarizers are set vertical (0##^{\circ}##) or horizontal (90##^{\circ}##), then each photon will definitely pass or not pass. But in a Bell test, the polarizer angles used may be 0##^{\circ}##, -45##^{\circ}##, and 22.5##^{\circ}##, so not all angles have results that are predicted with certainty. Would you consider this to be a state that is excluded by a Bell inequality violation?

Separable states don't lead to a Bell violation. The most general situation in quantum mechanics is that the two parties in a Bell-type experiment (Alice and Bob) can perform POVM measurements on a shared mixed state. Unentangled mixed states are generally defined as those that can be decomposed in the form $$\rho_{\mathrm{AB}} = \sum_{\lambda} p_{\lambda} \rho_{\mathrm{A}}^{(\lambda)} \otimes \rho_{\mathrm{B}}^{(\lambda)} \,,$$
in which ##p_{\lambda}## are a set of probability coefficients and ##\rho_{\mathrm{A}}^{(\lambda)}## and ##\rho_{\mathrm{B}}^{(\lambda)}## are density operators defined on Alice's and Bob's Hilbert spaces respectively. If Alice has a set ##\{M_{a}^{(x)}\}## of POVM measurements she can perform (indicated by an index ##x## denoting the choice of measurement, with the index ##a## indicating the result) and Bob similarly can perform the set ##\{N_{b}^{(y)}\}## of POVMs, then the joint probabilities predicted by quantum mechanics just reduce to the definition of a local model: $$\begin{eqnarray}
P(ab \mid xy) &=& \mathrm{Tr} \bigl[ M_{a}^{(x)} \otimes N_{b}^{(y)} \rho_{\mathrm{AB}} \bigr] \\
&=& \sum_{\lambda} p_{\lambda} \, \mathrm{Tr} \bigl[\bigl( M_{a}^{(x)} \otimes N_{b}^{(y)} \bigr) \, \bigl( \rho_{\mathrm{A}}^{(\lambda)} \otimes \rho_{\mathrm{B}}^{(\lambda)} \bigr) \bigr] \\
&=& \sum_{\lambda} p_{\lambda} \, \mathrm{Tr}_{\mathrm{A}} \bigl[ M_{a}^{(x)} \rho_{\mathrm{A}}^{(\lambda)} \bigr] \, \mathrm{Tr}_{\mathrm{B}} \bigl[ N_{b}^{(y)} \rho_{\mathrm{B}}^{(\lambda)} \bigr] \\
&=& \sum_{\lambda} p_{\lambda} \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \,,
\end{eqnarray}$$ with ##P_{\mathrm{A}}(a \mid x; \lambda) = \mathrm{Tr}_{\mathrm{A}} \bigl[ M_{a}^{(x)} \rho_{\mathrm{A}}^{(\lambda)} \bigr]## and ##P_{\mathrm{B}}(b \mid y; \lambda) = \mathrm{Tr}_{\mathrm{B}} \bigl[ N_{b}^{(y)} \rho_{\mathrm{B}}^{(\lambda)} \bigr]## according to the Born rule. So for nonentangled states, you always trivially have a local model that makes the same predictions as quantum mechanics which, of course, won't violate any Bell inequality.

An unentangled pure state is just the special case of a density operator of the form ##\rho_{\mathrm{AB}} = \lvert \psi \rangle \langle \psi \rvert_{\mathrm{A}} \otimes \lvert \phi \rangle \langle \phi \rvert_{\mathrm{B}}##. In that case, the quantum predictions factorise completely: $$\begin{eqnarray}
P(ab \mid xy) &=& \langle \psi \rvert M_{a}^{(x)} \lvert \psi \rangle_{\mathrm{A}} \, \langle \phi \rvert N_{b}^{(y)} \lvert \phi \rangle_{\mathrm{B}} \\
&=& P_{\mathrm{A}}(a \mid x) \, P_{\mathrm{B}}(b \mid y) \,.
\end{eqnarray}$$

As far as I know, the converse isn't so clear. Specifically, I think it's known that all entangled pure states can predict a Bell violation, but I don't think it's known for arbitrary entangled mixed states (though this isn't a topic I know much about, so don't quote me on this).

(EDIT: Section III of the review I linked to covers all of this.)
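
As a purely numerical companion to the factorisation above, here is a minimal sketch (assuming Python with numpy; the helper names are my own) that samples random product states and random projective measurements and confirms the CHSH correlator never exceeds the local bound of 2:

```python
import numpy as np

def proj(theta):
    # Projector onto the +1 outcome of an analyser at angle theta (x-z plane).
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

def chsh(state, a0, a1, b0, b1):
    # CHSH correlator S for a two-qubit state vector and four settings.
    def E(ta, tb):
        A = 2 * proj(ta) - np.eye(2)  # +/-1-valued observable for Alice
        B = 2 * proj(tb) - np.eye(2)  # +/-1-valued observable for Bob
        return state @ np.kron(A, B) @ state
    return E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10000):
    t, s = rng.uniform(0, np.pi, 2)
    psi = np.array([np.cos(t), np.sin(t)])  # Alice's qubit
    phi = np.array([np.cos(s), np.sin(s)])  # Bob's qubit
    worst = max(worst, abs(chsh(np.kron(psi, phi), *rng.uniform(0, 2 * np.pi, 4))))

print(worst)  # never exceeds 2, as the factorised local model requires
```

Swapping the product state for an entangled one is what lets ##\lvert S \rvert## climb above 2.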
 
  • #120
wle said:
Separable states don't lead to a Bell violation.

Yes, that was the point of my question. To me, a Bell violation excludes separable states. However, if I understand DrChinese correctly, although we know from QM that a Bell violation excludes separable states, we don't know from "Bell's theorem" that a Bell violation excludes separable states.

The issue is that if counterfactual definiteness "in reality" is an assumption of Bell's theorem, then it doesn't apply to separable states since a separable state like the 2 unentangled photons with the same vertical polarization won't give 100% certain results at more than 2 of the angles used in a Bell test.

On the other hand one cannot say that counterfactual definiteness is not used at all in Bell's theorem. This is because a local variable theory that is excluded by Bell's theorem can be rewritten as a local deterministic theory. So by excluding local deterministic theories, one also excludes local variable theories. So the counterfactual definiteness is there "in principle", although not necessarily "in reality".
 
  • #121
atyy said:
Yes, that was the point of my question. To me, a Bell violation excludes separable states. However, if I understand DrChinese correctly, although we know from QM that a Bell violation excludes separable states, we don't know from "Bell's theorem" that a Bell violation excludes separable states.

The issue is that if counterfactual definiteness "in reality" is an assumption of Bell's theorem, then it doesn't apply to separable states since a separable state like the 2 unentangled photons with the same vertical polarization won't give 100% certain results at more than 2 of the angles used in a Bell test.

On the other hand one cannot say that counterfactual definiteness is not used at all in Bell's theorem. This is because a local variable theory that is excluded by Bell's theorem can be rewritten as a local deterministic theory. So by excluding local deterministic theories, one also excludes local variable theories. So the counterfactual definiteness is there "in principle", although not necessarily "in reality".

That's something that is a little confusing about discussions of Bell's theorem. In most treatments, it is assumed that the local realistic theory is deterministic: that is, in an EPR-type experiment, Alice's result is a deterministic function of her detector settings and the hidden variable ##\lambda##. It's easy enough to allow classical nondeterminism, in the sense that Alice's measurement results could just be probabilistically related to her settings and the value of the hidden variable. But this extra generality doesn't actually do anything; in any classical probabilistic theory, it's always possible to think of the nondeterminism as arising from ignorance about the details of the initial state. It's always consistent to assume that the underlying theory is deterministic. So if QM is inconsistent with a deterministic local theory, then it's also inconsistent with a nondeterministic local theory.
 
  • #122
atyy said:
Yes, that was the point of my question. To me, a Bell violation excludes separable states. However, if I understand DrChinese correctly, although we know from QM that a Bell violation excludes separable states, we don't know from "Bell's theorem" that a Bell violation excludes separable states.

You're conflating what are really two different questions here: 1) the general assumptions necessary to derive Bell-type inequalities, and 2) what resources, according to quantum mechanics, are needed to exhibit a Bell violation.

For 1), Bell inequalities can be derived from the factorisation assumption $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda)$$ for joint probability distributions. This is the criterion that the review article I linked to works with and is what Bell called "local causality" in a work called "The Theory of Local Beables" in 1975. In his original 1964 article, Bell in addition did the equivalent of assuming that the probabilities ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are deterministic, i.e., they should only have values 0 and 1. (This may be what you might want to call "counterfactual definiteness", i.e., the results for all possible measurements are predetermined given ##\lambda##.) It's now well known that this isn't necessary and, in fact, it's a fairly simple exercise to show that you can always turn a local stochastic model into a local deterministic one just by adding more hidden variables (the review article gives a short proof in section II.B.1, for instance), so the two are really equivalent.

For 2), in quantum mechanics, outcomes in a Bell-type experiment are a result of performing measurements on a shared quantum state. As I explained in my previous post, it's quite easy to show that if the state is not entangled, the quantum prediction just reduces to the definition of a local model, and you won't get a Bell violation. It's also possible to show that if either Alice's or Bob's measurements are compatible (i.e., they commute), then the quantum prediction likewise reduces to a local model. So in order to produce a Bell violation with a quantum system, you need both entanglement and incompatible (noncommuting) measurements. Neither one alone is sufficient.
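
To make the deterministic case in 1) concrete, here is a brute-force sketch (in Python; purely illustrative, not taken from any of the papers cited) of the CHSH bound for local deterministic models:

```python
from itertools import product

best = 0
for a0, a1, b0, b1 in product([+1, -1], repeat=4):
    # a0, a1: Alice's predetermined outcomes for settings x = 0, 1;
    # b0, b1: Bob's predetermined outcomes for settings y = 0, 1.
    S = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(S))

print(best)  # 2: no deterministic assignment exceeds the local bound
```

Since a local stochastic model is a convex mixture of such deterministic assignments (absorb the randomness into ##\lambda##), the bound ##\lvert S \rvert \le 2## carries over to the whole local set.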

atyy said:
On the other hand one cannot say that counterfactual definiteness is not used at all in Bell's theorem. This is because a local variable theory that is excluded by Bell's theorem can be rewritten as a local deterministic theory. So by excluding local deterministic theories, one also excludes local variable theories. So the counterfactual definiteness is there "in principle", although not necessarily "in reality".

I'm not sure I agree with this. It's known that local stochastic and local deterministic models can account for exactly the same correlations. So for the purpose of deriving Bell inequalities, that means it's sufficient, but not necessary, to consider just local deterministic models.

Whether "counterfactual definiteness" is necessary in any of this depends on what exactly you're calling counterfactual definiteness. For instance, suppose I make up a theory that fits the factorisation condition above but in which the probability distributions ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are not deterministic. Would you say that theory respects counterfactual definiteness? If not, then it's not an assumption needed to derive Bell inequalities.
 
  • #123
wle said:
Separable states don't lead to a Bell violation. ...

I agree. I am saying that a separable function on A & B doesn't lead to a Bell Inequality unless you ALSO consider the counterfactual case C. You must have A & B separable, plus B & C separable, and A & C separable. So then A & B & C are separable. It is only by combining the variations that you get Bell's Theorem.

Now you can ask whether A & B separable alone can be mimicked by a local theory (explicitly leaving out the realism assumption). I doubt one could reproduce the predictions of QM, but I don't really know.
 
  • #124
wle said:
You're conflating what are really two different questions here: 1) the general assumptions necessary to derive Bell-type inequalities, and 2) what resources, according to quantum mechanics, are needed to exhibit a Bell violation.

For 1), Bell inequalities can be derived from the factorisation assumption $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda)$$ for joint probability distributions. This is the criterion that the review article I linked to works with and is what Bell called "local causality" in a work called "The Theory of Local Beables" in 1975. In his original 1964 article, Bell in addition did the equivalent of assuming that the probabilities ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are deterministic, i.e., they should only have values 0 and 1. (This may be what you might want to call "counterfactual definiteness", i.e., the results for all possible measurements are predetermined given ##\lambda##.) It's now well known that this isn't necessary and, in fact, it's a fairly simple exercise to show that you can always turn a local stochastic model into a local deterministic one just by adding more hidden variables (the review article gives a short proof in section II.B.1, for instance), so the two are really equivalent.

For 2), in quantum mechanics, outcomes in a Bell-type experiment are a result of performing measurements on a shared quantum state. As I explained in my previous post, it's quite easy to show that if the state is not entangled, the quantum prediction just reduces to the definition of a local model, and you won't get a Bell violation. It's also possible to show that if either Alice's or Bob's measurements are compatible (i.e., they commute), then the quantum prediction likewise reduces to a local model. So in order to produce a Bell violation with a quantum system, you need both entanglement and incompatible (noncommuting) measurements. Neither one alone is sufficient.

Yes, I'm conflating. But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

wle said:
I'm not sure I agree with this. It's known that local stochastic and local deterministic models can account for exactly the same correlations. So for the purpose of deriving Bell inequalities, that means it's sufficient, but not necessary, to consider just local deterministic models.

Whether "counterfactual definiteness" is necessary in any of this depends on what exactly you're calling counterfactual definiteness. For instance, suppose I make up a theory that fits the factorisation condition above but in which the probability distributions ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are not deterministic. Would you say that theory respects counterfactual definiteness? If not, then it's not an assumption needed to derive Bell inequalities.

Yes. I'm not using "necessary" in a mathematical sense. I prefer not to use "counterfactual definiteness" since it's such a philosophy term. I would prefer to say: a violation of a Bell inequality is inconsistent with any theory that has a local deterministic explanation.

Actually, one reason I like the Goldstein et al Scholarpedia article http://www.scholarpedia.org/article/Bell's_theorem is that they really focus on factorization, and avoid calling it "local causality". Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

One thing that I don't understand is that you and many seem quite comfortable with the notion of a "local nondeterministic theory" without necessarily relying on it being undergirded by a "local deterministic theory". How do you find that natural? I prefer to start with local deterministic theories, and then use that as a basis to construct local nondeterministic theories as a larger class.

In part, this is related to how one thinks of directed graphical models. Do we need determinism when using a graphical model to justify how we factorize a joint probability? I think we do, because otherwise the graphical model is simply restating the factorization assumption, which is a purely mathematical condition and is not necessarily linked to any concept of causality. So, for example, Wood and Spekkens http://arxiv.org/abs/1208.4119 give the factorization condition used to prove the Bell inequality in Fig. 19, which I like because one immediately sees the loopholes, like no superdeterminism and no retrocausation, that must be closed in order to favour nonlocality, as can be seen in Figs. 25, 26 and 27. However, Wood and Spekkens start in Fig. 1 with a deterministic model and build up the graphical language from there to a larger class of nondeterministic models. In a local deterministic model, the concept of local causality is clear, and it seems easier to build up from there. It's clearly a matter of taste, since the two classes are equivalent - but do you really find "local nondeterministic models" a natural fundamental concept?

Edit: One more argument against "local nondeterministic models" as a fundamental concept is that "local" really means consistent with relativity and its concept of light cones etc. However, there is a bigger class of nondeterministic theories consistent with relativity than local nondeterministic theories - quantum theory. So if one is considering stochastic theories and relativity, it's not clear why one would define "local nondeterministic theories" unless one was considering "local deterministic theories".
 
  • #125
atyy said:
But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

Violating the inequality means that the QM statistics for entangled pairs are observed (the ##\cos^2 \theta## function, where ##\theta## is the angle between any pair of settings). Almost by definition, you wouldn't expect unentangled pairs to do that. :)
 
  • #126
DrChinese said:
Violating the inequality means that the QM statistics for entangled pairs are observed (the ##\cos^2 \theta## function, where ##\theta## is the angle between any pair of settings). Almost by definition, you wouldn't expect unentangled pairs to do that. :)

Yes! The question is: although we don't need Bell's theorem to tell us that, would it be OK if we used Bell's theorem to tell us that? If I understood you correctly, you would say no, whereas I would say yes. But I don't think we differ much? I think you would say Bell's theorem applies only to local deterministic theories, whereas I would say (following the same reasoning as stevendaryl in #121) that Bell's theorem also applies to any theory that can be expressed as a local deterministic theory.
 
  • #127
atyy said:
Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

So after all, what does factorization have to do with locality? I don't see where all these complications and undefined terms come from; to me it seems really simple:
- "In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed."
http://en.wikipedia.org/wiki/Bell's_theorem

Non-locality is about two entities interacting over a distance. It conflicts with SR, and not so much with classical physics, where interaction is instantaneous anyway - but no one interpreted that as non-locality of classical physics; it's just a very quick propagation of a change in the field. EPR non-locality is very specifically related to SR's speed-of-light barrier; it should be called "FTL interaction" rather than non-locality.

Non-reality is about a single entity and the uncertainty or non-existence of its properties. Non-reality does not explain EPR experiments. Just because properties are uncertain or undefined does not justify two entities interacting over a distance faster than light. It looks to me like non-locality is as alien to QM as it is to SR.
 
  • #128
atyy said:
Here is an explanation by Gill, but with a hint of why this may be a subtle issue: "Instead of assuming quantum mechanics and deriving counterfactual definiteness, Bell turned the EPR argument on its head. He assumes three principles which Einstein would have endorsed anyway, and uses them to get a contradiction with quantum mechanics; and the first is counterfactual definiteness.
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.
 
  • #129
Alien8 said:
So after all, what does factorization have to do with locality?

Try the argument here http://www.scholarpedia.org/article/Bell's_theorem

bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

I tend to agree (but not sure about the QM part).
 
  • #130
atyy said:
Yes, I'm conflating. But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

Well that's one conclusion you can draw, though it can be a bit misleading since an "entangled state" is really a concept specific to quantum mechanics which isn't necessarily the only alternative to the class of local models that are ruled out by Bell's theorem. For instance, quantum mechanics itself predicts an upper bound of ##2 \sqrt{2}## on the CHSH correlator, so if you observed, say, ##S_{\mathrm{CHSH}} = 3## in an experiment, that could be used as evidence against quantum mechanics.
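
As a quick numerical check of that ##2\sqrt{2}## figure (a sketch assuming numpy; the angles below are the standard optimal CHSH settings):

```python
import numpy as np

def obs(theta):
    # The +/-1-valued observable cos(theta)*sigma_z + sin(theta)*sigma_x.
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

def E(ta, tb):
    # Correlator <A(ta) x B(tb)> in the singlet state; equals -cos(ta - tb).
    return singlet @ np.kron(obs(ta), obs(tb)) @ singlet

a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # 2.8284... = 2*sqrt(2), the quantum (Tsirelson) maximum
```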

atyy said:
Actually, one reason I like the Goldstein et al Scholarpedia article http://www.scholarpedia.org/article/Bell's_theorem is that they really focus on factorization, and avoid calling it "local causality". Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

One thing that I don't understand is that you and many seem quite comfortable with the notion of a "local nondeterministic theory" without necessarily relying on it being undergirded by a "local deterministic theory". How do you find that natural? I prefer to start with local deterministic theories, and then use that as a basis to construct local nondeterministic theories as a larger class.

The idea behind the factorisation condition is that it is expressing that, for instance, Bob's choice of measurement ##y## and result ##b## should not exert a direct causal influence on Alice's result ##a##. This doesn't have anything a priori to do with determinism. Exactly why quantum mechanics fails this depends to some extent on how you interpret it. For instance, if you (naively) think of the quantum state as something "real", then Bob's measurement makes Alice's part of the state instantaneously collapse to something different than it was before, which then influences Alice's result. If you don't think of quantum states as something "real", then you've just got correlations spontaneously appearing with no real explanation for them (i.e., a violation of Reichenbach's principle).

atyy said:
Edit: One more argument against "local nondeterministic models" as a fundamental concept is that "local" really means consistent with relativity and its concept of light cones etc. However, there is a bigger class of nondeterministic theories consistent with relativity than local nondeterministic theories - quantum theory. So if one is considering stochastic theories and relativity, it's not clear why one would define "local nondeterministic theories" unless one was considering "local deterministic theories".

To some extent it's a matter of definition. The factorisation condition quoted above is called "locality" or "Bell locality" (if you want to remove any ambiguity) within the nonlocality research community. It's not the only meaning of the word "locality" that you'll see used in the physics research literature.

There's another, larger class of possible theory that gets studied in which the only constraints are that Alice's marginal probability distribution doesn't depend explicitly on Bob's measurement choice and vice versa: $$P_{\mathrm{A}}(a \mid xy) = P_{\mathrm{A}}(a \mid x) \text{ and } P_{\mathrm{B}}(b \mid xy) = P_{\mathrm{B}}(b \mid y) \,.$$ These are called "no-signalling" constraints in the review article I linked to earlier (because they imply that Alice's and Bob's choice of measurements can't be used for faster-than-light signalling), though you might see some authors call them "locality". Bell argued for the factorisation condition on the basis of relativistic causality in the "Theory of Local Beables" exposition I linked to earlier. There could be a fair bit of background reading you might need to do if you really want to understand why Bell settled on the factorisation condition rather than just accepting the no-signalling constraints. I haven't thought about this sort of thing in a while so I'm hazy on the details, but my recollection is that at least part of it is that the no-signalling constraints only really make sense if you're introducing a distinction between "controllable" variables like ##x## and ##y## and merely "outcome" variables like ##a## and ##b##, which I think Bell found suspect to make at the level of a fundamental theory. (This is related to an unresolved issue called the "measurement problem" in quantum physics.)
 
  • #131
wle said:
The idea behind the factorisation condition is that it is expressing that, for instance, Bob's choice of measurement ##y## and result ##b## should not exert a direct causal influence on Alice's result ##a##. This doesn't have anything a priori to do with determinism. Exactly why quantum mechanics fails this depends to some extent on how you interpret it. For instance, if you (naively) think of the quantum state as something "real", then Bob's measurement makes Alice's part of the state instantaneously collapse to something different than it was before, which then influences Alice's result. If you don't think of quantum states as something "real", then you've just got correlations spontaneously appearing with no real explanation for them (i.e., a violation of Reichenbach's principle).

I guess I don't understand what "direct causal influence" means without determinism. One can define it directly, but that would be equivalent to postulating the factorization condition. Is there really a notion of "direct causal influence" from which the factorization condition is derived?

wle said:
To some extent it's a matter of definition. The factorisation condition quoted above is called "locality" or "Bell locality" (if you want to remove any ambiguity) within the nonlocality research community. It's not the only meaning of the word "locality" that you'll see used in the physics research literature.

Yes, it's a matter of taste. I don't like calling the factorization condition "Bell locality", because to me the factorization is a just a mathematical definition with no physical meaning, and doing this just makes "Bell locality" another physically meaningless term.

wle said:
There's another, larger class of possible theory that gets studied in which the only constraints are that Alice's marginal probability distribution doesn't depend explicitly on Bob's measurement choice and vice versa: $$P_{\mathrm{A}}(a \mid xy) = P_{\mathrm{A}}(a \mid x) \text{ and } P_{\mathrm{B}}(b \mid xy) = P_{\mathrm{B}}(b \mid y) \,.$$ These are called "no-signalling" constraints in the review article I linked to earlier (because they imply that Alice's and Bob's choice of measurements can't be used for faster-than-light signalling), though you might see some authors call them "locality". Bell argued for the factorisation condition on the basis of relativistic causality in the "Theory of Local Beables" exposition I linked to earlier. There could be a fair bit of background reading you might need to do if you really want to understand why Bell settled on the factorisation condition rather than just accepting the no-signalling constraints. I haven't thought about this sort of thing in a while so I'm hazy on the details, but my recollection is that at least part of it is that the no-signalling constraints only really make sense if you're introducing a distinction between "controllable" variables like ##x## and ##y## and merely "outcome" variables like ##a## and ##b##, which I think Bell found suspect to make at the level of a fundamental theory. (This is related to an unresolved issue called the "measurement problem" in quantum physics.)

Interesting, I didn't know Bell considered "no signalling". If I recall correctly, no signalling is not very restrictive, and allows more correlations than even QM. I think someone proposed another principle to get the QM limit, something like "life should not be too easy".

BUT, surely the measurement problem is at least partially solved. :P If anything, we have too many solutions, even if we don't know all the solutions yet. :)
 
  • #132
Would it be fair to say there are two Bell theorems?

In the first, we simply postulate factorization directly and name it Bell locality. In other words, we start with a well defined mathematical operation, but no clear physical meaning. Since we got to factorization by direct postulation, we have bypassed counterfactual definiteness. So counterfactual definiteness is not necessary, but it is sufficient: proving the inequality for local deterministic theories (which one can take as synonymous with counterfactual definiteness) is enough to prove the inequality for all factorizable theories.

In the second, we consider local deterministic theories and the larger class of local nondeterministic theories that can be built from the local deterministic theories, and we argue by physical considerations that these must satisfy factorization, from which the inequality follows. In other words, we start with a clear physical meaning, but then we need physical, non-mathematical argumentation to get to factorization. Here counterfactual definiteness is necessary, by virtue of the starting point.
 
  • #133
atyy said:
I guess I don't understand what "direct causal influence" means without determinism. One can define it directly, but that would be equivalent to postulating the factorization condition. Is there really a notion of "direct causal influence" from which the factorization condition is derived?

You might want to read through one of Norsen's articles [arXiv:0707.0401 [quant-ph]] that works through this and see whether you agree with the reasoning. A rough sketch goes something like this: First, if you're trying to come up with a theory that's going to predict outcomes in a Bell-type experiment, the most general situation (barring the "superdeterminism" loophole) is that the predicted probabilities might be averaged over some additional initial conditions ##\lambda## provided by the theory: $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P(ab \mid xy; \lambda) \,.$$ According to Bayes' theorem, you can always factorise the probability distribution appearing under the integral according to $$P(ab \mid xy; \lambda) = P_{\mathrm{A} \mid \mathrm{B}}(a \mid bxy; \lambda) \, P_{\mathrm{B}}(b \mid xy; \lambda) \,.$$ Finally, the local causality criterion is that, given complete information about any initial conditions ##\lambda##, Bob's choice of measurement ##y## and result ##b## should be redundant for making a prediction about Alice's result ##a##, and Alice's choice of measurement ##x## should be redundant for making any prediction about Bob's result ##b##. Dropping these out of the probabilities appearing above, they just simplify to ##P_{\mathrm{A} \mid \mathrm{B}}(a \mid bxy; \lambda) = P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid xy; \lambda) = P_{\mathrm{B}}(b \mid y; \lambda)##.
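
Putting those two steps together (this is just the combination of the displayed formulas above) recovers $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \,,$$ which is exactly the factorisation condition from which the Bell inequalities are derived.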

atyy said:
Interesting, I didn't know Bell considered "no signalling".

I'm not at all certain that he did or to what extent he did. I'm hazily recalling things I gleaned from some of Bell's essays in Speakable and Unspeakable and one or two of Norsen's ArXiv articles four or five years ago. I'd have to go hunt through these again if I wanted to figure who said what and when. Don't quote me on anything. :)

atyy said:
If I recall correctly, no signalling is not very restrictive, and allows more correlations than even QM.

Yes. For instance, there's a set of hypothetical correlations called the Popescu-Rohrlich box defined by $$\begin{cases}
P(00 \mid xy) = P(11 \mid xy) = 1/2 &\text{if} \quad xy \in \{00, 01, 10\} \\
P(01 \mid xy) = P(10 \mid xy) = 1/2 &\text{if} \quad x = y = 1
\end{cases} \,.$$ These are no signalling (the marginals are just ##P_{\mathrm{A}}(a \mid x) = P_{\mathrm{B}}(b \mid y) = 1/2## for all inputs and outputs), but the expectation values are ##\langle A_{0} B_{0} \rangle = \langle A_{0} B_{1} \rangle = \langle A_{1} B_{0} \rangle = +1## and ##\langle A_{1} B_{1} \rangle = -1## so you get the maximal result ##S_{\mathrm{CHSH}} = 4## for the CHSH correlator.
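
A small sanity check of those correlations (a Python sketch; the function names are mine) confirming they satisfy the no-signalling constraints quoted earlier while hitting the algebraic maximum of CHSH:

```python
from itertools import product

def P(a, b, x, y):
    # PR-box joint distribution P(ab|xy): outputs satisfy a XOR b = x AND y.
    return 0.5 if (a ^ b) == (x & y) else 0.0

# No-signalling: Alice's marginal P(a|x) must not depend on Bob's setting y.
for a, x in product([0, 1], repeat=2):
    marginals = {sum(P(a, b, x, y) for b in [0, 1]) for y in [0, 1]}
    assert len(marginals) == 1  # the same marginal (1/2) for every y

def E(x, y):
    # Expectation value with outcomes mapped 0 -> +1, 1 -> -1.
    return sum((-1) ** (a + b) * P(a, b, x, y) for a in [0, 1] for b in [0, 1])

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # 4.0: the algebraic maximum of the CHSH correlator
```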

atyy said:
I think someone proposed another principle to get the QM limit, something like "life should not be too easy".

There was a host of articles proposing principles that might single out the set of quantum correlations a while back. One nice early one (and as far as I remember, the only one I've actually read) was an article by Wim van Dam [arXiv:quant-ph/0501159] showing that basically the entire field of communication complexity would become trivial if PR boxes existed as a resource in nature.

(Though a certain self-styled rat apparently wants to kill the field.)
 
  • #134
bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

Norsen follows some of Bell's later thoughts, including as you say above. I simply say that Bell's Theorem itself requires 2 distinct assumptions, as laid out in EPR. You can label it any way you like; to me, local causality is 2 distinct assumptions. I do not argue that both may be wrong, but they probably are in some respect.
 
  • #135
wle said:
For instance, quantum mechanics itself predicts an upper bound of ##2 \sqrt{2}## on the CHSH correlator

Can you name an example of non-locality before, or other than, Bell's inequalities? Based on what equation does QM predict the ##2 \sqrt{2}## bound on the CHSH correlator?
 
  • #136
bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

atyy said:
I tend to agree (but not sure about the QM part).

I think I understand the QM part of Norsen's argument now, and it is really about "Bell's theorem" rather than "Bell's inequality", which I have been using interchangeably. Norsen considers Bell's theorem as saying that QM is nonlocal, not only because it violates a Bell inequality, but also because of EPR. On the other hand, what most of us are talking about in this thread is Bell's inequality, which is supposed to provide a notion of locality that applies to all theories, not just QM. So no, I don't agree with Norsen (nor disagree), since I am not really interested in Bell's theorem; I am interested in Bell's inequality as something that is derived without considering QM at all.
 
  • #137
Alien8 said:
Can you name an example of non-locality before, or other than, Bell's inequalities? Based on what equation does QM predict the ##2 \sqrt{2}## bound on the CHSH correlator?

That number itself is an arbitrary one, nothing fundamental about it. Prior to Bell type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned a la EPR fit the bill. I can't think of any specific early points at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.
 
  • #138
atyy said:
Yes, it's a matter of taste. I don't like calling the factorization condition "Bell locality", because to me the factorization is just a mathematical definition with no physical meaning, and doing this just makes "Bell locality" another physically meaningless term.

I don't know why you would say it has no physical meaning. If it rules out some physical theories and can be disproved by experiment, then how could it not be physically meaningful? What does "physically meaningful" mean, if this condition isn't physically meaningful?
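
For concreteness, the factorization condition in question is, in the notation of Bell's later papers, ##P(A,B|a,b,\lambda) = P(A|a,\lambda) \, P(B|b,\lambda)##: conditioned on the shared past state ##\lambda##, Alice's outcome probabilities depend only on her setting ##a##, and Bob's only on his setting ##b##. Averaging over a setting-independent distribution ##\rho(\lambda)## is what yields the Bell inequalities, so the condition has experimentally testable consequences, which seems as "physically meaningful" as it gets.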
 
  • #139
atyy said:
I am interested in Bell's inequality as something that is derived without considering QM at all.

That's what I'm talking about. It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2}\cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems to have something to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.
 
  • #140
DrChinese said:
That number itself is specific to the CHSH inequality; there is nothing uniquely fundamental about it. Prior to Bell-type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned à la EPR fit the bill. I can't think of any specific early point at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.

Well, the informal "recipe" for using quantum mechanics is explicitly nonlocal and instantaneous:
  1. Describe the initial state by some wave function ##\Psi##.
  2. Later, perform a measurement corresponding to an operator ##O##.
  3. Get a value ##\lambda##.
  4. For future measurements, use ##P_{O,\lambda} \Psi##, where ##P_{O,\lambda}## is the projection operator onto the subspace of wave functions that are eigenstates of ##O## with eigenvalue ##\lambda##.
This recipe is explicitly instantaneous and nonlocal, since a measurement here causes the wave function describing distant phenomena to change instantly. Of course, many people didn't think of that as really nonlocal, because the wave function was regarded (at least by some) as reflecting our knowledge of the distant phenomena rather than anything objective about those phenomena.
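
As a toy illustration of step 4, here is a minimal numpy sketch (nobody's official code; just the simplest two-qubit example) of projecting a Bell state after a measurement on the first particle:

    import numpy as np

    # Bell (EPR) state of two qubits, (|01> - |10>)/sqrt(2),
    # written in the basis |00>, |01>, |10>, |11>
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

    # Suppose a spin measurement on qubit 1 returns "up": the
    # corresponding projector is P = |0><0| (tensor) identity
    P_up = np.kron(np.diag([1.0, 0.0]), np.eye(2))

    # Step 4 of the recipe: project and renormalize
    psi_after = P_up @ psi
    psi_after /= np.linalg.norm(psi_after)

    print(psi_after)  # [0, 1, 0, 0]: the distant qubit is now definitely "down"

The projector acts on the whole two-particle wave function at once, which is exactly the "instantaneous" feature described above.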
 
  • #141
DrChinese said:
That number itself is specific to the CHSH inequality; there is nothing uniquely fundamental about it. Prior to Bell-type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned à la EPR fit the bill. I can't think of any specific early point at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.

Yeah, it all started with uncertainty and non-reality, but somehow ended up with non-locality. What's the connection?
 
  • #142
stevendaryl said:
Well, the informal "recipe" for using quantum mechanics is explicitly nonlocal and instantaneous:
  1. Describe the initial state by some wave function ##\Psi##.
  2. Later, perform a measurement corresponding to an operator ##O##.
  3. Get a value ##\lambda##.
  4. For future measurements, use ##P_{O,\lambda} \Psi##, where ##P_{O,\lambda}## is the projection operator onto the subspace of wave functions that are eigenstates of ##O## with eigenvalue ##\lambda##.
This recipe is explicitly instantaneous and nonlocal, since a measurement here causes the wave function describing distant phenomena to change instantly. Of course, many people didn't think of that as really nonlocal, because the wave function was regarded (at least by some) as reflecting our knowledge of the distant phenomena rather than anything objective about those phenomena.

But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?
 
  • #143
Alien8 said:
That's what I'm talking about. It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2}\cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems to have something to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.

The interesting thing is that there are at least two notions of locality. The first, called "local causality", can be built up from local deterministic theories and is the notion addressed by Bell's inequality. A wider notion, called "relativistic causality", means that we cannot send messages faster than the speed of light. Although QM violates local causality, it is consistent with the wider notion of relativistic causality.
 
  • #144
Alien8 said:
It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2}\cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems to have something to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.

That is because no one knows anything deeper about quantum non-locality. It may not be a non-local force in the sense of "something" moving faster than c. Or maybe the Bohmians have it right. At this point, every local realistic candidate theory has been ruled out by experiment, and the interpretations of QM cannot currently be distinguished on the basis of experiment. So your choice among the available QM interpretations is as good as anyone's.
 
  • #145
Alien8 said:
But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?

Quantum mechanics describes any collection of particles by a single wave function (or, more generally, a density matrix). There is no way to describe the interaction of two particles, or two subsystems, without using a single, composite wave function (or density matrix).
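
To make that concrete: a two-particle state lives in the tensor product of the one-particle spaces, and an entangled state like the singlet cannot be factored into two one-particle wave functions. A minimal numpy sketch, using the Schmidt (SVD) rank as a factorizability test (the helper name schmidt_rank is just illustrative):

    import numpy as np

    # A product state: qubit 1 in |0>, qubit 2 in |+>
    product = np.kron([1.0, 0.0], np.array([1.0, 1.0]) / np.sqrt(2))

    # The singlet: (|01> - |10>)/sqrt(2)
    singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

    def schmidt_rank(state):
        # Reshape the 4-vector into a 2x2 matrix and count its nonzero
        # singular values; rank 1 means the state factorizes
        return np.linalg.matrix_rank(state.reshape(2, 2))

    print(schmidt_rank(product))  # 1 -> splits into two one-particle states
    print(schmidt_rank(singlet))  # 2 -> needs a single composite description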
 
  • #146
Alien8 said:
But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?

Ah, but it does just that! Check out (1) and (3) in the following excellent reference:

http://arxiv.org/abs/quant-ph/0205171

It is actually called the EPR state.
 
  • #147
atyy said:
Although QM violates local causality, it is consistent with the wider notion of relativistic causality.

Which is wider and which is narrower? :) I can't tell anymore!
 
  • #148
stevendaryl said:
Quantum mechanics describes any collection of particles by a single wave function (or, more generally, a density matrix). There is no way to describe the interaction of two particles, or two subsystems, without using a single, composite wave function (or density matrix).

As far as I know, a wave function is shared only between two interacting entities, like an electron and a proton in interaction; it doesn't say what other electrons might be doing with some other protons. A wave function can also be collective in an averaged sense, say a light beam interacting with a polarizer, but that again doesn't say what some other light beam is supposed to be doing with some other polarizer.

Is there any other example where a single wave function is applied to two separate systems, i.e., two pairs of interacting entities, rather than a single system of two interacting entities?
 
  • #149
Quantum mechanics always uses a single wave function to describe all the particles and subsystems of interest. If the subsystems don't interact very strongly, it is sometimes possible to get a good approximation by analyzing the subsystems separately, but that is only ever a matter of convenience, to simplify the analysis.
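
To put that in symbols: the best stand-alone description of one subsystem is its reduced density matrix, obtained by tracing out the other subsystem. A minimal sketch, reusing the singlet from the toy examples above:

    import numpy as np

    singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

    # Density matrix of the full two-qubit system
    rho = np.outer(singlet, singlet.conj())

    # Partial trace over qubit 2 gives qubit 1's reduced state
    rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(rho1)  # 0.5 * identity: maximally mixed, the correlations are gone

The reduced state reproduces every local statistic on qubit 1, but says nothing about the correlations, which is why the joint experiment has to be analyzed with the composite state.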
 
  • #150
DrChinese said:
Which is wider and which is narrower? :) I can't tell anymore!

The notion of relativistic causality (no signalling) is wider than local causality (local determinism, or Bell locality). The idea was that although quantum mechanics is nonlocal, it is still surprisingly consistent with special relativity. So people began to wonder whether QM contains the maximal amount of nonlocality that relativity permits. The surprising answer was that relativity is consistent with even more nonlocality than QM allows.

This isn't the peer-reviewed version of Popescu and Rohrlich's paper (which doesn't seem to be on the arXiv), but it sketches the idea: http://arxiv.org/abs/quant-ph/9508009.

There's also a schematic in Fig. 2 of the Brunner et al review: http://arxiv.org/abs/1303.2849.
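
For the curious, a PR box is simple to write down: on inputs ##x, y \in \{0, 1\}## it outputs bits with ##a \oplus b = xy## and uniformly random marginals. A minimal Python sketch (function names are just illustrative) checking that it reaches CHSH ##S = 4## while remaining non-signalling:

    def pr_box(a, b, x, y):
        # P(a,b|x,y): probability 1/2 whenever a XOR b equals x AND y
        return 0.5 if (a ^ b) == (x & y) else 0.0

    def E(x, y):
        # Correlator E(x,y) = sum over a,b of (-1)^(a+b) * P(a,b|x,y)
        return sum((-1) ** (a + b) * pr_box(a, b, x, y)
                   for a in (0, 1) for b in (0, 1))

    S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
    print(S)  # 4.0, beyond the quantum maximum of 2*sqrt(2)

    # No-signalling check: Alice's marginal P(a|x) is the same for y=0 and y=1
    for a in (0, 1):
        for x in (0, 1):
            print([sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)])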
 