Spin difference between entangled and non-entangled

Summary:
Entangled-spin pairs in the Stern–Gerlach experiment consistently yield opposite results when measured along the same axis, unlike non-entangled pairs, which need not show this correlation. The distinction is that an entangled pair carries a definite joint relationship before measurement even though neither particle has a definite spin of its own, whereas independently prepared particles carry no such joint constraint. Quantum mechanics predicts that measurement results are eigenvalues of the operator corresponding to the observable, emphasizing the probabilistic nature of quantum states. Historically, the concept of entanglement was articulated by Schrödinger in response to the EPR paradox, highlighting its fundamental role in quantum mechanics. Ultimately, entanglement illustrates a departure from classical interpretations, reinforcing the non-locality inherent in quantum systems.
  • #121
atyy said:
Yes, that was the point of my question. To me, a Bell violation excludes separable states. However, if I understand DrChinese correctly, although we know from QM that a Bell violation excludes separable states, we don't know from "Bell's theorem" that a Bell violation excludes separable states.

The issue is that if counterfactual definiteness "in reality" is an assumption of Bell's theorem, then it doesn't apply to separable states since a separable state like the 2 unentangled photons with the same vertical polarization won't give 100% certain results at more than 2 of the angles used in a Bell test.

On the other hand one cannot say that counterfactual definiteness is not used at all in Bell's theorem. This is because a local variable theory that is excluded by Bell's theorem can be rewritten as a local deterministic theory. So by excluding local deterministic theories, one also excludes local variable theories. So the counterfactual definiteness is there "in principle", although not necessarily "in reality".

That's something that is a little confusing about discussions of Bell's theorem. In most treatments, it is assumed that the local realistic theory is deterministic: that is, in an EPR-type experiment, Alice's result is a deterministic function of her detector settings and the hidden variable ##\lambda##. It's easy enough to allow classical nondeterminism, in the sense that Alice's measurement results could just be probabilistically related to her settings and the value of the hidden variable. But this extra generality doesn't actually buy anything; in any classical probabilistic theory, it's always possible to think of the nondeterminism as arising from ignorance about the details of the initial state. It's always consistent to assume that the underlying theory is deterministic. So if QM is inconsistent with a deterministic local theory, then it's also inconsistent with a nondeterministic local theory.
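A minimal sketch of that reduction (the local response rule ##p(+1 \mid x, \lambda) = \tfrac{1}{2}(1 + \lambda \cos x)## below is made up purely for illustration): the classical randomness is absorbed into an extra uniform hidden variable ##u##, after which the outcome is a fixed function of ##(x, \lambda, u)##.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_outcome(x, lam, rng):
    """Toy local stochastic model: Alice's +1 outcome probability depends
    only on her setting x and the shared hidden variable lam."""
    p_plus = 0.5 * (1 + lam * np.cos(x))
    return +1 if rng.random() < p_plus else -1

def deterministic_outcome(x, lam, u):
    """The same model rewritten deterministically: the randomness is absorbed
    into an extra hidden variable u ~ Uniform[0, 1], making the outcome a
    fixed function of (x, lam, u)."""
    p_plus = 0.5 * (1 + lam * np.cos(x))
    return +1 if u < p_plus else -1

# Averaged over u, the deterministic model reproduces the stochastic statistics.
x, lam, n = 0.7, 0.3, 200_000
stoch = np.mean([stochastic_outcome(x, lam, rng) for _ in range(n)])
det = np.mean([deterministic_outcome(x, lam, u) for u in rng.random(n)])
print(stoch, det)   # both approach <A> = lam * cos(x) ~ 0.229
```

Averaged over ##u##, nothing is lost, which is why assuming determinism at the hidden-variable level costs no generality.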
 
  • #122
atyy said:
Yes, that was the point of my question. To me, a Bell violation excludes separable states. However, if I understand DrChinese correctly, although we know from QM that a Bell violation excludes separable states, we don't know from "Bell's theorem" that a Bell violation excludes separable states.

You're conflating what are really two different questions here: 1) the general assumptions necessary to derive Bell-type inequalities, and 2) what resources, according to quantum mechanics, are needed to exhibit a Bell violation.

For 1), Bell inequalities can be derived from the factorisation assumption $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda)$$ for joint probability distributions. This is the criterion that the review article I linked to works with and is what Bell called "local causality" in a work called "The Theory of Local Beables" in 1975. In his original 1964 article, Bell in addition did the equivalent of assuming that the probabilities ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are deterministic, i.e., they should only have values 0 and 1. (This may be what you might want to call "counterfactual definiteness", i.e., the results for all possible measurements are predetermined given ##\lambda##.) It's now well known that this isn't necessary and, in fact, it's a fairly simple exercise to show that you can always turn a local stochastic model into a local deterministic one just by adding more hidden variables (the review article gives a short proof in section II.B.1, for instance), so the two are really equivalent.

For 2), in quantum mechanics, outcomes in a Bell-type experiment are a result of performing measurements on a shared quantum state. As I explained in my previous post, it's quite easy to show that if the state is not entangled, the quantum prediction just reduces to the definition of a local model, and you won't get a Bell violation. It's also possible to show that if either Alice's or Bob's measurements are compatible (i.e., they commute), then the quantum prediction likewise reduces to a local model. So in order to produce a Bell violation with a quantum system, you need both entanglement and incompatible (noncommuting) measurements. Neither one alone is sufficient.
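A small numerical sketch of point 2), using the singlet state and the textbook CHSH settings (the particular operators below are just the standard optimal choice, nothing unique): with entanglement and noncommuting measurements the CHSH value reaches ##2\sqrt{2}##; drop either ingredient and it stays within the local bound of 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def chsh_value(state, A0, A1, B0, B1):
    """<A0 B0> + <A0 B1> + <A1 B0> - <A1 B1> for a two-qubit pure state."""
    S = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)
    return float(state @ S @ state)

# Standard optimal settings: Alice measures Z and X, Bob measures rotated combinations.
A0, A1 = Z, X
B0, B1 = -(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)

# Singlet (entangled) state: violates the local bound 2, reaching 2*sqrt(2).
singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)
print(chsh_value(singlet, A0, A1, B0, B1))   # ~2.828

# Product (separable) state |0>|0>: no violation.
product = np.kron([1., 0.], [1., 0.])
print(chsh_value(product, A0, A1, B0, B1))   # magnitude <= 2

# Entangled state but with commuting (identical) measurements on Alice's side: no violation.
print(chsh_value(singlet, Z, Z, B0, B1))     # magnitude <= 2
```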
On the other hand one cannot say that counterfactual definiteness is not used at all in Bell's theorem. This is because a local variable theory that is excluded by Bell's theorem can be rewritten as a local deterministic theory. So by excluding local deterministic theories, one also excludes local variable theories. So the counterfactual definiteness is there "in principle", although not necessarily "in reality".

I'm not sure I agree with this. It's known that local stochastic and local deterministic models can account for exactly the same correlations. So for the purpose of deriving Bell inequalities, that means it's sufficient, but not necessary, to consider just local deterministic models.

Whether "counterfactual definiteness" is necessary in any of this depends on what exactly you're calling counterfactual definiteness. For instance, suppose I make up a theory that fits the factorisation condition above but in which the probability distributions ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are not deterministic. Would you say that theory respects counterfactual definiteness? If not, then it's not an assumption needed to derive Bell inequalities.
 
  • #123
wle said:
Separable states don't lead to a Bell violation. ...

I agree. I am saying that a separable function on A & B doesn't lead to a Bell Inequality unless you ALSO consider the counterfactual case C. You must have A & B separable, plus B & C separable, and A & C separable. So then A & B & C are separable. It is only by combining the variations that you get Bell's Theorem.

Now you can ask whether A & B separable alone can be mimicked by a local theory (explicitly leaving out the realism assumption). I doubt one could reproduce the predictions of QM, but I don't really know.
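The role of the counterfactual third setting C can be illustrated by a tiny enumeration (a sketch of the usual textbook argument, treating A, B and C as predetermined ##\pm 1## values): in every joint assignment at least one of the three pairs agrees, so any probability distribution over such assignments satisfies ##P(A{=}B) + P(A{=}C) + P(B{=}C) \ge 1##, a bound that the statistics of entangled pairs can violate at suitable angle choices.

```python
from itertools import product

# For predetermined +/-1 values of the three counterfactual settings A, B, C,
# at least one of the three pairs must agree in every case, so
# P(A=B) + P(A=C) + P(B=C) >= 1 for any distribution over these assignments.
for a, b, c in product([+1, -1], repeat=3):
    matches = int(a == b) + int(a == c) + int(b == c)
    print(f"A={a:+d} B={b:+d} C={c:+d}  matching pairs: {matches}")
    assert matches >= 1
```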
 
  • #124
wle said:
You're conflating what are really two different questions here: 1) the general assumptions necessary to derive Bell-type inequalities, and 2) what resources, according to quantum mechanics, are needed to exhibit a Bell violation.

For 1), Bell inequalities can be derived from the factorisation assumption $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda)$$ for joint probability distributions. This is the criterion that the review article I linked to works with and is what Bell called "local causality" in a work called "The Theory of Local Beables" in 1975. In his original 1964 article, Bell in addition did the equivalent of assuming that the probabilities ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are deterministic, i.e., they should only have values 0 and 1. (This may be what you might want to call "counterfactual definiteness", i.e., the results for all possible measurements are predetermined given ##\lambda##.) It's now well known that this isn't necessary and, in fact, it's a fairly simple exercise to show that you can always turn a local stochastic model into a local deterministic one just by adding more hidden variables (the review article gives a short proof in section II.B.1, for instance), so the two are really equivalent.

For 2), in quantum mechanics, outcomes in a Bell-type experiment are a result of performing measurements on a shared quantum state. As I explained in my previous post, it's quite easy to show that if the state is not entangled, the quantum prediction just reduces to the definition of a local model, and you won't get a Bell violation. It's also possible to show that if either Alice's or Bob's measurements are compatible (i.e., they commute), then the quantum prediction likewise reduces to a local model. So in order to produce a Bell violation with a quantum system, you need both entanglement and incompatible (noncommuting) measurements. Neither one alone is sufficient.

Yes, I'm conflating. But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

wle said:
I'm not sure I agree with this. It's known that local stochastic and local deterministic models can account for exactly the same correlations. So for the purpose of deriving Bell inequalities, that means it's sufficient, but not necessary, to consider just local deterministic models.

Whether "counterfactual definiteness" is necessary in any of this depends on what exactly you're calling counterfactual definiteness. For instance, suppose I make up a theory that fits the factorisation condition above but in which the probability distributions ##P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid y; \lambda)## are not deterministic. Would you say that theory respects counterfactual definiteness? If not, then it's not an assumption needed to derive Bell inequalities.

Yes. I'm not using "necessary" in a mathematical sense. I prefer not to use "counterfactual definiteness" since it's such a philosophical term. I would prefer to say: a violation of a Bell inequality is inconsistent with any theory that has a local deterministic explanation.

Actually, one reason I like the Goldstein et al Scholarpedia article http://www.scholarpedia.org/article/Bell's_theorem is that they really focus on factorization, and avoid calling it "local causality". Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

One thing that I don't understand is that you and many seem quite comfortable with the notion of a "local nondeterministic theory" without necessarily relying on it being undergirded by a "local deterministic theory". How do you find that natural? I prefer to start with local deterministic theories, and then use that as a basis to construct local nondeterministic theories as a larger class.

In part, this is related to how one thinks of directed graphical models. Do we need determinism in using a graphical model to justify how we factorize a joint probability? I think we do, because otherwise, the graphical model is simply restating the factorization assumption, which is a purely mathematical condition, and is not necessarily linked to any concept of causality. So for example, Wood and Spekkens http://arxiv.org/abs/1208.4119 give the factorization condition used to prove the Bell inequality in Fig. 19, which I like because one immediately sees the loopholes (no superdeterminism, no retrocausation, etc.) that must be closed in order to favour nonlocality, as can be seen in Figs. 25, 26 and 27. However, Wood and Spekkens start in Fig. 1 with a deterministic model and build up the graphical language from there to a larger class of nondeterministic models. In a local deterministic model, the concept of local causality is clear, and it seems easier to build up. It's clearly a matter of taste, since the two classes are equivalent - but do you really find "local nondeterministic models" a natural fundamental concept?

Edit: One more argument against "local nondeterministic models" as a fundamental concept is that "local" really means consistent with relativity and its concept of light cones etc. However, there is a bigger class of nondeterministic theories consistent with relativity than the local nondeterministic ones: quantum theory is an example. So if one is considering stochastic theories and relativity, it's not clear why one would define "local nondeterministic theories" unless one was considering "local deterministic theories".
 
  • #125
atyy said:
But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

Violating the inequality means that the QM statistics for entangled pairs are observed (the ##\cos^2\theta## function, where ##\theta## is the angle between any pair of settings). Almost by definition, you wouldn't expect unentangled pairs to do that. :)
 
  • #126
DrChinese said:
Violating the inequality means that the QM statistics for entangled pairs are observed (the ##\cos^2\theta## function, where ##\theta## is the angle between any pair of settings). Almost by definition, you wouldn't expect unentangled pairs to do that. :)

Yes! The question is although we don't need Bell's theorem to tell us that, would it be ok if we used Bell's theorem to tell us that? If I understood you correctly, you would say no, whereas I would say yes. But I don't think we differ much? I think you would say Bell's theorem applies only to local deterministic theories, whereas I would say (following the same reasoning as stevendaryl in #121) that Bell's theorem also applies to any theory that can be expressed as a local deterministic theory.
 
  • #127
atyy said:
Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

So after all, what does factorization have to do with locality? I don't see where all these complications and undefined terms come from; to me it seems really simple:
- "In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed."
http://en.wikipedia.org/wiki/Bell's_theorem

Non-locality is about two entities interacting over a distance. It conflicts with SR, not so much with classical physics, where interaction is instantaneous anyway; but no one interpreted that as non-locality in classical physics, it's just very quick propagation of a change in the field. EPR non-locality is very specifically related to SR's speed-of-light barrier; it should be called "FTL interaction" rather than non-locality.

Non-reality is about a single entity and the uncertainty or non-existence of its properties. Non-reality does not explain EPR experiments. Just because properties are uncertain or undefined does not justify two entities interacting over a distance faster than light. It looks to me like non-locality is as alien to QM as it is to SR.
 
  • #128
atyy said:
Here is an explanation by Gill, but with a hint of why this may be a subtle issue: "Instead of assuming quantum mechanics and deriving counterfactual definiteness, Bell turned the EPR argument on its head. He assumes three principles which Einstein would have endorsed anyway, and uses them to get a contradiction with quantum mechanics; and the first is counterfactual definiteness."
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.
 
  • #129
Alien8 said:
So after all, what does factorization have to do with locality?

Try the argument here http://www.scholarpedia.org/article/Bell's_theorem

bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

I tend to agree (but not sure about the QM part).
 
  • #130
atyy said:
Yes, I'm conflating. But at least in the case of the two unentangled photons, the quantum probabilities do obey the factorization condition. So I would say that Bell's theorem shows that photon pairs that violate the inequality cannot be explained by the unentangled state.

Well that's one conclusion you can draw, though it can be a bit misleading since an "entangled state" is really a concept specific to quantum mechanics which isn't necessarily the only alternative to the class of local models that are ruled out by Bell's theorem. For instance, quantum mechanics itself predicts an upper bound of ##2 \sqrt{2}## on the CHSH correlator, so if you observed, say, ##S_{\mathrm{CHSH}} = 3## in an experiment, that could be used as evidence against quantum mechanics.
Actually, one reason I like the Goldstein et al Scholarpedia article http://www.scholarpedia.org/article/Bell's_theorem is that they really focus on factorization, and avoid calling it "local causality". Factorization is an unambiguous mathematical condition needed for a Bell inequality. Locality is something else, and we need additional assumptions to justify why "factorization" has anything to do with "locality".

One thing that I don't understand is that you and many seem quite comfortable with the notion of a "local nondeterministic theory" without necessarily relying on it being undergirded by a "local deterministic theory". How do you find that natural? I prefer to start with local deterministic theories, and then use that as a basis to construct local nondeterministic theories as a larger class.

The idea behind the factorisation condition is that it is expressing that, for instance, Bob's choice of measurement ##y## and result ##b## should not exert a direct causal influence on Alice's result ##a##. This doesn't have anything a priori to do with determinism. Exactly why quantum mechanics fails this depends to some extent on how you interpret it. For instance, if you (naively) think of the quantum state as something "real", then Bob's measurement makes Alice's part of the state instantaneously collapse to something different than it was before, which then influences Alice's result. If you don't think of quantum states as something "real", then you've just got correlations spontaneously appearing with no real explanation for them (i.e., a violation of Reichenbach's principle).
Edit: One more argument against "local nondeterministic models" as a fundamental concept is that "local" really means consistent with relativity and its concept of light cones etc. However, there is a bigger class of nondeterministic theories consistent with relativity than the local nondeterministic ones: quantum theory is an example. So if one is considering stochastic theories and relativity, it's not clear why one would define "local nondeterministic theories" unless one was considering "local deterministic theories".

To some extent it's a matter of definition. The factorisation condition quoted above is called "locality" or "Bell locality" (if you want to remove any ambiguity) within the nonlocality research community. It's not the only meaning of the word "locality" that you'll see used in the physics research literature.

There's another, larger class of possible theory that gets studied in which the only constraints are that Alice's marginal probability distribution doesn't depend explicitly on Bob's measurement choice and vice versa: $$P_{\mathrm{A}}(a \mid xy) = P_{\mathrm{A}}(a \mid x) \text{ and } P_{\mathrm{B}}(b \mid xy) = P_{\mathrm{B}}(b \mid y) \,.$$ These are called "no-signalling" constraints in the review article I linked to earlier (because they imply that Alice's and Bob's choice of measurements can't be used for faster-than-light signalling), though you might see some authors call them "locality". Bell argued for the factorisation condition on the basis of relativistic causality in the "Theory of Local Beables" exposition I linked to earlier. There could be a fair bit of background reading you might need to do if you really want to understand why Bell settled on the factorisation condition rather than just accepting the no-signalling constraints. I haven't thought about this sort of thing in a while so I'm hazy on the details, but my recollection is that at least part of it is that the no-signalling constraints only really make sense if you're introducing a distinction between "controllable" variables like ##x## and ##y## and merely "outcome" variables like ##a## and ##b##, which I think Bell found suspect to make at the level of a fundamental theory. (This is related to an unresolved issue called the "measurement problem" in quantum physics.)
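For concreteness, here is a quick check (a sketch, reusing the singlet state and the CHSH settings from the earlier sketch) that quantum correlations respect these no-signalling constraints even while violating the factorisation condition: Alice's marginal distribution comes out the same whichever setting Bob chooses.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def projectors(obs):
    """Eigenprojectors onto the +1 and -1 outcomes of a +/-1-valued observable."""
    return {+1: (I2 + obs) / 2, -1: (I2 - obs) / 2}

A = [projectors(Z), projectors(X)]                     # Alice's settings x = 0, 1
B = [projectors(-(Z + X) / np.sqrt(2)),                # Bob's settings y = 0, 1
     projectors((X - Z) / np.sqrt(2))]

singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)

def P(a, b, x, y):
    """Joint outcome probability P(ab|xy) for the singlet state."""
    return float(singlet @ np.kron(A[x][a], B[y][b]) @ singlet)

# No-signalling: Alice's marginal must not depend on Bob's setting y (and vice versa).
for x in (0, 1):
    for a in (+1, -1):
        marginals = [P(a, +1, x, y) + P(a, -1, x, y) for y in (0, 1)]
        print(f"P_A(a={a:+d} | x={x}) for y = 0, 1:", np.round(marginals, 6))
```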
 
  • #131
wle said:
The idea behind the factorisation condition is that it is expressing that, for instance, Bob's choice of measurement ##y## and result ##b## should not exert a direct causal influence on Alice's result ##a##. This doesn't have anything a priori to do with determinism. Exactly why quantum mechanics fails this depends to some extent on how you interpret it. For instance, if you (naively) think of the quantum state as something "real", then Bob's measurement makes Alice's part of the state instantaneously collapse to something different than it was before, which then influences Alice's result. If you don't think of quantum states as something "real", then you've just got correlations spontaneously appearing with no real explanation for them (i.e., a violation of Reichenbach's principle).

I guess I don't understand what "direct causal influence" means without determinism. One can define it directly, but that would be equivalent to postulating the factorization condition. Is there really a notion of "direct causal influence" from which the factorization condition is derived?

wle said:
To some extent it's a matter of definition. The factorisation condition quoted above is called "locality" or "Bell locality" (if you want to remove any ambiguity) within the nonlocality research community. It's not the only meaning of the word "locality" that you'll see used in the physics research literature.

Yes, it's a matter of taste. I don't like calling the factorization condition "Bell locality", because to me the factorization is just a mathematical definition with no physical meaning, and doing this just makes "Bell locality" another physically meaningless term.

wle said:
There's another, larger class of possible theory that gets studied in which the only constraints are that Alice's marginal probability distribution doesn't depend explicitly on Bob's measurement choice and vice versa: $$P_{\mathrm{A}}(a \mid xy) = P_{\mathrm{A}}(a \mid x) \text{ and } P_{\mathrm{B}}(b \mid xy) = P_{\mathrm{B}}(b \mid y) \,.$$ These are called "no-signalling" constraints in the review article I linked to earlier (because they imply that Alice's and Bob's choice of measurements can't be used for faster-than-light signalling), though you might see some authors call them "locality". Bell argued for the factorisation condition on the basis of relativistic causality in the "Theory of Local Beables" exposition I linked to earlier. There could be a fair bit of background reading you might need to do if you really want to understand why Bell settled on the factorisation condition rather than just accepting the no-signalling constraints. I haven't thought about this sort of thing in a while so I'm hazy on the details, but my recollection is that at least part of it is that the no-signalling constraints only really make sense if you're introducing a distinction between "controllable" variables like ##x## and ##y## and merely "outcome" variables like ##a## and ##b##, which I think Bell found suspect to make at the level of a fundamental theory. (This is related to an unresolved issue called the "measurement problem" in quantum physics.)

Interesting, I didn't know Bell considered "no signalling". If I recall correctly, no signalling is not very restrictive, and allows more correlations than even QM. I think someone proposed another principle to get the QM limit, something like "life should not be too easy".

BUT, surely the measurement problem is at least partially solved :P If anything, we have too many solutions, even if we don't know all solutions yet:)
 
  • #132
Would it be fair to say there are two Bell theorems?

In the first, we simply postulate factorization directly and name that Bell locality. In other words, we start with a well defined mathematical operation, but no clear physical meaning. Here, since we got to factorization by direct postulation, we have bypassed counterfactual definiteness. So counterfactual definiteness is not necessary; but proving the inequality for local deterministic theories (which one can take as synonymous with counterfactual definiteness) is sufficient to prove the inequality for factorizable theories.

In the second, we consider local deterministic theories and the larger class of local nondeterministic theories that can be built from the local deterministic theories, and we argue by physical considerations that these must satisfy factorization, from which the inequality follows. In other words, we start with clear physical meaning, but then we need physical, non-mathematical argumentation to get to factorization. Here counterfactual definiteness is necessary, by virtue of the starting point.
 
  • #133
atyy said:
I guess I don't understand what "direct causal influence" means without determinism. One can define it directly, but that would be equivalent to postulating the factorization condition. Is there really a notion of "direct causal influence" from which the factorization condition is derived?

You might want to read through one of Norsen's articles [arXiv:0707.0401 [quant-ph]] that works through this and see whether you agree with the reasoning. A rough sketch goes something like this: First, if you're trying to come up with a theory that's going to predict outcomes in a Bell-type experiment, the most general situation (barring the "superdeterminism" loophole) is that the predicted probabilities might be averaged over some additional initial conditions ##\lambda## provided by the theory: $$P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P(ab \mid xy; \lambda) \,.$$ According to Bayes' theorem, you can always factorise the probability distribution appearing under the integral according to $$P(ab \mid xy; \lambda) = P_{\mathrm{A} \mid \mathrm{B}}(a \mid bxy; \lambda) \, P_{\mathrm{B}}(b \mid xy; \lambda) \,.$$ Finally, the local causality criterion is that, given complete information about any initial conditions ##\lambda##, Bob's choice of measurement ##y## and result ##b## should be redundant for making a prediction about Alice's result ##a##, and Alice's choice of measurement ##x## should be redundant for making any prediction about Bob's result ##b##. Dropping these out of the probabilities appearing above, they just simplify to ##P_{\mathrm{A} \mid \mathrm{B}}(a \mid bxy; \lambda) = P_{\mathrm{A}}(a \mid x; \lambda)## and ##P_{\mathrm{B}}(b \mid xy; \lambda) = P_{\mathrm{B}}(b \mid y; \lambda)##.
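Substituting the two redundancy conditions back into the chain-rule decomposition and averaging over ##\lambda## recovers exactly the factorisation condition quoted at the start of the argument: $$P(ab \mid xy; \lambda) = P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \quad \Longrightarrow \quad P(ab \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \,.$$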
Interesting, I didn't know Bell considered "no signalling".

I'm not at all certain that he did or to what extent he did. I'm hazily recalling things I gleaned from some of Bell's essays in Speakable and Unspeakable and one or two of Norsen's ArXiv articles four or five years ago. I'd have to go hunt through these again if I wanted to figure out who said what and when. Don't quote me on anything. :)
If I recall correctly, no signalling is not very restrictive, and allows more correlations than even QM.

Yes. For instance, there's a set of hypothetical correlations called the Popescu-Rohrlich box defined by $$\begin{cases}
P(00 \mid xy) = P(11 \mid xy) = 1/2 &\text{if} \quad xy \in \{00, 01, 10\} \\
P(01 \mid xy) = P(10 \mid xy) = 1/2 &\text{if} \quad x = y = 1
\end{cases} \,.$$ These are no signalling (the marginals are just ##P_{\mathrm{A}}(a \mid x) = P_{\mathrm{B}}(b \mid y) = 1/2## for all inputs and outputs), but the expectation values are ##\langle A_{0} B_{0} \rangle = \langle A_{0} B_{1} \rangle = \langle A_{1} B_{0} \rangle = +1## and ##\langle A_{1} B_{1} \rangle = -1## so you get the maximal result ##S_{\mathrm{CHSH}} = 4## for the CHSH correlator.
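A quick numerical confirmation of these numbers (a sketch; outcomes 0/1 are mapped to ##\pm 1## for the correlators): the PR box satisfies the no-signalling marginals yet reaches the algebraic maximum ##S_{\mathrm{CHSH}} = 4##.

```python
def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: outcomes satisfy a XOR b = x AND y, with the two
    consistent outcome pairs each occurring with probability 1/2."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(x, y):
    """<A_x B_y> with outcomes mapped 0 -> +1 and 1 -> -1."""
    return sum((-1) ** (a + b) * pr_box(a, b, x, y) for a in (0, 1) for b in (0, 1))

S = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print("CHSH value:", S)   # 4.0, beyond the quantum bound 2*sqrt(2) ~ 2.83

# No-signalling: Alice's marginal is independent of Bob's input y (and vice versa).
for x in (0, 1):
    for a in (0, 1):
        print([sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)])   # always [0.5, 0.5]
```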
I think someone proposed another principle to get the QM limit, something like "life should not be too easy".

There was a host of articles proposing principles that might single out the set of quantum correlations a while back. One nice early one (and as far as I remember, the only one I've actually read) was an article by Wim van Dam [arXiv:quant-ph/0501159] showing that basically the entire field of communication complexity would become trivial if PR boxes existed as a resource in nature.

(Though a certain self-styled rat apparently wants to kill the field.)
 
  • #134
bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

Norsen follows some of Bell's later thoughts, including as you say above. I simply say that Bell's Theorem itself requires 2 distinct assumptions, as laid out in EPR. You can label it any way you like; to me local causality is 2 distinct assumptions. I do not argue that both may be wrong, but they probably are in some respect.
 
  • #135
wle said:
For instance, quantum mechanics itself predicts an upper bound of ##2 \sqrt{2}## on the CHSH correlator

Can you name an example of non-locality before, or other than, Bell's inequalities? Based on what equation does QM predict the ##2 \sqrt{2}## bound on the CHSH correlator?
 
  • #136
bohm2 said:
Norsen argues that counterfactual definiteness is not a separate assumption in Bell's, but follows from local causality (and results of QM which specify that perfect correlations between some outcome events can be achieved in the EPRB set-up). Bell, himself, in his most recently published account of his theorem ('La nouvelle cuisine') also suggested that his argument begins with local causality and leads to counterfactual definiteness. I believe Norsen brought this up in another thread.

atyy said:
I tend to agree (but not sure about the QM part).

I think I now understand the QM part of Norsen's argument, and it is really about "Bell's theorem" rather than "Bell's inequality", which I have been using interchangeably. Norsen is considering Bell's theorem as saying that QM is nonlocal, but not necessarily only because it violates a Bell inequality, but also because of EPR. On the other hand, what most of us are talking about in this thread is Bell's inequality, which is supposed to provide a notion of locality that applies to all theories, not just QM. So no, I don't agree with Norsen (nor disagree), since I am not really interested in Bell's theorem; I am interested in Bell's inequality as something that is derived without considering QM at all.
 
  • #137
Alien8 said:
Can you name an example of non-locality before, or other than, Bell's inequalities? Based on what equation does QM predict the ##2 \sqrt{2}## bound on the CHSH correlator?

That number itself is an arbitrary one, nothing fundamental about it. Prior to Bell type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned a la EPR fit the bill. I can't think of any specific early points at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.
 
  • #138
atyy said:
Yes, it's a matter of taste. I don't like calling the factorization condition "Bell locality", because to me the factorization is a just a mathematical definition with no physical meaning, and doing this just makes "Bell locality" another physically meaningless term.

I don't know why you would say it has no physical meaning. If it rules out some physical theories and can be disproved by experiment, then how could it not be physically meaningful? What does "physically meaningful" mean, if this condition isn't physically meaningful?
 
  • #139
atyy said:
I am interested in Bell's inequality as something that is derived without considering QM at all.

That's what I'm talking about. It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I've learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2} \cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems it has to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.
 
  • #140
DrChinese said:
That number itself is an arbitrary one, nothing fundamental about it. Prior to Bell type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned a la EPR fit the bill. I can't think of any specific early points at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.

Well, the informal "recipe" for using quantum mechanics is explicitly nonlocal and instantaneous:
  1. Describe the initial state by some wave function ##\Psi##
  2. Later perform a measurement corresponding to operator ##O##.
  3. Get a value ##\lambda##
  4. For future measurements, use ##P_{O,\lambda} \Psi##, where ##P_{O,\lambda}## is the projection operator that projects onto the subspace of wave functions that are eigenstates of ##O## with eigenvalue ##\lambda##
This recipe is explicitly instantaneous and nonlocal, since a measurement here causes the wave function describing distant phenomena to change instantly. Of course, many people didn't think of that as really nonlocal, because the wave function was regarded (at least by some) as reflecting our knowledge of the distant phenomena, rather than anything objective about those phenomena.
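Here is what that recipe looks like concretely for a two-qubit singlet-type state (a rough sketch; the operator choices are illustrative): Alice measures her qubit, the projection is applied to the joint wave function, and the reduced description of Bob's distant qubit jumps from maximally mixed to a definite pure state.

```python
import numpy as np

Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

# Step 1: initial entangled state Psi = (|01> - |10>)/sqrt(2).
psi = np.array([0., 1., -1., 0.]) / np.sqrt(2)

# Step 2: Alice measures O = Z on her (first) qubit.
# The projector for eigenvalue lambda = +1 is P = |0><0| (x) I.
P_plus = np.kron((I2 + Z) / 2, I2)

# Steps 3 and 4: suppose the result is +1; project and renormalize.
post = P_plus @ psi
post = post / np.linalg.norm(post)
print(post)   # (0, 1, 0, 0): Bob's distant qubit is now definitely |1>

def bob_state(state):
    """Bob's reduced density matrix (partial trace over Alice's qubit)."""
    rho = np.outer(state, state).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=0, axis2=2)

print(bob_state(psi))    # before: maximally mixed, diag(1/2, 1/2)
print(bob_state(post))   # after: pure |1><1|, i.e. [[0, 0], [0, 1]]
```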
 
  • #141
DrChinese said:
That number itself is an arbitrary one, nothing fundamental about it. Prior to Bell type inequalities, I am not aware of any specific measures of quantum non-locality. I guess you could say the perfect correlations mentioned a la EPR fit the bill. I can't think of any specific early points at which someone was saying "aha, look how non-local QM is." They were, however, saying that it was non-realistic (observer dependent). This was EPR's chief objection to QM.

Yeah, it all started with uncertainty and non-reality, but somehow ended up with non-locality. What's the connection?
 
  • #142
stevendaryl said:
Well, the informal "recipe" for using quantum mechanics is explicitly nonlocal and instantaneous:
  1. Describe the initial state by some wave function ##\Psi##
  2. Later perform a measurement corresponding to operator ##O##.
  3. Get a value ##\lambda##
  4. For future measurements, use ##P_{O,\lambda} \Psi##, where ##P_{O,\lambda}## is the projection operator that projects onto the subspace of wave functions that are eigenstates of ##O## with eigenvalue ##\lambda##
This recipe is explicitly instantaneous and nonlocal, since a measurement here causes the wave function describing distant phenomena to change instantly. Of course, many people didn't think of that as really nonlocal, because the wave function was regarded (at least by some) as reflecting our knowledge of the distant phenomena, rather than anything objective about those phenomena.

But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?
 
  • #143
Alien8 said:
That's what I'm talking about. It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I've learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2} \cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems it has to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.

The interesting thing is that there are at least two notions of locality. The first notion is called "local causality"; it can be built up from local deterministic theories and is the notion addressed by Bell's inequality. A wider notion of locality is called "relativistic causality" and means that we cannot send messages faster than the speed of light. Although QM violates local causality, it is consistent with the wider notion of relativistic causality.
 
  • #144
Alien8 said:
It says a lot about how locality is supposed to fail, but little about how non-locality is supposed to work. I've learned in the other thread how to derive the CHSH local prediction ##\frac{1}{2} \cos^2(a-b)## from Malus' law, but I have yet to hear what law the QM prediction ##\cos^2(a-b)## is based on. It seems it has to do with the uncertainty principle, but I don't see how uncertainty can explain or justify non-locality at all.

That is because no one knows anything deeper about quantum non-locality. It may not be a non-local force in the sense of "something" moving faster than c. Or maybe the Bohmians have it right. At this point, there is no local realistic candidate theory left standing, due to experimental failure, and the interpretations of QM cannot currently be distinguished on the basis of experiment. So your choice of the available QM interpretations is as good as anyone's.
 
  • #145
Alien8 said:
But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?

Quantum mechanics describes any collection of particles by a single wave function (or, more generally, a density matrix). There is no way to describe the interaction of two particles, or two subsystems without using a single, composite wave-function (or density matrix).
 
  • #146
Alien8 said:
But where does it say a single wave function can be applied to two separate photons interacting with two separate polarizers?

Ah, but it does just that! Check out (1) and (3) at the following excellent reference:

http://arxiv.org/abs/quant-ph/0205171

It is actually called the EPR state.
 
  • #147
atyy said:
Although QM violates local causality, it is consistent with the wider notion of relativistic causality.

Which is wider and which is narrower? :) I can't tell anymore!
 
  • #148
stevendaryl said:
Quantum mechanics describes any collection of particles by a single wave function (or, more generally, a density matrix). There is no way to describe the interaction of two particles, or two subsystems without using a single, composite wave-function (or density matrix).

As far as I know, a wave function is shared only between two interacting entities, like an electron-proton interaction; it doesn't say what other electrons might be doing with some other protons. A wave function can be collective as an average, say a light beam interacting with a polarizer, but that again doesn't say what some other light beam is supposed to be doing with some other polarizer.

Is there any other example where a single wave function is applied to two separate systems and two pairs of interacting entities instead of a single system and two interacting entities?
 
  • #149
Quantum mechanics always uses a single wave function to describe all particles and subsystems of interest. If the subsystems don't interact very strongly, it is possible to get a good approximation in some circumstances by only analyzing the subsystems separately, but that's always only a matter of convenience and making the analysis simpler.
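A small illustration of this point (a sketch using two polarization qubits): the Schmidt rank of the composite state tells you whether the single joint wave function factorises into separate wave functions for the subsystems; for an entangled state it does not, so only the composite description is exact.

```python
import numpy as np

H = np.array([1., 0.])   # horizontal polarization
V = np.array([0., 1.])   # vertical polarization

product = np.kron(H, V)                                  # separable: |H>|V>
entangled = (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2)

def schmidt_rank(state):
    """Number of nonzero Schmidt coefficients of a two-qubit pure state.
    Rank 1 means the composite state factorises into separate wave functions."""
    coeffs = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(coeffs > 1e-12))

print(schmidt_rank(product))    # 1: each photon has its own wave function
print(schmidt_rank(entangled))  # 2: only the composite wave function describes the pair
```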
 
  • #150
DrChinese said:
Which is wider and which is narrower? :) I can't tell anymore!

The notion of relativistic causality (no signalling) is wider than local causality (local determinism or Bell locality). The idea was that although quantum mechanics is nonlocal, it is still surprisingly consistent with special relativity. So people began to wonder whether QM has the maximum amount of nonlocality permitted by relativity. The surprising answer was that relativity is consistent with even more nonlocality than QM.

This isn't the peer-reviewed version of Popescu and Rohrlich's paper, which doesn't seem to be on the arXiv, but it sketches the idea: http://arxiv.org/abs/quant-ph/9508009.

There's also a schematic in Fig. 2 of the Brunner et al review: http://arxiv.org/abs/1303.2849.
 
