Scholarpedia article on Bell's Theorem

  • #51
ThomasT said:
I still think your inference of nonlocality might be overlooking or mistreating something important in the relationship between LR formulation and experimental design and execution.

I will be anxious to hear your diagnosis of what, exactly, was overlooked or mistreated.
 
  • #52
harrylin said:
when checking the little stuff that I know rather well (SR, not QM) by way of test, I find it nicely informative but a bit inaccurate.

Could you say exactly what you thought was inaccurate? I couldn't understand, from what you wrote, what you had in mind exactly.
 
  • #53
DrChinese said:
Considering that the vast majority of the scientific community, including Einstein, believed that realism IS quite relevant to the EPR Paradox (completeness of QM), and therefore to Bell, you shouldn't be surprised ...

OK, OK, let's go through this again. It's not that complicated. There's no reason we can't all get onto the same page here.

1. Bohr asserts that "QM is complete". It's not entirely clear exactly what this is supposed to mean, but everybody agrees it at least means that particles can never possess "simultaneous definite values" for non-commuting observables. For example, no spin 1/2 particle can ever possess, at the same time, a definite value for s_x and s_y.

2. EPR (really this is Bohm's 1951 version, but who cares) argue as follows: you can create a pair of spin 1/2 particles such that measuring s_x of particle 1 allows you to know, with certainty, what a subsequent measurement of s_x of particle 2 will yield. And similarly for s_y. So imagine the following experiment: such a pair is created, with one particle going toward Bob and one toward Alice. Now Alice is going to flip a coin (or in some other "random" way, i.e., a way that in no way relates to the physical state of the two particles here under discussion) and measure s_x or s_y on her particle depending on the outcome of the coin flip. She will thus come to know, with certainty, the value of one of these two properties of Bob's particle. So far there is nothing controversial here; it is just a summary of certain of QM's predictions. But now let us *assume locality*. This has several implications here. First, the outcome of Alice's coin flip cannot influence the state of Bob's particle. Second, Alice's subsequent measurement of either s_x or s_y on her particle cannot influence the state of Bob's particle. Now think about what all this implies. Suppose Alice got heads and so measured s_x. Now it is uncontroversial that Bob's particle now possesses a definite s_x value. But it couldn't have acquired this value as a result of anything Alice did; so it must have had it all along. And since Alice could (for all Bob's particle knows) have flipped tails instead, Bob's particle must also have possessed an s_y value all along. (Suppose it didn't. But then, if Alice had got tails, which she might have, Bob's particle wouldn't know how to "answer" if its s_y was subsequently measured... so it might sometimes answer "wrong", i.e., contrary to the perfect correlations predicted by QM.) Conclusion: locality requires Bob's particle to possess simultaneous definite values for s_x and s_y. (A slightly more precise way to put this would be: simultaneous definite values which then simply get revealed by measurements, i.e., what are usually called "hidden variables", are the *only local way* to account for the perfect correlations that are observed when Alice and Bob measure along the same axis.) This conclusion of course contradicts Bohr's completeness doctrine, so for EPR (who took locality for granted, as an unquestioned premise) this showed that, contra Bohr, QM was actually *incomplete*.

3. Bell shows that these "simultaneously definite values that simply get revealed by measurements" (i.e., hidden variables) imply conflicts with other predictions of QM -- predictions we now know to be empirically correct. Bell concludes that these hidden variables are not the correct explanation of QM statistics, which in turn means that locality is false (since these hidden variables were the only way to locally explain *some* of the QM statistics).

Now the reason I wanted to lay this out is that you insist on grouping 1,2, and 3 together as if they were all some inseparable whole. But they're not. There are two different things going on here. The first one is: the EPR argument, which is a response to Bohr's completeness claim. The logic is simple: EPR prove that locality --> SDV (simultaneous definite values), which in turn shows that completeness is false... so long as you assume locality! Note in particular that if that's all you're talking about -- the EPR argument -- there is no implication whatsoever that reality is non-local, or anything like that. Now the second issue is Bell's theorem. This is the conjunction of 2 and 3 above: locality --> SDV, but then SDV --> conflict with QM predictions; hence locality --> conflict with QM predictions. What I want to stress here is that this completely disentangles from the "completeness doctrine" issue, the issue of whether or not there are hidden variables. That's what I wrote the other day, about "hidden variables" functioning merely as a "middle term" in the logic here. The point is, if you just run the EPR+Bell argument, i.e., prove that locality --> conflict with QM predictions, you don't make any *assumptions* about whether QM is complete or not, and you don't get to *infer* anything about whether QM is complete or not. It just doesn't speak to that at all one way or the other.
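To put the chain schematically (nothing new here, just points 2 and 3 restated in symbols, with "PC" abbreviating perfect correlations and "SDV" simultaneous definite values):

\text{Locality} \wedge \text{PC} \;\Longrightarrow\; \text{SDV} \quad\text{(the EPR step)}
\text{SDV} \;\Longrightarrow\; \text{Bell inequality} \quad\text{(the Bell step)}
\text{QM predicts, and experiment confirms, violations of the inequality}
\therefore\; \text{Locality is false.}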

Yet you insist on repeating over and over again that "realism [i.e., hidden variables] is quite relevant to the EPR paradox and therefore to Bell". This isn't exactly the wrongest thing ever, but it sure is misleading! You make it sound as if (indeed, I'm pretty sure you believe that) one needs to make some *assumption* about "realism" in order to run Bell's argument. But that isn't the case. And the fact that Bell's argument starts by recapitulating the EPR argument, and that the EPR argument has some implications about "realism" in another discussion, doesn't change that at all.


unless you are prepared to say that Bell's argument is an unnecessary step to disproving EPR, you cannot ignore realism.

Here you equivocate on "EPR". Does this mean the *argument* from locality --> SDV? Or does it mean the *conclusion*, namely, SDV?

My view is that the *argument* is entirely valid. However we now know, thanks to Bell, that the premise (namely, locality) is false. So we now know that the EPR argument doesn't tell us anything one way or the other about SDV/realism/HVs.

Do you disagree? If so, where is the flaw in the *argument* (recapitulated in 2 above)?

(1) Locality + Perfect Correlations -> Realism

(2) Since Realism is deduced, and not assumed in (1), then it is not a necessary condition to arrive at the Bell result.

I agree with (1) but disagree with (2). For the leap to occur from (1) to (2), you must assume there exist *simultaneous* Perfect Correlations.

Huh? Recall that "realism" here means (for example) that Bob's particle possesses simultaneous definite pre-existing values for both s_x and s_y, which values are simply revealed if an s_x or s_y measurement is made on the particle. Nothing more than that is needed to derive a Bell inequality. (Less, actually, is needed... but this should suffice here.)
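To spell out just how little is needed (a sketch; here A(\mathbf a), A(\mathbf a') are particle 1's pre-existing \pm 1 values for its two settings and B(\mathbf b), B(\mathbf c) are particle 2's, with C denoting the average of the product of outcomes):

A(\mathbf a)B(\mathbf b)-A(\mathbf a)B(\mathbf c)+A(\mathbf a')B(\mathbf b)+A(\mathbf a')B(\mathbf c) \;=\; A(\mathbf a)\big[B(\mathbf b)-B(\mathbf c)\big]+A(\mathbf a')\big[B(\mathbf b)+B(\mathbf c)\big] \;=\; \pm 2,

since one bracket vanishes and the other equals \pm 2. Averaging over pairs (with any distribution of pre-existing values whatsoever) bounds that combination by 2 in absolute value; doing the same with A(\mathbf a') replaced by -A(\mathbf a') bounds the other sign combination, and the two together give |C(\mathbf a,\mathbf b)-C(\mathbf a,\mathbf c)|+|C(\mathbf a',\mathbf b)+C(\mathbf a',\mathbf c)|\le 2.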


(3) Simultaneous (*see note below) Perfect Correlations -> Realism

Huh? You'll have to explain what this SPC means, and then run the proof.


Notice Locality is dropped as not being a necessary condition for this conclusion. On the other hand, Locality is required so you can satisfy the EPR requirement that you are not disturbing the particle being observed. So then you end up with:

(4) Locality + Realism -> Bell result

QED.

I'm sorry, I can't follow this at all.


Ultimately, depending on your perspective, you will adopt the definitions and requirements from EPR - or you will not. And that will drive what side you come down on.

I'm sorry, there is no such ambiguity in the definitions/requirements. The argument is clear. You haven't understood it properly.


* Simultaneous meaning: 3 or more. EPR had only 2 and that is why the EPR Paradox was not resolved prior to Bell. Bell added the 3rd. See after Bell's (14) where this is dropped in quietly.

No, no, no. 2 is plenty. You can have a Bell inequality with only 2 settings on each side; see CHSH. But it doesn't matter anyway. The same exact argument for

locality --> 2-realism

(where "2-realism" means s_x and s_y both have simultaneous definite hidden variable realistic values) also leads immediate to

locality --> 3-realism.

There is no difference at all. You are totally barking up the wrong tree.
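If anyone wants to check the "2 is plenty" point by brute force, here is a hypothetical sketch (plain Python, all names mine): enumerate every deterministic assignment of pre-existing ±1 values to Alice's two settings and Bob's two settings, and confirm that the CHSH combination never exceeds 2 (mixtures of such assignments then obey the bound too, by the triangle inequality).

from itertools import product

# Every "2-realism" state: pre-existing +/-1 values for Alice's settings a, a'
# and Bob's settings b, c.  Evaluate the CHSH combination for each.
worst = 0
for A_a, A_ap, B_b, B_c in product((+1, -1), repeat=4):
    chsh = abs(A_a * B_b - A_a * B_c) + abs(A_ap * B_b + A_ap * B_c)
    worst = max(worst, chsh)

print(worst)  # prints 2: no assignment of pre-existing values exceeds the bound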
 
  • #54
ThomasT said:
There's a third possibility. That there's no way to explicitly encode any locality condition in the function determining rate of coincidental detection that both clearly represents locality and which isn't at odds with the design and execution of Bell tests. At least I can't think of one.

How about Bell's locality condition? (See section 6 of the scholarpedia article.)
 
  • #55
ThomasT said:
Bell concludes that separable predetermination of rate of coincidental detection is ruled out. I agree. The key term here is separable. A nonseparable relationship between λa and λb can't be separated and encoded in the function determining coincidental detection vis-à-vis the functions that determine individual detection, and be expected to produce the same correlation curve that using a single nonvarying and nonseparable λ would.

I can't parse these words, but the issue is simple: does the kind of thing you have in mind respect, or not respect, Bell's definition of locality? If it does, it will make predictions in accord with the inequality (and hence in conflict with experiment). If it doesn't, it's nonlocal and you might as well adopt this simpler characterization of it.

That's what the theorem says. And ... paraphrasing mattt ... it's a theorem. You can't just claim that you "interpret" it differently because you don't like it. Point out the flaw in the proof, or reconcile yourself to it. Those are the options.
 
  • #56
ttn said:
No, no, no. 2 is plenty. You can have a Bell inequality with only 2 settings on each side; see CHSH.

CHSH has 4 settings: 0, 22.5, 45, 67.5. Bell used 3 for his: a, b, c. EPR-B used 2. So you are counting the wrong things. We know entangled pairs can only be measured at 2 angles at a time. But if 2 were plenty, we wouldn't have needed Bell. That is why the EPR Paradox was a "tie" until Bell arrived.

Again, my goal was not to debate the point (as we won't agree or change our minds) but to answer the question of WHY your perspective is not generally accepted. You do not define things the way the rest of us do.
 
  • #57
ttn said:
OK, OK, let's go through this again. It's not that complicated. There's no reason we can't all get onto the same page here.

1. Bohr asserts that "QM is complete". It's not entirely clear exactly what this is supposed to mean, but everybody agrees it at least means that particles can never possess "simultaneous definite values" for non-commuting observables. For example, no spin 1/2 particle can ever possess, at the same time, a definite value for s_x and s_y.

2. EPR (really this is Bohm's 1951 version, but who cares) argue as follows: you can create a pair of spin 1/2 particles such that measuring s_x of particle 1 allows you to know, with certainty, what a subsequent measurement of s_x of particle 2 will yield. And similarly for s_y. So imagine the following experiment: such a pair is created, with one particle going toward Bob and one toward Alice. Now Alice is going to flip a coin (or in some other "random" way, i.e., a way that in no way relates to the physical state of the two particles here under discussion) and measure s_x or s_y on her particle depending on the outcome of the coin flip. She will thus come to know, with certainty, the value of one of these two properties of Bob's particle. So far there is nothing controversial here; it is just a summary of certain of QM's predictions. But now let us *assume locality*. This has several implications here. First, the outcome of Alice's coin flip cannot influence the state of Bob's particle. Second, Alice's subsequent measurement of either s_x or s_y on her particle cannot influence the state of Bob's particle. Now think about what all this implies. Suppose Alice got heads and so measured s_x. Now it is uncontroversial that Bob's particle now possesses a definite s_x value. But it couldn't have acquired this value as a result of anything Alice did; so it must have had it all along. And since Alice could (for all Bob's particle knows) have flipped tails instead, Bob's particle must also have possessed an s_y value all along. (Suppose it didn't. But then, if Alice had got tails, which she might have, Bob's particle wouldn't know how to "answer" if its s_y was subsequently measured... so it might sometimes answer "wrong", i.e., contrary to the perfect correlations predicted by QM.) Conclusion: locality requires Bob's particle to possess simultaneous definite values for s_x and s_y. (A slightly more precise way to put this would be: simultaneous definite values which then simply get revealed by measurements, i.e., what are usually called "hidden variables", are the *only local way* to account for the perfect correlations that are observed when Alice and Bob measure along the same axis.) This conclusion of course contradicts Bohr's completeness doctrine, so for EPR (who took locality for granted, as an unquestioned premise) this showed that, contra Bohr, QM was actually *incomplete*.

3. Bell shows that these "simultaneously definite values that simply get revealed by measurements" (i.e., hidden variables) imply conflicts with other predictions of QM -- predictions we now know to be empirically correct. Bell concludes that these hidden variables are not the correct explanation of QM statistics, which in turn means that locality is false (since these hidden variables were the only way to locally explain *some* of the QM statistics).

Now the reason I wanted to lay this out is that you insist on grouping 1,2, and 3 together as if they were all some inseparable whole. But they're not. There are two different things going on here. The first one is: the EPR argument, which is a response to Bohr's completeness claim. The logic is simple: EPR prove that locality --> SDV (simultaneous definite values), which in turn shows that completeness is false... so long as you assume locality! Note in particular that if that's all you're talking about -- the EPR argument -- there is no implication whatsoever that reality is non-local, or anything like that. Now the second issue is Bell's theorem. This is the conjunction of 2 and 3 above: locality --> SDV, but then SDV --> conflict with QM predictions; hence locality --> conflict with QM predictions. What I want to stress here is that this completely disentangles from the "completeness doctrine" issue, the issue of whether or not there are hidden variables. That's what I wrote the other day, about "hidden variables" functioning merely as a "middle term" in the logic here. The point is, if you just run the EPR+Bell argument, i.e., prove that locality --> conflict with QM predictions, you don't make any *assumptions* about whether QM is complete or not, and you don't get to *infer* anything about whether QM is complete or not. It just doesn't speak to that at all one way or the other.

You are right. I see it crystal clear. Even for those who don't understand the previous explanation in words, in his scholarpedia article he proves it mathematically (with clearly stated mathematical definitions and mathematically correct proofs, as far as I could check).

The only way out I see (for those who don't like this result) is to show that what he calls "a necessary condition of locality" (and he defines it clearly in mathematical terms) is not a necessary condition of locality under YOUR definition of locality (and you must state your own definition of locality as clearly as possible and prove that it doesn't imply his condition).

Another way out is to believe in an incredibly grand cosmic conspiracy.
 
  • #58
assume the universe is a one path version of MWI.

there is no "non-locality" is there?
 
  • #59
ttn, in your description of the Alice and Bob experiment you keep talking about the two particles as separate systems, which they are not. I think it needs more careful phrasing.
 
  • #60
Hello Travis,

In the section titled "Bell's inequality theorem" you derive Bell's inequality supposing that the experimental outcomes were non-contextual (cf "To see this, suppose that the spin measurements for both particles do simply reveal pre-existing values."). To your credit, in the section on "Bell's theorem and non-contextual hidden variables" you discuss the fact that non-contextual hidden variables are naive and unreasonable.

You then proceed to show that you can still obtain the inequalities by assuming only locality in the section titled "The CHSH–Bell inequality: Bell's theorem without perfect correlations".

(1) You say
"While the values of A1 and A2 may vary from one run of the experiment to another even for the same choice of parameters, we assume that, for a fixed preparation procedure on the two systems, these outcomes exhibit statistical regularities. More precisely, we assume these are governed by probability distributions Pα1,α2(A1,A2) depending of course on the experiments performed, and in particular on α1 and α2."

By "statistical regularities" do you mean simply a probability distribution Pα1,α2(A1,A2) exists? Or are you talking about more than that.

(2) You say
"However, if locality is assumed, then it must be the case that any additional randomness that might affect system 1 after it separates from system 2 must be independent of any additional randomness that might affect system 2 after it separates from system 1. More precisely, locality requires that some set of data λ — made available to both systems, say, by a common source16 — must fully account for the dependence between A1 and A2 ; in other words, the randomness that generates A1 out of the parameter α1 and the data codified by λ must be independent of the randomness that generates A2 out of the parameter α2 and λ ."

What if instead you assumed that λ did not originate from the source but was instantaneously (non-locally) imparted from a remote planet to produce result A2 together with α2, and result A1 together with α1? How can you explain away the suggestion that the rest of your argument will now prove the impossibility of non-locality?

(3) You proceed to derive your expectation values Eα1,α2(A1A2|λ), defined over the probability measure Pα1,α2(⋅|λ), and ultimately Bell's inequality based on it:
C(\alpha_1,\alpha_2)=E_{\alpha_1,\alpha_2}(A_1A_2)=\int_\Lambda E_{\alpha_1,\alpha_2}(A_1A_2|\lambda)\,\mathrm dP(\lambda),
...
|C(\mathbf a,\mathbf b)-C(\mathbf a,\mathbf c)|+|C(\mathbf a',\mathbf b)+C(\mathbf a',\mathbf c)|\le2,

To make the following clear, I'm going to fully specify the implied notation in the above as follows:

|C(\mathbf a,\mathbf b|\lambda)-C(\mathbf a,\mathbf c|\lambda)|+|C(\mathbf a',\mathbf b|\lambda)+C(\mathbf a',\mathbf c|\lambda)|\le2,

Which starts revealing the problem: unless all terms in the above inequality are defined over the exact same probability measure, the above inequality does not make sense. In other words, the only way you were able to derive such an inequality was to assume that all the terms are defined over the exact same probability measure P(λ). Do you agree? If not, please show the derivation. In fact the very next "Proof" section explicitly confirms my statement.

(4) In the section titled "Experiments", you start by saying:
Bell's theorem brings out the existence of a contradiction between the empirical predictions of quantum theory and the assumption of locality.
(a) Now since you did not show it explicitly in the article, I presume that when you say Bell's theorem contradicts quantum theory, you mean you have calculated the LHS of the above inequality from quantum theory and it was greater than 2. Would you be so kind as to show the calculation and, in the process, explain how you made sure in your calculation that all the terms you used were defined over the exact same probability measure P(λ)?
(b) You also discussed how several experiments have demonstrated violation of Bell's inequality, I presume by also calculating the LHS and comparing with the RHS of the above. Are you aware of any experiments in which experimenters made sure the terms from their experiments were defined over the exact same probability measure?

(5) Since you obviously agree that non-contextual hidden variables are naive and unreasonable, let us look at the inequality from the perspective of how experiments are usually performed. For this purpose, I will rewrite the four terms obtained from a typical experiment as follows:

C(\mathbf a_1,\mathbf b_1)
C(\mathbf a_2,\mathbf c_2)
C(\mathbf a_3',\mathbf b_3)
C(\mathbf a_4',\mathbf c_4)

Where each term originates from a separate run of the experiment denoted by the subscripts. Let us assume for a moment that the same distribution of λ is in play for all the above terms. However, if we were to ascribe 4 different experimental contexts to the different runs, we will have the terms.

C(\mathbf a,\mathbf b|\lambda,1)
C(\mathbf a,\mathbf c|\lambda,2)
C(\mathbf a',\mathbf b|\lambda,3)
C(\mathbf a',\mathbf c|\lambda,4)

Where we have moved the indices into the conditions. We still find that each term is defined over a different probability measure P(λ,i), i=1,2,3,4 , where i encapsulates all the different conditions which make one run of the experiment different from another.

Therefore could you please explain why this is not a real issue when we compare experimental results with the inequality.
 
  • #61
billschnieder said:
Hello Travis,

Hi Bill, thanks for the thoughtful questions about the actual article! =)



By "statistical regularities" do you mean simply a probability distribution Pα1,α2(A1,A2) exists? Or are you talking about more than that.

Nothing more. But of course the real assumption is that this probability distribution can be written as in equations (3) and (4). In particular, that is where the "no conspiracies" and "locality" assumptions enter -- or really, here, are formulated.



What if instead you assumed that λ did not originate from the source but was instantaneously (non-locally) imparted from a remote planet to produce result A2 together with α2, and result A1 together with α1? How can you explain away the suggestion that the rest of your argument will now prove the impossibility of non-locality?

I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!



To make the following clear, I'm going to fully specify the implied notation in the above as follows:

|C(\mathbf a,\mathbf b|\lambda)-C(\mathbf a,\mathbf c|\lambda)|+|C(\mathbf a',\mathbf b|\lambda)+C(\mathbf a',\mathbf c|\lambda)|\le2,

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.

Which starts revealing the problem: unless all terms in the above inequality are defined over the exact same probability measure, the above inequality does not make sense. In other words, the only way you were able to derive such an inequality was to assume that all the terms are defined over the exact same probability measure P(λ). Do you agree?

No. You are confusing the probability P_{\alpha_1,\alpha_2}(\cdot|\lambda) with P(\lambda). You first average the product A_1 A_2 with respect to P_{\alpha_1,\alpha_2}(\cdot|\lambda) to get E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda). Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on \alpha_1 or \alpha_2.
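If a concrete picture of the two-step averaging helps, here is a toy local model (everything in it -- the uniform λ, the sign-function responses -- is invented purely for illustration, not taken from the article):

import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2 * np.pi, size=200_000)   # hidden variable, drawn from a
                                                   # P(lam) that never depends on settings

def A1(alpha, lam):          # Alice's local, deterministic +/-1 response
    return np.sign(np.cos(lam - alpha))

def A2(alpha, lam):          # Bob's local, deterministic +/-1 response
    return -np.sign(np.cos(lam - alpha))

def C(alpha1, alpha2):
    # Step 1: E(A1*A2 | lam) is just A1*A2 here, since the responses are deterministic.
    # Step 2: average over lam, with the SAME P(lam) for every setting pair.
    return np.mean(A1(alpha1, lam) * A2(alpha2, lam))

a, ap, b, c = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
chsh = abs(C(a, b) - C(a, c)) + abs(C(ap, b) + C(ap, c))
print(chsh)   # about 2.0: this local model can reach, but never exceed, the bound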




(a) Now since you did not show it explicitly in the article, I presume that when you say Bell's theorem contradicts quantum theory, you mean you have calculated the LHS of the above inequality from quantum theory and it was greater than 2. Would you be so kind as to show the calculation and, in the process, explain how you made sure in your calculation that all the terms you used were defined over the exact same probability measure P(λ)?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something. So just tell me where you're going. Your last statement makes no sense to me. In QM, λ is just the usual wave function or quantum state for the pair; typically we assume that this can be completely controlled, so P(λ) is a delta function. But in QM, you can't do the factorization that's done in equation (4). It's not a local theory. (Not that you need Bell's theorem to see/prove this.)
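For the record, here is that well-known number sketched in a few lines rather than copied from a book; the only physics input is the standard singlet-state prediction E(a,b) = -cos(θ_ab) for spin measurements along axes separated by angle θ_ab, and the angle choices below are just the usual ones:

import numpy as np

def C_qm(angle1, angle2):
    # singlet-state spin correlation for measurement axes at the given angles (radians)
    return -np.cos(angle1 - angle2)

a, ap, b, c = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
lhs = abs(C_qm(a, b) - C_qm(a, c)) + abs(C_qm(ap, b) + C_qm(ap, c))
print(lhs)   # 2*sqrt(2), about 2.83: the QM prediction exceeds the CHSH bound of 2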



(b) You also discussed how several experiments have demonstrated violation of Bell's inequality, I presume by also calculating the LHS and comparing with the RHS of the above. Are you aware of any experiments in which experimenters made sure the terms from their experiments were defined over the exact same probability measure?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.



(5) Since you obviously agree that non-contextual hidden variables are naive and unreasonable, let us look at the inequality from the perspective of how experiments are usually performed. For this purpose, I will rewrite the four terms obtained from a typical experiment as follows:

C(\mathbf a_1,\mathbf b_1)
C(\mathbf a_2,\mathbf c_2)
C(\mathbf a_3',\mathbf b_3)
C(\mathbf a_4',\mathbf c_4)

Where each term originates from a separate run of the experiment denoted by the subscripts. Let us assume for a moment that the same distribution of λ is in play for all the above terms. However, if we were to ascribe 4 different experimental contexts to the different runs, we will have the terms.

C(\mathbf a,\mathbf b|\lambda,1)
C(\mathbf a,\mathbf c|\lambda,2)
C(\mathbf a',\mathbf b|\lambda,3)
C(\mathbf a',\mathbf c|\lambda,4)

Where we have moved the indices into the conditions. We still find that each term is defined over a different probability measure P(λ,i), i=1,2,3,4 , where i encapsulates all the different conditions which make one run of the experiment different from another.

Therefore could you please explain why this is not a real issue when we compare experimental results with the inequality.

Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.
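Just to illustrate that with a deliberately silly toy (every function and number below is invented): the responses are perfectly local, but if the source is allowed to emit a different λ distribution depending on which setting pair will be used, the CHSH combination jumps from 2 to 4.

import random

random.seed(0)

# Hidden variable lam is 0 or 1.  Local, deterministic responses (a made-up toy):
def A1(setting, lam):            # Alice's outcome ignores lam entirely
    return +1

def A2(setting, lam):            # Bob's outcome depends on lam only for setting 'c'
    if setting == 'c':
        return -1 if lam == 0 else +1
    return +1

def C(s1, s2, lams):             # correlation, averaged over a list of lambdas
    return sum(A1(s1, l) * A2(s2, l) for l in lams) / len(lams)

N = 10_000

# No conspiracy: the SAME lambda distribution (50/50 here) for every setting pair.
lam_fair = [random.randint(0, 1) for _ in range(N)]
chsh = (abs(C('a', 'b', lam_fair) - C('a', 'c', lam_fair))
        + abs(C("a'", 'b', lam_fair) + C("a'", 'c', lam_fair)))
print(chsh)              # exactly 2.0: the inequality holds

# "Conspiracy": the lambda distribution differs by setting pair (all lam=0 for the
# (a,c) runs, all lam=1 for the (a',c) runs).  Same local responses, bound destroyed.
chsh_conspiracy = (abs(C('a', 'b', lam_fair) - C('a', 'c', [0] * N))
                   + abs(C("a'", 'b', lam_fair) + C("a'", 'c', [1] * N)))
print(chsh_conspiracy)   # 4.0 > 2, with no nonlocality anywhere in the model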

So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
 
  • #62
unusualname said:
assume the universe is a one path version of MWI.

there is no "non-locality" is there?

I don't know exactly what you mean by "one path version of MWI". But in general, about MWI, I'd say the problem is that there is no locality there either.
 
  • #63
DrChinese said:
CHSH has 4 settings: 0, 22.5, 45, 67.5.

but only 2 for each particle, which is (I thought) what you were talking about.

But the main point is that this whole counting (2, 3, 4) business is nonsensical. Can you really not follow the EPR argument, which establishes -- on the assumption of locality! -- that definite pre-existing values must exist... for one angle, for 2, for 3, for 113, for however many you care to prove? Let me just put it simply: the EPR argument shows that locality + perfect correlations implies definite pre-existing values for the spin/polarization along *all* angles.

Either you accept the validity of this or you don't. If you don't, tell me where it goes off the track. If you do, then there's nothing further to discuss because now, clearly, you can derive a Bell inequality.


We know entangled pairs can only be measured at 2 angles at a time.

Uh, you mean, each particle can be measured at 1 angle at a time? That's true. But why in the world does that matter? Nobody ever said you could measure (e.g.) all four of the correlation coefficients in the CHSH inequality on one single pair of particles!



Again, my goal was not to debate the point (as we won't agree or change our minds) but to answer the question of WHY your perspective is not generally accepted. You do not define things the way the rest of us do.

I don't hold out a lot of hope of changing your mind, either, but still, as long as you keep saying stuff that makes no sense, I will continue to call it out. Maybe somebody watching will learn something?

Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??
 
  • #64
ttn said:
So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
No, I don't think superdeterminism is the reason billschnieder rejects Bell. If you want to see my (unsuccessful) attempt to ascertain what exactly billschnieder is talking about, see the last page or so of this thread.
 
  • #65
ttn said:
I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality? I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.
If the C's are obtained by integrating over a certain probability distribution λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ, and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ which must be the same for each term. Do you disagree with this?

No. You are confusing the probability P_{\alpha_1,\alpha_2}(\cdot|\lambda) with P(\lambda). You first average the product A_1 A_2 with respect to P_{\alpha_1,\alpha_2}(\cdot|\lambda) to get E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda). Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on \alpha_1 or \alpha_2.
I don't think you are getting my point so let me try again using your Proof just above equation (5). Let us focus on what you are doing within the integral first. You start with (simplifying notation)

E(AB|λ) = E(A|λ)E(B|λ), which follows from your equation (4). Within the integral, you start with 4 terms based on this, presumably something like:

\big|E_{\mathbf a}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)-E_{\mathbf a}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|\,+\,\big|E_{\mathbf a'}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)+E_{\mathbf a'}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|

You then proceed to factor out the terms as follows:

\big|E_{\mathbf a}(A_1|\lambda)\big|\,\big(\big|E_{\mathbf b}(A_2|\lambda)-E_{\mathbf c}(A_2|\lambda)\big|\big)\,+\,\big|E_{\mathbf a'}(A_1|\lambda)\big|\,\big(\big|E_{\mathbf b}(A_2|\lambda)+E_{\mathbf c}(A_2|\lambda)\big|\big)

Remember, we are still dealing with what is within the integral. It is therefore clear that, according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment. In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common and the E(a′,b) and E(a′,c) must have the Ea′ term in common and the E(a,b) and E(a′,b) experiments must have the Eb term in common and E(a,c) and E(a′,c) experiments must have the Ec term in common. Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei which you have combined to form E(x,y)-type terms using your factorizability condition (equation 4). If you now consider the integral, you now have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

If the above condition does not hold, your proof fails. This is evidenced by the fact that you cannot complete your proof without the factorization you performed. Another way of looking at it is to say that all of the paired products within the integral depend on the same λ. The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above which allows you to factor terms out.

So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run, you collect 2 lists, for 4 runs you collect 8 lists. You then calculate averages for each pair (cf integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof, and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates. Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something.
OK, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously in the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.
Do you know of any experiment in which the 8 lists of numbers could be reduced to 4, as implied by your proof?
Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.

Your "no-consipracy" assumption boils down to : "the exact same series of λs apply to each run of the experiment"

As I hope you see now, all that is required for your "no-conspiracy" assumption to fail is for the actual distribution of λs to be different from one run to another, which is not unreasonable. I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different. But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run. My question to you therefore was if you knew of any experiment in which the experimenters made sure the exact same series of λs were realized for each run in order to be able to use the "no-conspiracy" assumption. Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Note: if the same series of λs applies for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.
 
  • #66
Demystifier said:
So, how should we call articles concerned with truth, but not containing new results?
Not sure I understand. If an article is concerned with truth, it should say something new about argumentation, perspective, whatever. If it says nothing new, then how is it concerned with truth? And if it says something new, then it is a research article.

EDIT: I just thought that there can be a new way to explain something (in the sense of teaching). In that case I am not sure about the answer.
 
  • #67
ttn said:
Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??

I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality, I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is inference, then realism is not assumed and you are correct.

My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails. So, for example, if I have a time-symmetric mechanism (local in that c is respected, but "quantum non-local" and not Bell local), it will fail the assumption of realism (since there are no definite values except where actually measured). MWI is exactly the same in this respect.

In other words, the existence of an explicitly contextual model invalidates the inference of realism. That is why it must be assumed. Anyway, you asked where the difference of opinion is, and this is it.
 
  • #68
ttn said:
OK then, I take it back. It's not a review article. It's an encyclopedia entry. Am I allowed to be concerned with truth now?
I guess not. Well, for example, Wikipedia has a very strict policy on neutrality - Wikipedia:Neutral point of view
And Scholarpedia:Aims and policy says:
"Scholarpedia does not publish "research" or "position" papers, but rather "living reviews" ..."
But of course it might be that Scholarpedia has a more relaxed attitude toward neutrality because they have other priorities.

ttn said:
Well, of course the details depend on exactly what the entangled state is, but for the states standardly used for EPR-Bell type experiments, I would accept that as a rough description. But what's the point? Surely there's no controversy about what the predictions of QM are??
Then certainly "perfect correlations" are not convincingly confirmed by the experiment. Only the other one i.e. "sinusoidal relationship" prediction.
 
  • #69
billschnieder said:
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality?

Quoting Bell: "It is notable that in this argument nothing is said about the locality, or even localizability, of the variable λ."


I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

I guess I missed the argument. How does assuming λ comes from Venus result in denying non-locality??


If the C's are obtained by integrating over a certain probability distribution λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ, and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ which must be the same for each term. Do you disagree with this?

At best, it's bad notation. If you want to give them a subscript or something, to make explicit that they are defined for a particular assumed ρ(λ), then give them the subscript ρ, not λ. The whole idea here is that (in general) there is a whole spectrum of possible values of λ, with some distribution ρ, that are produced when the experimenter "does the same thing at the particle source". There is no control over, and no knowledge of, the specific value of λ for a given particle pair.


It is therefore clear that, according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment.

Yes, correct.


In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common and the E(a′,b) and E(a′,c) must have the Ea′ term in common and the E(a,b) and E(a′,b) experiments must have the Eb term in common and E(a,c) and E(a′,c) experiments must have the Ec term in common.

Correct.


Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei which you have combined to form E(x,y)-type terms using your factorizability condition (equation 4).

Correct.



If you now consider the integral, you now have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

Just to make sure, by the "lists" you mean the functions (e.g.) E_a(A_1|\lambda)?


Another way of looking at it is to say that all of the paired products within the integral depend on the same λ.

No, they all assume the same *distribution* over the lambdas.


The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above which allows you to factor terms out.

I don't even know what that means. The things you are talking about are *functions* of λ. What does it even mean to say they "assume the same λ"? No one particular value of λ is being assumed anywhere. Suppose I have two functions of x: f(x) and g(x). Now I integrate their product from x=0 to x=1. Have I "assumed the same value of x"? I don't even know what that means. What you're doing is adding up, for all of the values x' of x, the product f(x')g(x'). No particular value of x is given any special treatment. Same thing here.


So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run, you collect 2 lists, for 4 runs you collect 8 lists. You then calculate averages for each pair (cf integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof, and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates.

Huh? Nothing at all implies that. The lists here are lists of outcome pairs, (A1, A2). The experimenters will take the list for a given "run" (i.e., for a given setting pair) and compute the average value of the product A1*A2. That's how the experimenters compute the correlation functions that the inequality constrains. You are somehow confusing what the experimentalists do, with what is going on in the derivation of the inequality.
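Concretely, for each setting pair the experimenters just do something like this (a hypothetical two-liner; the arrays are illustrative stand-ins, not real data):

import numpy as np

A1_ab = np.array([+1, -1, -1, +1, +1, -1])   # Alice's recorded outcomes, settings (a, b)
A2_ab = np.array([-1, -1, +1, +1, -1, -1])   # Bob's recorded outcomes, same runs

C_ab = np.mean(A1_ab * A2_ab)   # the measured correlation for that setting pair
print(C_ab)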



Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't even understand what you're saying. There is certainly no sense in which the experimenters' lists (of A1, A2 values) will look like, or even be comparable to, the "lists" I thought you had in mind above (namely, the one-sided expectation functions).



OK, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously in the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

The question doesn't arise. You are just calculating 4 different things -- the predictions of QM for a certain correlation in a certain experiment -- and then adding them together in a certain way. No assumption is made, or needed, or even meaningful, about each of the 4 calculations somehow being based on the same particle pair. (I say it's not even meaningful because what you're calculating is an expectation value -- not the kind of thing you could even measure with only a single pair.)


Do you know of any experiment in which the 8 lists of numbers could be reduced to 4, as implied by your proof?

?


Your "no-consipracy" assumption boils down to : "the exact same series of λs apply to each run of the experiment"

I don't know what you mean by "series of λs". What the assumption boils down to is: the distribution of λs (i.e., the fraction of the time that each possible value of λ is realized) is the same for the billion runs where the particles are measured along (a,b), the billion runs where the particles are measured along (a,c), etc. That is, basically, it is assumed that the settings of the instruments do not influence or even correlate with the state of the particle pairs emitted by the source.
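In symbols, the assumption is roughly that the distribution of λ does not depend on the settings:

P_{\alpha_1,\alpha_2}(\lambda) = P(\lambda) \quad \text{for all } \alpha_1, \alpha_2.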

Note that in the real experiments, the experimenters go to great length to try to have the instrument settings (for each pair) be chosen "randomly", i.e., by some physical process that is (as far as any sane person could think) totally unrelated to what's going on at the particle source. It really is just like a randomized drug trial, where you flip a coin to decide who will get the drug and who will get the placebo. You have to assume that the outcome of the coin flip for a given person is uninfluenced by and uncorrelated with the person's state of health.


As I hope you see now, all that is required for your "no-conspiracy" assumption to fail is for the actual distribution of λs to be different from one run to another, which is not unreasonable.

Yes, that's right. That's indeed exactly what would make it fail. We disagree about how unreasonable it is to deny this assumption, though. I tend to think, for example, that if a randomized drug trial shows that X cures cancer, you'd have to be pretty unreasonable to refuse to take the drug yourself (after you get diagnosed with cancer) on the grounds that the trial *assumed* that the distribution of initial healthiness for the drug and placebo groups was the same. This is an assumption that gets made (usually tacitly) whenever *anything* is learned/inferred from a scientific experiment. So to deny it is tantamount to denying the whole enterprise of trying to learn about nature through experiment.

I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different.

I think it's accurately-named, for the same reason.


But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run.

I disagree. It is normal in science to be ignorant of all the fine details that determine the outcomes. Think again of the drug trial. Would you say that, because the doctors don't know exactly what properties determine whether somebody dies of cancer or survives, therefore it is reasonable to assume that the group of people who got the drug (because some coin landed heads) is substantially different in terms of those properties than the group who got the placebo (because the coin landed tails)?


My question to you therefore was if you knew of any experiment in which the experimenters made sure the exact same series of λs were realized for each run in order to be able to use the "no-conspiracy" assumption.

Uh, again, the λs aren't something the experimenters know about. Indeed, nobody even knows for sure what they are -- different quantum theories say different things! That's what makes the theorem general/interesting: you don't have to say/know what they are exactly to prove that, whatever they are, if locality and no conspiracies are satisfied, you will get statistics that respect the inequality.


Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Of course I agree that the name doesn't make it so. The truth though is that we chose that name because we think it accurately reflects what the assumption actually amounts to. It's clear you disagree. Incidentally, did you read the whole article? There is some further discussion of this assumption elsewhere, so maybe that will help.


Note: if the same series of λs applies for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.

No, I don't agree. What you're saying here doesn't make sense. You're confusing the A's that the experimentalists measure, with the λs that only theorists care about.
 
  • #70
DrChinese said:
I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality, I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is inference, then realism is not assumed and you are correct.

But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was. Burying your head in the sand won't make the argument go away!


My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails.

The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.
 
  • #71
ttn said:
But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was.

I told you that your inference is wrong, and that is because there are explicit models that are non-realistic but local and they feature perfect correlations. For example:

http://arxiv.org/abs/0903.2642

Relational Blockworld: Towards a Discrete Graph Theoretic Foundation of Quantum Mechanics
W.M. Stuckey, Timothy McDevitt and Michael Silberstein

"BCTS [backwards-causation time-symmetric approaches] provides for a local account of entanglement (one without space-like influences) that not only keeps RoS [relativity of simultaneity], but in some cases relies on it by employing its blockworld consequence—the reality of all events past, present and future including the outcomes of quantum experiments (Peterson & Silberstein, 2009; Silberstein et al., 2007)."

So obviously, by our definitions, locality+PC does not imply realism as it does by yours. You must assume it, and that assumption is open to challenge. Again, I am simply explaining a position that should be clear at this point. A key point is including "simultaneous" with the perfect correlations. Realism, by definition, assumes that they are simultaneously real elements. For if they are not simultaneously real, you have equated realism and contextuality and that is not acceptable in the spirit of EPR.
 
  • #72
DrChinese said:
I told you that your inference is wrong, and that is because there are explicit models that are non-realistic but local and they feature perfect correlations.
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.
 
  • #73
ttn said:
The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.

It would if we also agreed A were true. :smile:
 
  • #74
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

MWI is such.

But no, I completely don't agree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any particular reason other than by convention. So by local, I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.
 
  • #75
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

Exactly.

Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

In any case, to recap, I think it is very silly to define "locality" in a way that embraces influences *from* the future light cone -- not only for the reason lugita15 gave above, but for the reason I mentioned earlier: with this definition, two "local" influences (from A to B and then from B to C) make a "nonlocal" influence (if A and C are spacelike separated). So the whole idea is actually quite incoherent: it doesn't rule *anything* out as "definitely in violation of locality". You can always just say "oh, that causal influence from A to C wasn't direct, it went through a B in the overlapping past or future light cones, so actually everything is local".
 
  • #76
DrChinese said:
It would if we also agreed A were true. :smile:

Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B" is false -- whether "A" is true or not.
 
  • #77
DrChinese said:
MWI is such.

But no, I completely don't agree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any particular reason other than by convention. So by local, I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.

OK, so then you are in full agreement with Bell's conclusion: the world is nonlocal. (Where "nonlocal" here means that Bell's notion of locality is violated.)
 
  • #78
Let me make this clear: Bohr did not think EPR's perfect correlations imply realism. Otherwise EPR was right and he was wrong about the completeness of QM, and he would have conceded defeat.

Further, Bohr didn't think locality + perfect correlations -> realism either, for the same reason. That too was part of EPR; where does Bohr mention it subsequently?

Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

So again, my answer is that Travis' definitions clearly do not line up with any movement, past or present, other than Bohmians. I am not asking anyone to change their minds, but I hope my points are obvious at this juncture.
 
  • #79
ttn said:
Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

Ah, but I did.
 
  • #80
DrChinese said:
Let me make this clear: Bohr did not think EPR's perfect correlations imply realism. Otherwise EPR was right and he was wrong about the completeness of QM, and he would have conceded defeat.

Bohr was a cotton-headed ninny-muggins.



Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

Huh? I really don't understand why this is so hard. The EPR argument was an argument that

locality + perfect correlations --> definite values for things that QM says can't have definite values

Einstein believed in locality, and he, like everyone else, accepted the "perfect correlations" as a probably-correct prediction of QM. Now why should he have "renounced locality"?
 
  • #81
ttn said:
Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B" is false -- whether "A" is true or not.
If A is true and B is false, then you can most certainly conclude that "A implies B" is false.
 
  • #82
DrChinese said:
Ah, but I did.

Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)
 
  • #83
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Anyway, I very much like the way he mathematically codifies the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations".

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.
 
  • #84
lugita15 said:
If A is true and B is false, then you can most certainly conclude that "A implies B" is false.

Yes, sorry. I was being sloppy. The issue is not really the truth of the conditional "A --> B", but the validity or invalidity of the argument for it. Remember what we're talking about here. There's an argument (the EPR argument, which can be made mathematically rigorous using Bell's definition of locality) that shows that locality + perfect correlations requires deterministic non-contextual hidden variables. The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued. Same for locality.

The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".
 
  • #85
mattt said:
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Yes.

Anyway, I very much like the way he mathematically codifies the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations".

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.

Yes, good, I'm glad you appreciate the generality! That is really what's so amazing and profound about Bell's theorem. (Incidentally, don't forget the "no conspiracies" assumption is made as well -- I agree that, at some point, one should stop bothering to mention this each time, since it's part and parcel of science, and so not really on the table in the same way "locality" is. But maybe as long as billschnieder and others are still engaging in the discussion, we should make it explicit!)
 
  • #86
ttn said:
Bohr was a cotton-headed ninny-muggins.

That's pretty good! :biggrin:
 
  • #87
ttn said:
Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)

Disagree in private, me?

There is a problem distinguishing Bell's Locality condition from the question of what "Locality" means, in the sense that the causal/temporal direction was assumed to run in one direction only. At this point, that cannot be assumed. It is fair to say that your definition is closest to what Bell intended, but I would not say it is closest to the most useful definition. Clearly, the relevant (useful) question is whether c is respected, regardless of the direction of time's arrow.
 
  • #88
DrChinese said:
Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality.

OK, on re-reading this, it doesn't even make sense to me.

:smile:
 
  • #89
ttn said:
The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".

EPR does demonstrate that if QM is complete and locality holds, then reality is contextual (which they consider unreasonable): "This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this."

They speculate (but nowhere prove) that a more complete specification of the system is possible. I guess you could also conclude that they say "we like locality and we like hidden variables". :smile: (I think commentator would be a good term.)

The bigger picture after EPR is that local realism and QM could have an uneasy coexistence, with Bohr denying realism and Einstein asserting the incompleteness of QM - both while looking at the same underlying facts. Bell did put the nail in that coffin in the sense that at least one or the other view had to be wrong.
 
  • #90
ttn said:
The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued.

I would agree that when discussing Bell's Theorem, you can do it "mostly" independently of the later no-gos. On the other hand, you should at least mention those no-gos that Bell has spawned, including those which attack realism (such as GHZ).

Of course, to do that you would need to accept realism as part of Bell. The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.
 
  • #91
DrChinese said:
The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.

1) EPRB: "locality"+"QM is correct"--->"pre-existing values"

2) Bell's Inequality: "pre-existing values"+"QM is correct"--->Contradiction.

3) Join 1) and 2) and you get: "locality"+"QM is correct"--->Contradiction.


All this is explained ((1) is not explained with total mathematical rigour at that stage, but (2) is) before his "CHSH-Bell inequality: Bell's Theorem without perfect correlations".

In this CHSH Theorem, what he proves is that "some very general mathematical setup (that accounts for almost any imaginable way a Theory could produce mathematical predictions, not only for those with pre-existing values)" + "factorizability condition"+"QM is correct"--->Contradiction.

Then he uses this CHSH-Theorem to prove mathematically (1).

In the end, to prove (1) with mathematical rigour he is using the CHSH theorem, so in reality he is also using his "very general mathematical setup" to state and prove the EPR argument with mathematical rigour.


But all you need to look at is the CHSH theorem (the rest is only to make it easier for those who cannot understand this CHSH theorem and its proof). That very important mathematical theorem states:

"a very general mathematical setup (that accounts for almost any imaginable way a Theory could produce mathematical predictions, not only for those with pre-existing values)"+"factorizability condition"+"QM is correct"--->Contradiction.
 
  • #92
Let me summarize my own viewpoint, and let's see how much agreement I can get. Let's suppose that QM is correct about all its experimental predictions. Then whenever you turn the polarizers to the same angle, you will get perfect correlation. From this you can reach three possible conclusions:

1. Even when you don't turn the polarizers to the same angle, it is still true that if you HAD turned the polarizers to the same angle, you WOULD have gotten perfect correlation.
2. When you don't turn the polarizers to the same angle, it makes no sense to ask what would have happened if you had turned them to the same angle.
3. When you don't turn the polarizers to the same angle, then it may be the case that you wouldn't have gotten perfect correlation if you had turned them to the same angle.

If we assume the principle of locality (i.e. excluding backward causation), then the only way option 3 would be possible is if the photons "knew" in advance what angle the polarizers would be turned to, or equivalently, whatever is controlling the experimental decisions about the polarizer settings "knew" in advance whether the two photons would do the same thing or not. That would be superdeterminism, and we exclude it by the no-conspiracy condition.

So now we have two options left. Quantum mechanics takes option 2. But if you believe in counterfactual definiteness, you are forced into option 1. And then if you accept option 1 and the principle of locality (again, excluding backward causation), you are forced to conclude that the decision of each photon to go through or not go through must be determined by local hidden variables that are shared by the two photons. Is this a fair summary of the EPR argument?
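
For concreteness, here is a toy numerical illustration of the kind of model that option 1 plus locality forces on you (a minimal sketch of my own, in Python, not anything from the article): each pair carries a shared hidden variable λ, each photon's result is a deterministic function of λ and its own local polarizer setting alone, and perfect correlation at equal settings then comes out automatically.

Code:
import math
import random

def outcome(setting, lam):
    """Deterministic local response: +/-1, fixed by the local setting and the shared lambda alone."""
    delta = abs((setting - lam + math.pi / 2) % math.pi - math.pi / 2)  # angular distance mod pi
    return +1 if delta < math.pi / 4 else -1

def photon_pair():
    """One emission event: both photons carry the same hidden variable lambda."""
    lam = random.uniform(0, math.pi)
    return (lambda s: outcome(s, lam)), (lambda s: outcome(s, lam))

angle = 0.7      # both polarizers set to the same angle
trials = 10_000
agree = sum(left(angle) == right(angle)
            for left, right in (photon_pair() for _ in range(trials)))
print(agree / trials)  # always 1.0: equal settings give identical predetermined answers

Of course, the whole point of Bell's inequality theorem is that no model of this general type can also reproduce the QM statistics at unequal settings; the sketch only shows that the perfect correlations themselves are trivial to get locally, which is why option 1 leads straight to local hidden variables.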
 
  • #93
DrChinese said:
There is a problem distinguishing Bell's Locality condition from the question of what "Locality" means, in the sense that the causal/temporal direction was assumed to run in one direction only. At this point, that cannot be assumed.

Except that, really, it already is being assumed, in the very act of using the words "cause" and "effect". A cause and an effect are two events that are linked by some law-governed process. Which one is the cause and which one is the effect, would be very hard to answer without just saying: the cause is the one that happens first, the effect is the one that happens later.

But, there's probably no point arguing about this. If we can agree that, on *Bell's* definition of "locality" (in which it is assumed that causation only goes forward in time), everything in the scholarpedia article is true, I will be satisfied. =)
 
  • #94
DrChinese said:
I would agree that when discussing Bell's Theorem, you can do it "mostly" independently of the later no-gos. On the other hand, you should at least mention those no-gos that Bell has spawned, including those which attack realism (such as GHZ).

I guess you haven't read sections 8 and 9 of the paper yet.


Of course, to do that you would need to accept realism as part of Bell. The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.

This is what I've explained several times already. For "Bell's theorem" (as we use that term, i.e., meaning the argument comprising both the EPR argument and "Bell's inequality theorem") the idea of "pre-existing values" or "realism" or whatever you want to call it, functions only as a middle term:

EPR: locality --> X

BIT: X --> inequality

Hence Bell's theorem: locality --> inequality.

If the two sub arguments are good arguments, then the conclusion follows, no matter what X is, whether you like X or not, whether you think X is true or not, etc.
 
  • #95
mattt said:
1) EPRB: "locality"+"QM is correct"--->"pre-existing values"

EPR-B would be summarized more like (using lingo of EPR):

[Ability to Predict with Certainty]
+ [Without first disturbing]
-> Element of Reality

To quote: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding lo this physical quantity."

This is for ONE particle, folks. Has nothing to do with two. The second particle is merely a tool to obtain the prediction, but any way you do that would be acceptable. The locality condition is implicit in the idea that you are not disturbing the particle you are making the prediction on, especially by way of transmitting the nature of how you were able to make the prediction in the first place. Note that we are *not* assuming QM is correct. Just that we would have a setup in which we could make a suitable prediction. That might agree with the QM prediction, sure, but that does not mean QM is correct in other particulars. The discussion about the details of QM relates to the fact that QM does not allow for distinct values for non-commuting operators.

[Elements of Reality]
+ [Reasonable definition of reality assumes their simultaneous existence]
= Realism (this is a definition, nothing to argue about here)

To quote: "In accordance with our criterion of reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality."

Realism -> More Completeness than QM/HUP allows

So there were 2 assumptions en route to the EPR-B conclusion: i) locality; ii) simultaneous elements of reality independent of observation. If you leave out ii) you end up with a definition of reality which they considered unreasonable. So they explicitly assume ii), and I will re-quote this for the Nth time:

"One could object to this conclusion on the grounds that our criterion of reality is not sufficiently restrictive. Indeed, one would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted. ... No reasonable definition of reality could be expected to permit this."

They just said that if the simultaneity requirement is dropped, their argument is squat. Bell didn't much bother to mention it, he thought it was so obvious. But guess what, it is actually important. If there is not a predetermined result from the hidden variables at angle settings which are counterfactual, you don't get any contradictions.

(Just ask billschnieder about this point. :smile: )
 
  • #96
mattt said:
But all you need to look at is the CHSH theorem (the rest is only to make it easier for those who cannot understand this CHSH theorem and its proof).

Yes, that's right. I almost always find that, once the "two part argument" character of Bell's overall argument is explained clearly, people get it right away. And that is nice, because both parts of the two part argument (namely, the EPR argument from locality to a certain kind of local HV's, and then the derivation of Bell's inequality from the local HV's) are pretty straightforward and can be explained clearly and convincingly without a lot of math. Dr C seems to have a block about it though... maybe for him it would be easier to get the point by looking at "Bell's theorem without perfect correlations"? It has the disadvantage of being a bit heavier mathematically, but does also have the crucial advantage that you never once have to even *mention* the "local realistic deterministic non-contextual hidden variable simultaneously definite values" that seem to be the source of the block.
 
  • #97
ttn said:
Yes, that's right. I almost always find that, once the "two part argument" character of Bell's overall argument is explained clearly, people get it right away. And that is nice, because both parts of the two part argument (namely, the EPR argument from locality to a certain kind of local HV's, and then the derivation of Bell's inequality from the local HV's) are pretty straightforward and can be explained clearly and convincingly without a lot of math. Dr C seems to have a block about it though... maybe for him it would be easier to get the point by looking at "Bell's theorem without perfect correlations"? It has the disadvantage of being a bit heavier mathematically, but does also have the crucial advantage that you never once have to even *mention* the "local realistic deterministic non-contextual hidden variable simultaneously definite values" that seem to be the source of the block.

As a way to introduce Bell's Theorem (BT) to beginners (and more), why not apply Bell to a classical local-realistic experiment?

PS: A challenge to do this (in an Einstein-local setting) already exists at https://www.physicsforums.com/showthread.php?p=3833480#post3833480. For some reason, it so far appears to be a stumbling block for those familiar with BT.

PPS: Travis, in the spirit of your OP, I am preparing a more detailed response to your article, which I very much appreciate. And for which I thank you! However, I expect that my comments will be critical (and hopefully helpful).

Some minor points include: the need for much better editing (to wit, the removal of repetition and the correction of typos); the re-location of much material to appendices; etc. The bias of the authors should also be made clear to the reader, bias (imho) being a crucial consideration when it comes to proposed review articles on subjects which are still controversial; the bias in this article seems to tend toward the Bohmian (given the assumptions)?

Could you therefore please advise the general tenor of each author's physical beliefs and conceptualisations; e.g., Bohmian, MWI, CI, etc?

At the moment my primary focus is on unwarranted assumptions in your article: assumptions which I test (and find wanting) against a clearly Einstein-local and realistic (because it is wholly classical physics) experiment. That's where the above (even-simpler) experiment comes in.

And that is why I would welcome your thoughts about it. Especially should it be the case that, from your response, I might see that any further critique from me would be superfluous.

With thanks again,

Gordon
 
  • #98
ttn said:
I guess you haven't read sections 8 and 9 of the paper yet.

Ah, good, I did miss that.
 
  • #99
lugita15 said:
Let me summarize my own viewpoint, and let's see how much agreement I can get. Let's suppose that QM is correct about all its experimental predictions. Then whenever you turn the polarizers to the same angle, you will get perfect correlation. From this you can reach three possible conclusions:

1. Even when you don't turn the polarizers to the same angle, it is still true that if you HAD turned the polarizers to the same angle, you WOULD have gotten perfect correlation.
2. When you don't turn the polarizers to the same angle, it makes no sense to ask what would have happened if you had turned them to the same angle.
3. When you don't turn the polarizers to the same angle, then it may be the case that you wouldn't have gotten perfect correlation if you had turned them to the same angle.

If we assume the principle of locality (i.e. excluding backward causation), then the only way option 3 would be possible is if the photons "knew" in advance what angle the polarizers would be turned to, or equivalently, whatever is controlling the experimental decisions about the polarizer settings "knew" in advance whether the two photons would do the same thing or not. That would be superdeterminism, and we exclude it by the no-conspiracy condition.

So now we have two options left. Quantum mechanics takes option 2. But if you believe in counterfactual definiteness, you are forced into option 1. And then if you accept option 1 and the principle of locality (again, excluding backward causation), you are forced to conclude that the decision of each photon to go through or not go through must be determined by local hidden variables that are shared by the two photons. Is this a fair summary of the EPR argument?

That's a nice, clear way to frame some issues. I agree completely with what you write in the first paragraph after the 1/2/3; 3 is out if you accept "no conspiracies". I don't agree, though, about your statement that "QM takes option 2" or even really that option 2 makes any sense as an option. QM, like any theory, tells you what will happen if you make certain measurements. It's just that it involves an element of (alleged) irreducible randomness: the first measurement collapses the 2-particle wave function (in an unpredictable, irreducibly random way), and subsequent predictions for what you will see if you make some measurement on the other particle are obviously affected. So the point is that QM is giving a *non-local* explanation for the statistics -- not that it's "denying counter-factual definiteness".

I really don't even know what this "counterfactual definiteness" stuff is supposed to mean. It seems to me inherently metaphysical. But we never need to get here into a discussion of what does or doesn't "really exist" in some counter-factual scenario. We just have to remember that we are talking about *theories* -- and a theory, by definition, is something that tells you what will happen *if you do such-and-such*. *All* of the predictions of a theory are in that sense hypothetical / counterfactual. Put it this way: the theory doesn't know and certainly doesn't care about what experiment you do in fact actually perform. It just tells you what will happen if you do such-and-such.

So back to your #2 above, of course it makes sense to ask what would have happened if you had turned the polarizers some other way. It makes just as much sense (after the fact, after you actually turned them one way) as it did before you did any experiment at all. How could the theory possibly care whether you've already done the experiment or not, and if so, which one you did? It doesn't care. It just tells you what happens in a given situation. QM works this way, and so does every other theory. So there really is no such thing as option #2.
 
  • #100
ttn said:
Quoting Bell: "It is notable that in this argument nothing is said about the locality, or even localizability, of the variable λ."
I guess I missed the argument. How does assuming λ comes from Venus result in denying non-locality??
If λ can be anything, then it can also be a non-local hidden variable. I'm trying to get you to explain how your derivation would be different if λ were non-local hidden variables. It appears your answer is that it wouldn't be different.

ttn said:
The whole idea here is that (in general) there is a whole spectrum of possible values of λ, with some distribution ρ, that are produced when the experimenter "does the same thing at the particle source". There is no control over, and no knowledge of, the specific value of λ for a given particle pair.
Experimenters calculate their correlations using ONLY particles actually measured. Aren't you therefore assuming that for a given particle pair, a particular value of λ is in play -- such that, in a given run of the experiment, you could in principle think of making a list of all of the actually measured values of λ and their relative frequencies (if you knew them), to obtain a distribution ρ(λ) that is applicable to the calculated correlation for that run? The actually measured distribution of λ for all 4 terms of the LHS must be identical according to your proof.

However, as you say that the λs are hidden and the experimenters know nothing about them, you must therefore be making an additional assumption: either that the distributions are the same for all 4 terms calculated from 4 runs of the experiment, or that all 4 measured distributions are identical to the distribution of λ leaving the source. Clearly you cannot make such assumptions without justification, and the justification cannot simply be some vague, imprecise statement about scientific inquiry.

ttn said:
Just to make sure, by the "lists" you mean the functions (e.g.) E_a(A_1|\lambda)?
I'm referring to the list of outcomes from the experiments. In order to calculate E(a,b) from an experiment, you have a list of pairs of numbers with values ±1, as long as the number of particle pairs you actually measured, and you calculate the mean of the paired product. For the 4 runs of the experiment used to obtain the 4 terms of the CHSH LHS, you therefore have 8 lists of numbers, or 4 pairs of lists. Therefore Ea, Eb, Ea', Ec each correspond to a single list of numbers.

ttn said:
Huh? Nothing at all implies that. The lists here are lists of outcome pairs, (A1, A2). The experimenters will take the list for a given "run" (i.e., for a given setting pair) and compute the average value of the product A1*A2. That's how the experimenters compute the correlation functions that the inequality constrains. You are somehow confusing what the experimentalists do, with what is going on in the derivation of the inequality.
I'm trying to make you see that what experimenters do is not compatible with the conditions implied in the derivation of the inequalities -- the factorization within the integral, without which the inequality cannot be obtained. I have already explained, and you agreed, that unless the *distribution* of λ is the same for the 4 CHSH LHS terms, the inequality is not derivable.
ttn said:
I don't even understand what you're saying. There is certainly no sense in which the experimenters' lists (of A1, A2 values) will look like, or even be comparable to, the "lists" I thought you had in mind above (namely, the one-sided expectation functions).

For the sake of illustration, assume we had a discrete set of lambdas, say (λ1, λ2, λ3, ... λn) for the theoretical case (forget about experiments for a moment). If we obtained E(a,b) by integrating over a series of λ values, say (λ1, λ2, λ4), the same must apply to E(a,c) and E(b,c) and all the other terms in the CHSH. In other words, you cannot prove the inequality if you use E(a,b) calculated over (λ1, λ2, λ4), with E(a,c) calculated over (λ6, λ3, λ2) and E(b,c) calculated over (λ5, λ9, λ8), because in that case ρ(λ) will not be the same across the terms and the proof will not follow. Each one-sided function, when considered in the context of the integral (or sum), obviously produces a codomain which corresponds to a list of values ±1. For the eight lists from the left side of the CHSH, we should be able to sort all lists in the order of the lambda indices, and if we do this, we must find duplicates and be able to reduce the 8 lists to only 4 lists. Placing these 4 lists side by side, therefore, the values in each row would have originated from the exact same λi value. Agreed?

You should then get something like this:


a b a' c
+ - - + λ1
- + - + λ2
- - + - λ3
... etc
+ - + - λn

where the last column corresponds to the actual value of lambda which resulted in the outcomes.
You can understand the list by saying the first row corresponds to A(a,λ1) = +1, A(b,λ1) = -1, A(a',λ1) = -1 and A(c, λ1) = +1

Note that the above is just another way of describing your factorization which you did within the proof. I'm just doing it this way because it makes it easier to see your error.

Now if we take the above theoretical case, and randomly pick a set of pairs from the a & b columns to calculate E1(a,b), randomly pick another set of pairs from the a and c columns to calculate E2(a,c), and the same for E3(a',b) and E4(a',c), don't you agree that this new case in which each term is obtained from a different "run" is more similar to the way the experiments are actually performed? Now starting with these terms, in order to prove the inequality, you have to make an additional assumption that the 8 lists of numbers used to calculate the inequality MUST be sortable and reducible to 4 as described above, simply because the inequality does not follow otherwise. Therefore you cannot reasonably conclude that violation of an inequality means non-locality unless you have also ruled out the possibility that the terms from the experiment are not compatible with the mathematical requirements for deriving the inequality.

ttn said:
The question doesn't arise. You are just calculating 4 different things -- the predictions of QM for a certain correlation in a certain experiment -- and then adding them together in a certain way.
Very interesting! Note however that, as I've explained above and you've mostly agreed, the terms in the LHS of the CHSH are not 4 different things. They are tightly linked to each other through the sharing of one-sided terms. The terms must not be assumed to be independent; they are linked to each other in a cyclic manner. I'm trying to get you to explain why you think using 4 different things in an inequality which expects 4 tightly coupled things is mathematically correct. Why do you think this error is not the source of the violation?

If I tell you that 2 + 2 = 4, anybody can "violate" it by saying 2 inches + 2 cm ≠ 4 inches. So you need justification before you can plug terms willy-nilly into the LHS of the inequality.

ttn said:
I don't know what you mean by "series of λs". What the assumption boils down to is: the distribution of λs (i.e., the fraction of the time that each possible value of λ is realized) is the same for the billion runs where the particles are measured along (a,b), the billion runs where the particles are measured along (a,c), etc. That is, basically, it is assumed that the settings of the instruments do not influence or even correlate with the state of the particle pairs emitted by the source.
I take it you assume measuring a billion times does something special to the result? You said earlier that the experimenters do not know anything about the nature or number of distinct λ values. So what makes you think "a billion" is enough? Let us then assume that there were 2 billion distinct values of λ. Will you still think a billion was enough?
ttn said:
What you're saying here doesn't make sense. You're confusing the A's that the experimentalists measure, with the λs that only theorists care about.

Theoretically, you can derive an inequality using terms which cannot all be simultaneously measured. However, it is naive for experimentalists to think they can just measure any terms and plug them into the inequalities.
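
As an aside, the role of the "same distribution across the 4 runs" assumption can be made concrete with a toy numerical check (a minimal sketch of my own, in Python, assuming a deterministic local model whose λ is drawn independently of the settings; nothing here is taken from the article). Four correlations computed from four completely separate runs, each with its own fresh sample of λs, still respect the CHSH bound up to sampling noise, precisely because every run samples the same ρ(λ). If ρ(λ) did depend on the settings (the conspiracy case), nothing below would constrain the sum.

Code:
import math
import random

def outcome(setting, lam):
    """Deterministic local response: +/-1, fixed by the local setting and lambda alone."""
    delta = abs((setting - lam + math.pi / 2) % math.pi - math.pi / 2)
    return +1 if delta < math.pi / 4 else -1

def E(setting_1, setting_2, n=200_000):
    """One experimental 'run': a fresh, independent sample of lambdas from the same rho(lambda)."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0, math.pi)   # the source does the same thing in every run
        total += outcome(setting_1, lam) * outcome(setting_2, lam)
    return total / n

a, a_prime = 0.0, math.pi / 4
b, b_prime = math.pi / 8, 3 * math.pi / 8

# Four terms from four separate runs, as in a real experiment:
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(S)   # converges to 2 (the local bound) for this model; QM predicts 2*sqrt(2) at these settings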
 
