Scholarpedia article on Bell's Theorem

In summary, the article is a one-sided overview of Bell's theorem and of the many criticisms raised against it.
  • #71
ttn said:
But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was.

I told you that your inference is wrong, because there are explicit models that are non-realistic but local and that still feature the perfect correlations. For example:

http://arxiv.org/abs/0903.2642

Relational Blockworld: Towards a Discrete Graph Theoretic Foundation of
Quantum Mechanics
W.M. Stuckey, Timothy McDevitt and Michael Silberstein

"BCTS [backwards-causation time-symmetric approaches] provides for a local account of entanglement (one without space-like influences) that not only keeps RoS [relativity of simultaneity], but in some cases relies on it by employing its blockworld consequence—the reality of all events past, present and future including the outcomes of quantum experiments (Peterson & Silberstein, 2009; Silberstein et al., 2007)."

So obviously, by our definitions, locality+PC does not imply realism, though it does by yours. You must assume realism, and that assumption is open to challenge. Again, I am simply explaining a position that should be clear at this point. A key point is the inclusion of "simultaneous" with the perfect correlations. Realism, by definition, assumes that they are simultaneously real elements. For if they are not simultaneously real, you have equated realism with contextuality, and that is not acceptable in the spirit of EPR.
 
  • #72
DrChinese said:
I told you that your inference is wrong, because there are explicit models that are non-realistic but local and that still feature the perfect correlations.
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.
 
  • #73
ttn said:
The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.

It would if we also agreed A were true. :smile:
 
  • #74
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

MWI is such a model.

But no, I completely disagree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any particular reason other than convention. So by local, I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.
 
  • #75
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

Exactly.

Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

In any case, to recap, I think it is very silly to define "locality" in a way that embraces influences *from* the future light cone -- not only for the reason lugita15 gave above, but for the reason I mentioned earlier: with this definition, two "local" influences (from A to B and then from B to C) make a "nonlocal" influence (if A and C are spacelike separated). So the whole idea is actually quite incoherent: it doesn't rule *anything* out as "definitely in violation of locality". You can always just say "oh, that causal influence from A to C wasn't direct, it went through a B in the overlapping past or future light cones, so actually everything is local".
 
  • #76
DrChinese said:
It would if we also agreed A were true. :smile:

Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B is false" -- whether "A" is true or not.
 
  • #77
DrChinese said:
MWI is such a model.

But no, I completely disagree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any particular reason other than convention. So by local, I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.

OK, so then you are in full agreement with Bell's conclusion: the world is nonlocal. (Where "nonlocal" here means that Bell's notion of locality is violated.)
 
  • #78
Let me make this clear: Bohr did not think EPR's perfect correlations imply realism. Otherwise EPR was right and he was wrong about the completeness of QM, and he would have conceded defeat.

Further, Bohr didn't think locality+perfect correlations->realism for the same reason. That too was part of EPR, and where does Bohr mention this subsequently?

Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

So again, my answer is that Travis' definitions clearly do not line up with any movement, past or present, other than Bohmians. I am not asking anyone to change their minds, but I hope my points are obvious at this juncture.
 
  • #79
ttn said:
Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

Ah, but I did.
 
  • #80
DrChinese said:
Let me make this clear: Bohr did not think EPR's perfect correlations imply realism. Otherwise EPR was right and he was wrong about the completeness of QM, and he would have conceded defeat.

Bohr was a cotton-headed ninny-muggins.



Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

Huh? I really don't understand why this is so hard. The EPR argument was an argument that

locality + perfect correlations --> definite values for things that QM says can't have definite values

Einstein believed in locality, and he, like everyone else, accepted the "perfect correlations" as a probably-correct prediction of QM. Now why should he have "renounced locality"?
 
  • #81
ttn said:
Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B is false" -- whether "A" is true or not.
If A is true and B is false, then you can most certainly conclude that "A implies B" is false.
 
  • #82
DrChinese said:
Ah, but I did.

Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)
 
  • #83
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Anyway, I very much like the way he mathematically codifies the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations" section.

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.
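Schematically, in the standard notation (which may differ slightly from the exact symbols used in the article), the two ingredients are the factorizability (locality) condition

[tex]P(A_1, A_2 \mid a, b, \lambda) = P(A_1 \mid a, \lambda)\, P(A_2 \mid b, \lambda)[/tex]

and the resulting CHSH bound

[tex]\left| E(a,b) + E(a,b') + E(a',b) - E(a',b') \right| \leq 2,[/tex]

whereas QM predicts [itex]2\sqrt{2}[/itex] for suitably chosen settings.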
 
  • #84
lugita15 said:
If A is true and B is false, then you can most certainly conclude that "A implies B" is false.

Yes, sorry. I was being sloppy. The issue is not really the truth of the conditional "A --> B", but the validity or invalidity of the argument for it. Remember what we're talking about here. There's an argument (the EPR argument, which can be made mathematically rigorous using Bell's definition of locality) that shows that locality + perfect correlations requires deterministic non-contextual hidden variables. The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued. Same for locality.

The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".
 
  • #85
mattt said:
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Yes.

Anyway, I very much like the way he mathematically codifies the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations" section.

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.

Yes, good, I'm glad you appreciate the generality! That is really what's so amazing and profound about Bell's theorem. (Incidentally, don't forget the "no conspiracies" assumption is made as well -- I agree that, at some point, one should stop bothering to mention this each time, since it's part and parcel of science, and so not really on the table in the same way "locality" is. But maybe as long as billschnieder and others are still engaging in the discussion, we should make it explicit!)
 
  • #86
ttn said:
Bohr was a cotton-headed ninny-muggins.

That's pretty good! :biggrin:
 
  • #87
ttn said:
Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)

Disagree in private, me?

There is a problem distinguishing Bell's Locality condition from the question of what "Locality" means, in the sense that the causal/temporal direction was assumed to run in one direction only. At this point, that cannot be assumed. It is fair to say that your definition is closest to what Bell intended, but I would not say it is closest to the most useful definition. Clearly, the relevant (useful) question is whether c is respected, regardless of the direction of time's arrow.
 
  • #88
DrChinese said:
Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality.

OK, on re-reading this, it doesn't even make sense to me.

:rofl:
 
  • #89
ttn said:
The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".

EPR does demonstrate that if QM is complete and locality holds, then reality is contextual (which they consider unreasonable): "This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this."

They speculate (but nowhere prove) that a more complete specification of the system is possible. I guess you could also conclude that they say "we like locality and we like hidden variables". :smile: (I think commentator would be a good term.)

The bigger picture after EPR is that local realism and QM could have an uneasy coexistence, with Bohr denying realism and Einstein asserting the incompleteness of QM - both while looking at the same underlying facts. Bell did put the nail in that coffin in the sense that at least one or the other view had to be wrong.
 
  • #90
ttn said:
The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued.

I would agree that when discussing Bell's Theorem, you can do it "mostly" independently of the later no-gos. On the other hand, you should at least mention those no-gos that Bell has spawned, including those which attack realism (such as GHZ).

Of course, to do that you would need to accept realism as part of Bell. The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.
 
  • #91
DrChinese said:
The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.

1) EPRB: "locality"+"QM is correct"--->"pre-existing values"

2) Bell's Inequality: "pre-existing values"+"QM is correct"--->Contradiction.

3) Join 1) and 2) and you get: "locality"+"QM is correct"--->Contradiction.


All of this is explained ( (1) is not explained with total mathematical rigour at that stage, but (2) is ) before his "CHSH-Bell Inequality: Bell's Theorem without perfect correlations".

In this CHSH theorem, what he proves is that "some very general mathematical setup (one that accounts for almost any imaginable way a theory could produce mathematical predictions, not only those with pre-existing values)" + "factorizability condition" + "QM is correct" ---> Contradiction.

Then he uses this CHSH theorem to prove (1) mathematically.

In the end, to prove (1) with mathematical rigour he uses the CHSH theorem, so in reality he is also using his "very general mathematical setup" to state and prove the EPR argument with mathematical rigour.


But all you really need to look at is the CHSH theorem (the rest is only there to make things easier for those who cannot follow the CHSH theorem and its proof). That very important mathematical theorem states:

"a very general mathematical setup (that accounts for almost any imaginable way a theory could produce mathematical predictions, not only those with pre-existing values)" + "factorizability condition" + "QM is correct" ---> Contradiction.
 
  • #92
Let me summarize my own viewpoint, and let's see how much agreement I can get. Let's suppose that QM is correct about all its experimental predictions. Then whenever you turn the polarizers to the same angle, you will get perfect correlation. From this you can reach three possible conclusions:

1. Even when you don't turn the polarizers to the same angle, it is still true that if you HAD turned the polarizers to the same angle, you WOULD have gotten perfect correlation.
2. When you don't turn the polarizers to the same angle, it makes no sense to ask what would have happened if you had turned them to the same angle.
3. When you don't turn the polarizers to the same angle, then it may be the case that you wouldn't have gotten perfect correlation if you had turned them to the same angle.

If we assume the principle of locality (i.e. excluding backward causation), then the only way option 3 would be possible is if the photons "knew" in advance what angle the polarizers would be turned to, or, equivalently, if whatever is controlling the experimenters' decisions about the polarizer settings "knew" in advance whether the two photons would do the same thing or not. That would be superdeterminism, and we exclude it by the no-conspiracy condition.

So now we have two options left. Quantum mechanics takes option 2. But if you believe in counterfactual definiteness, you are forced into option 1. And then if you accept option 1 and the principle of locality (again, excluding backward causation), you are forced to conclude that the decision of each photon to go through or not go through must be determined by local hidden variables that are shared by the two photons. Is this a fair summary of the EPR argument?
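(For concreteness, the "perfect correlation" I'm relying on above is just the standard textbook prediction, not anything specific to the article: for photon pairs prepared in the usual state that gives perfect agreement, the probability that the two photons do the same thing at polarizer angles α and β is [itex]P_{same} = \cos^2(\alpha - \beta)[/itex], which equals 1 whenever α = β.)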
 
  • #93
DrChinese said:
There is a problem distinguishing Bell's Locality condition from the question of what "Locality" means, in the sense that the causal/temporal direction was assumed to run in one direction only. At this point, that cannot be assumed.

Except that, really, it already is being assumed, in the very act of using the words "cause" and "effect". A cause and an effect are two events that are linked by some law-governed process. Which one is the cause and which one is the effect would be very hard to answer without just saying: the cause is the one that happens first, the effect is the one that happens later.

But, there's probably no point arguing about this. If we can agree that, on *Bell's* definition of "locality" (in which it is assumed that causation only goes forward in time), everything in the scholarpedia article is true, I will be satisfied. =)
 
  • #94
DrChinese said:
I would agree that when discussing Bell's Theorem, you can do it "mostly" independently of the later no-gos. On the other hand, you should at least mention those no-gos that Bell has spawned, including those which attack realism (such as GHZ).

I guess you haven't read sections 8 and 9 of the paper yet.


Of course, to do that you would need to accept realism as part of Bell. The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.

This is what I've explained several times already. For "Bell's theorem" (as we use that term, i.e., meaning the argument comprising both the EPR argument and "Bell's inequality theorem") the idea of "pre-existing values" or "realism" or whatever you want to call it, functions only as a middle term:

EPR: locality --> X

BIT: X --> inequality

Hence Bell's theorem: locality --> inequality.

If the two sub arguments are good arguments, then the conclusion follows, no matter what X is, whether you like X or not, whether you think X is true or not, etc.
 
  • #95
mattt said:
1) EPRB: "locality"+"QM is correct"--->"pre-existing values"

EPR-B would be summarized more like this (using the lingo of EPR):

[Ability to Predict with Certainty]
+ [Without first disturbing]
-> Element of Reality

To quote: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding lo this physical quantity."

This is for ONE particle, folks. It has nothing to do with two. The second particle is merely a tool to obtain the prediction, but any way you do that would be acceptable. The locality condition is implicit in the idea that you are not disturbing the particle you are making the prediction on, especially by way of transmitting the nature of how you were able to make the prediction in the first place. Note that we are *not* assuming QM is correct. Just that we would have a setup in which we could make a suitable prediction. That might agree with the QM prediction, sure, but that does not mean QM is correct in other particulars. The discussion about the details of QM relates to the fact that QM does not allow for distinct values for non-commuting operators.

[Elements of Reality]
+ [Reasonable definition of reality assumes their simultaneous existence]
= Realism (this is a definition, nothing to argue about here)

To quote: "In accordance with our criterion of reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality."

Realism -> More Completeness than QM/HUP allows

So there were 2 assumptions en route to the EPR-B conclusion: i) locality; ii) simultaneous elements of reality independent of observation. If you leave out ii) you end up with a definition of reality which they considered unreasonable. So they explicitly assume ii), and I will re-quote this for the Nth time:

"One could object to this conclusion on the grounds that our criterion of reality is not sufficiently restrictive. Indeed, one would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted. ... No reasonable definition of reality could be expected to permit this."

They just said that if the simultaneity requirement is dropped, their argument is squat. Bell didn't much bother to mention it; he thought it was so obvious. But guess what, it is actually important. If there is not a predetermined result from the hidden variables at angle settings which are counterfactual, you don't get any contradictions.

(Just ask billschnieder about this point. :smile: )
 
  • #96
mattt said:
But all you need to look at is CHSH-Theorem (the rest is only to make it easier for those who can not understand this CHSH-Theorem and proof).

Yes, that's right. I almost always find that, once the "two part argument" character of Bell's overall argument is explained clearly, people get it right away. And that is nice, because both parts of the two part argument (namely, the EPR argument from locality to a certain kind of local HV's, and then the derivation of Bell's inequality from the local HV's) are pretty straightforward and can be explained clearly and convincingly without a lot of math. Dr C seems to have a block about it though... maybe for him it would be easier to get the point by looking at "Bell's theorem without perfect correlations"? It has the disadvantage of being a bit heavier mathematically, but does also have the crucial advantage that you never once have to even *mention* the "local realistic deterministic non-contextual hidden variable simultaneously definite values" that seem to be the source of the block.
 
  • #97
ttn said:
Yes, that's right. I almost always find that, once the "two part argument" character of Bell's overall argument is explained clearly, people get it right away. And that is nice, because both parts of the two part argument (namely, the EPR argument from locality to a certain kind of local HV's, and then the derivation of Bell's inequality from the local HV's) are pretty straightforward and can be explained clearly and convincingly without a lot of math. Dr C seems to have a block about it though... maybe for him it would be easier to get the point by looking at "Bell's theorem without perfect correlations"? It has the disadvantage of being a bit heavier mathematically, but does also have the crucial advantage that you never once have to even *mention* the "local realistic deterministic non-contextual hidden variable simultaneously definite values" that seem to be the source of the block.

As a way to introduce Bell's Theorem (BT) to beginners (and more), why not apply Bell to a classical local-realistic experiment?

PS: A challenge to do this (in an Einstein-local setting) already exists at https://www.physicsforums.com/showthread.php?p=3833480#post3833480. For some reason, it so far appears to be a stumbling block for those familiar with BT.

PPS: Travis, in the spirit of your OP, I am preparing a more detailed response to your article, which I very much appreciate. And for which I thank you! However, I expect that my comments will be critical (and hopefully helpful).

Some minor points include: the need for much better editing; to wit, the removal of repetition and the correction of typos; the re-location of much material to appendices; etc. The bias of the authors should be made clear to the reader; bias (imho) being a crucial consideration when it comes to proposed review articles on subjects which are still controversial; the bias in the article tending to the Bohmian (given the assumptions)?

Could you therefore please advise the general tenor of each author's physical beliefs and conceptualisations; e.g., Bohmian, MWI, CI, etc?

At the moment my primary focus is on unwarranted assumptions in your article: assumptions which I test (and find wanting) against a clearly Einstein-local and realistic (because it is wholly classical physics) experiment. That's where the above (even-simpler) experiment comes in.

And that is why I would welcome your thoughts about it. Especially should it be the case that, from your response, I might see that any further critique from me would be superfluous.

With thanks again,

Gordon
 
  • #98
ttn said:
I guess you haven't read sections 8 and 9 of the paper yet.

Ah, good, I did miss that.
 
  • #99
lugita15 said:
Let me summarize my own viewpoint, and let's see how much agreement I can get. Let's suppose that QM is correct about all its experimental predictions. Then whenever you turn the polarizers to the same angle, you will get perfect correlation. From this you can reach three possible conclusions:

1. Even when you don't turn the polarizers to the same angle, it is still true that if you HAD turned the polarizers to the same angle, you WOULD have gotten perfect correlation.
2. When you don't turn the polarizers to the same angle, it makes no sense to ask what would have happened if you had turned them to the same angle.
3. When you don't turn the polarizers to the same angle, then it may be the case that you wouldn't have gotten perfect correlation if you had turned them to the same angle.

If we assume the principle of locality (i.e. excluding backward causation), then the only way option 3 would be possible is if the photons "knew" in advance what angle the polarizers would be turned to, or, equivalently, if whatever is controlling the experimenters' decisions about the polarizer settings "knew" in advance whether the two photons would do the same thing or not. That would be superdeterminism, and we exclude it by the no-conspiracy condition.

So now we have two options left. Quantum mechanics takes option 2. But if you believe in counterfactual definiteness, you are forced into option 1. And then if you accept option 1 and the principle of locality (again, excluding backward causation), you are forced to conclude that the decision of each photon to go through or not go through must be determined by local hidden variables that are shared by the two photons. Is this a fair summary of the EPR argument?

That's a nice, clear way to frame some issues. I agree completely with what you write in the first paragraph after the 1/2/3; 3 is out if you accept "no conspiracies". I don't agree, though, about your statement that "QM takes option 2" or even really that option 2 makes any sense as an option. QM, like any theory, tells you what will happen if you make certain measurements. It's just that it involves an element of (alleged) irreducible randomness: the first measurement collapses the 2-particle-wave-function (in an unpredictable, irreducibly random way), and subsequent predictions for what you will see if you make some measurement on the other particle are obviously affected. So the point is that QM is giving a *non-local* explanation for the statistics -- not that it's "denying counter-factual definiteness".

I really don't even know what this "counterfactual definiteness" stuff is supposed to mean. It seems to me inherently metaphysical. But we never need to get here into a discussion of what does or doesn't "really exist" in some counter-factual scenario. We just have to remember that we are talking about *theories* -- and a theory, by definition, is something that tells you what will happen *if you do such-and-such*. *All* of the predictions of a theory are in that sense hypothetical / counterfactual. Put it this way: the theory doesn't know and certainly doesn't care about what experiment you do in fact actually perform. It just tells you what will happen if you do such-and-such.

So back to your #2 above, of course it makes sense to ask what would have happened if you had turned the polarizers some other way. It makes just as much sense (after the fact, after you actually turned them one way) as it did before you did any experiment at all. How could the theory possibly care whether you've already done the experiment or not, and if so, which one you did? It doesn't care. It just tells you what happens in a given situation. QM works this way, and so does every other theory. So there really is no such thing as option #2.
 
  • #100
ttn said:
Quoting Bell: "It is notable that in this argument nothing is said about the locality, or even localizability, of the variable λ."
I guess I missed the argument. How does assuming λ comes from Venus result in denying non-locality??
If λ can be anything, then it can also be a non-local hidden variable. I'm trying to get you to explain how your derivation would be different if λ were a non-local hidden variable. It appears your answer is that it won't be different.

The whole idea here is that (in general) there is a whole spectrum of possible values of λ, with some distribution ρ, that are produced when the experimenter "does the same thing at the particle source". There is no control over, and no knowledge of, the specific value of λ for a given particle pair.
Experimenters calculate their correlations using ONLY particles actually measured. Aren't you therefore assuming that for a given particle pair, a particular value of λ is in play? Such that in a given run of the experiment, you could in principle think of making a list of all of the actually measured values of λ and their relative frequencies (if you knew them), to obtain a distribution of ρ(λ) that is applicable to the calculated correlation for the given run of the experiment? The actually measured distribution of λ for all 4 terms of the LHS must be identical according to your proof.

However as you say that the λs are hidden and the experimenters know nothing about it, you must therefore be making an additional assumption that the distributions are the same for all 4 terms calculated from 4 runs of the experiment, or you could be assuming that all 4 measured distributions are identical to the distribution of λ leaving the source? Clearly you can not make such assumptions without justification and the justification can not simply be some vague imprecise statement about scientific inquiry.

Just to make sure, by the "lists" you mean the functions (e.g.) [itex]E_a(A_1|\lambda)[/itex]?
I'm referring to the lists of outcomes from the experiments. In order to calculate E(a,b) from an experiment, you have a list of pairs of numbers with values ±1, as long as the number of particle pairs you actually measured, and you calculate the mean of the paired products. For the 4 runs of the experiment used to obtain the 4 terms of the CHSH LHS, you therefore have 8 lists of numbers, or 4 pairs of lists. Therefore Ea, Eb, Ea', Ec each correspond to a single list of numbers.
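Schematically, the bookkeeping for a single run looks like this (made-up outcomes, just for illustration):

[code]
# Made-up +/-1 outcomes for one run at the setting pair (a, b):
A1_outcomes = [+1, -1, -1, +1, +1, -1]   # side-1 results
A2_outcomes = [-1, -1, +1, +1, -1, -1]   # side-2 results

# E(a,b) is estimated as the mean of the paired products.
E_ab = sum(x * y for x, y in zip(A1_outcomes, A2_outcomes)) / len(A1_outcomes)
print(E_ab)
[/code]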

Huh? Nothing at all implies that. The lists here are lists of outcome pairs, (A1, A2). The experimenters will take the list for a given "run" (i.e., for a given setting pair) and compute the average value of the product A1*A2. That's how the experimenters compute the correlation functions that the inequality constrains. You are somehow confusing what the experimentalists do, with what is going on in the derivation of the inequality.
I'm trying to make you see that what experimenters do is not compatible with the conditions implied in the derivation of the inequalities -- the factorization within the integral, without which the inequality can not be obtained. I have already explained and you agreed that unless the *distribution* of λ is the same for the 4 CHSH LHS terms, the inequality is not derivable.
I don't even understand what you're saying. There is certainly no sense in which the experimenters' lists (of A1, A2 values) will look like, or even be comparable to, the "lists" I thought you had in mind above (namely, the one-sided expectation functions).

For the sake of illustration, assume we had a discrete set of lambdas, say (λ1, λ2, λ3, ... λn), for the theoretical case (forget about experiments for a moment). If we obtained E(a,b) by integrating over a series of λ values, say (λ1, λ2, λ4), the same must apply to E(a,c) and E(b,c) and all the other terms in the CHSH. In other words, you cannot prove the inequality if you use E(a,b) calculated over (λ1, λ2, λ4) with E(a,c) calculated over (λ6, λ3, λ2) and E(b,c) calculated over (λ5, λ9, λ8), because in that case ρ(λ) will not be the same across the terms and the proof will not follow. Each one-sided function, when considered in the context of the integral (or sum), obviously produces a codomain which corresponds to a list of values, ±1. For the eight lists from the left side of the CHSH, we should be able to sort all the lists in the order of the lambda indices, and if we do this, we must find duplicates and be able to reduce the 8 lists to only 4 lists. Placing these 4 lists side by side, therefore, the values in each row would have originated from the exact same λi value. Agreed?

You should then get something like this:


a b a' c
+ - - + λ1
- + - + λ2
- - + - λ3
... etc
+ - + - λn

where the last column corresponds to the actual value of lambda which resulted in the outcomes.
You can understand the list by saying the first row corresponds to A(a,λ1) = +1, A(b,λ1) = -1, A(a',λ1) = -1 and A(c, λ1) = +1

Note that the above is just another way of describing your factorization which you did within the proof. I'm just doing it this way because it makes it easier to see your error.

Now if we take the above theoretical case, and randomly pick a set of pairs from the a & b columns to calculate E1(a,b), randomly pick another set of pairs from the a and c columns to calculate E2(a,c), and the same for E3(a',b) and E4(a',c), don't you agree that this new case, in which each term is obtained from a different "run", is more similar to the way the experiments are actually performed? Now, starting with these terms, in order to prove the inequality you have to make an additional assumption: that the 8 lists of numbers used to calculate the inequality MUST be sortable and reducible to 4 as described above, simply because the inequality does not follow otherwise. Therefore you can not conclude reasonably that violation of an inequality means non-locality unless you have also ruled out the possibility that the terms from the experiment are not compatible with the mathematical requirements for deriving the inequality.

The question doesn't arise. You are just calculating 4 different things -- the predictions of QM for a certain correlation in a certain experiment -- and then adding them together in a certain way.
Very interesting! Note however that, as I've explained above and you've mostly agreed, the terms in the LHS of the CHSH are not 4 different things. They are tightly linked to each other through the sharing of one-sided terms. The terms must not be assumed to be independent. They are linked to each other in a cyclic manner. I'm trying to get you to explain why you think using 4 different things in an inequality which expects 4 tightly coupled things is mathematically correct. Why do you think this error is not the source of the violation?

If I tell you that 2 + 2 = 4, anybody can violate it by saying 2 inches + 2 cm ≠ 4 inches. So you need justification before you can plug terms willy-nilly into the LHS of the inequality.

I don't know what you mean by "series of λs". What the assumption boils down to is: the distribution of λs (i.e., the fraction of the time that each possible value of λ is realized) is the same for the billion runs where the particles are measured along (a,b), the billion runs where the particles are measured along (a,c), etc. That is, basically, it is assumed that the settings of the instruments do not influence or even correlate with the state of the particle pairs emitted by the source.
I take it you assume measuring a billion times does something special to the result? You said earlier that the experimenters do not know anything about the nature or number of distinct λ values. So what makes you think "a billion" is enough? Let us then assume that there were 2 billion distinct values of λ. Will you still think a billion was enough?
What you're saying here doesn't make sense. You're confusing the A's that the experimentalists measure, with the λs that only theorists care about.

Theoretically, you can derive an inequality using terms which cannot all be simultaneously measured. However, it is naive for experimentalists to think they can just measure any terms and plug them into the inequalities.
 
  • #101
billschnieder said:
If λ can be anything, then it can also be a non-local hidden variable. I'm trying to get you to explain how your derivation would be different if λ were a non-local hidden variable. It appears your answer is that it won't be different.

Yes, it won't be different. Indeed, if you asked me to characterize what λ is, in non-mathematical terms, I'd just admit openly that it's a "not-necessarily-local hidden variable". (Of course, the terminology "hidden variable" isn't ideal, since that connotes something specifically *supplementary* to the ordinary QM wf, which needn't at all be the case. Maybe "not-necessarily-local outcome-influencing variable".)


Experimenters calculate their correlations using ONLY particles actually measured. Aren't you therefore assuming that for a given particle pair, a particular value of λ is in play?

Yes, each pair should have some particular λ.


Such that in a given run of the experiment, you could in principle think of making a list of all of the actually measured values of λ and their relative frequencies (if you knew them), to obtain a distribution of ρ(λ) that is applicable to the calculated correlation for the given run of the experiment? The actually measured distribution of λ for all 4 terms of the LHS must be identical according to your proof.

Yes, I think that's right. Of course, you can't/don't actually measure the values of λ. But apparently you meant this as a hypothetical, as in "if you could somehow magically measure them, then you could write down what the value was for each particle pair and look later at their statistical distributions in the different runs".


However as you say that the λs are hidden and the experimenters know nothing about it, you must therefore be making an additional assumption that the distributions are the same for all 4 terms calculated from 4 runs of the experiment

Yes, I've admitted this openly. We stress it in the article! Yes, yes yes. We *assume* that the distributions are the same for all 4 runs, i.e., for all 4 possible values of the setting parameters. That is, we assume that the distribution of λs is independent of the settings. We call this the "no conspiracy" assumption. Yes, this assumption is needed to derive the inequality. Yes, yes, yes.
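In symbols (standard notation, not necessarily the article's exact notation), the "no conspiracy" assumption is just

[tex]\rho(\lambda \mid a, b) = \rho(\lambda) \quad \text{for all settings } a, b.[/tex]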



or you could be assuming that all 4 measured distributions are identical to the distribution of λ leaving the source?

I don't understand that. There is no consideration of the λs changing in time. If they change in time (between when they leave the source and "later" when they "do their thing" and influence the outcomes somehow), then we need only ever talk about the "later" values and their distribution.



Clearly you can not make such assumptions without justification and the justification can not simply be some vague imprecise statement about scientific inquiry.

Not everything can be deduced mathematically. If you find the assumption unreasonable, that's cool. Just say you accept the mathematical proof, but find the "no conspiracies" assumption unreasonable. Don't keep saying there's an "error" in the proof!





You can understand the list by saying the first row corresponds to A(a,λ1) = +1, A(b,λ1) = -1, A(a',λ1) = -1 and A(c, λ1) = +1

Sorry, I don't understand it. We are here deliberately trying to avoid the assumption that λ plus the local setting *determine* the outcome. That is, we are here deliberately trying to allow that there is some "residual indeterminism" at the measurement event. So I don't know where you got these functions A.


Note that the above is just another way of describing your factorization which you did within the proof. I'm just doing it this way because it makes it easier to see your error.

The factorization here is in terms of the E's. The idea is that λ and the local setting should determine the probabilities for the possible outcomes, hence the expected value; but they need not uniquely determine the outcome; we don't assume determinism.


Therefore you can not conclude reasonably that violation of an inequality means non-locality unless you have also ruled out the possibility that the terms from the experiment are not compatible with the mathematical requirements for deriving the inequality.

I don't know how to say it any more plainly. Yes, the conclusion of nonlocality only follows if you make the "no conspiracies" assumption.

The terms must not be assumed to be independent.

Just out of curiosity, would you say the same thing in the coin flip / drug trial analogy I described before? That is, does it violate your sense of scientific propriety to "just assume, without proof" that the coin flip outcomes are uncorrelated with the precise health status of the patients?


I take it you assume measuring a billion times does something special to the result? You said earlier that the experimenters do not know anything about the nature or number of distinct λ values. So what makes you think "a billion" is enough? Let us then assume that there were 2 billion distinct values of λ. Will you still think a billion was enough?

I agree that, in principle, it might not be. But we know -- from experience/experiment -- that the statistics *converge* as you do more and more trials. That is, it takes a certain number of trials to get some accuracy, but then as you keep going things "settle in" to some values and don't change as you do more and more trials. Thus, the question of how many trials is enough is an empirical one: do enough trials such that doing more doesn't change the answer. Any experimentalist will tell you that in the case at hand the actual experiments already involve way more than enough trials.

Of course, a certain kind of mathematical rationalist will remain unsatisfied by this: "can you *prove* that things won't suddenly start changing if I take just one more billion data points?" No, I can't prove it. But you run the risk here of committing to a level of skepticism that would have you rejecting every single empirical scientific claim ever made. And that, in my book, cannot be rational.
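If it helps, here is the kind of thing I mean, as a toy simulation of my own (nothing specific to the actual Bell experiments): the estimated correlation settles down as the number of trials grows.

[code]
# Toy illustration of statistical convergence: simulate pairs that agree
# with probability p_same, so the true correlation is 2*p_same - 1 = 0.7.
import random
random.seed(0)

def estimated_correlation(n_trials, p_same=0.85):
    products = [+1 if random.random() < p_same else -1 for _ in range(n_trials)]
    return sum(products) / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, round(estimated_correlation(n), 3))   # estimates approach 0.7
[/code]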
 
  • #102
ttn said:
billschnieder said:
Aren't you therefore assuming that for a given particle pair, a particular value of λ is in play?
Yes, each pair should have some particular λ.
...

billschnieder said:
a b a' c
+ - - + λ1
- + - + λ2
- - + - λ3
... etc
+ - + - λn

where the last column corresponds to the actual value of lambda which resulted in the outcomes. You can understand the list by saying the first row corresponds to A(a,λ1) = +1, A(b,λ1) = -1, A(a',λ1) = -1 and A(c, λ1) = +1

Sorry, I don't understand it. We are here deliberately trying to avoid the assumption that λ plus the local setting *determine* the outcome. That is, we are here deliberately trying to allow that there is some "residual indeterminism" at the measurement event. So I don't know where you got these functions A.

It is baffling to me what part of the above you do not understand. Those functions are from Bell's equation (1). In fact, A1 and A2 in your article are generated by functions of this type, are they not? In any case, you did not say whether you agree or disagree with the requirement to be able to reduce the 8 lists of numbers to 4. Was that part not clear also?

The factorization here is in terms of the E's. The idea is that λ and the local setting should determine the probabilities for the possible outcomes, hence the expected value; but they need not uniquely determine the outcome; we don't assume determinism.
I don't recall using the word determinism. All I stated was the obvious fact, acknowledged by Bell in his first equation, that a given outcome for one particular particle is a function of the instrument setting and the specific lambda which is in play during the measurement of that one particle, i.e. A(a, λ) = ±1. In any case, I did not say each particular value of λ must occur only once in the list, or that every occurrence of λ must be the same. All I'm saying, and you appear to agree, is that the way in which the inequality is derived demands that the 8 lists be reducible to 4.

I don't know how to say it any more plainly. Yes, the conclusion of nonlocality only follows if you make the "no conspiracies" assumption.
But it is not sufficient to just "say" it; you have to demonstrate what exactly you mean by "no conspiracy", and hopefully this exercise is bringing out the fact that your "no-conspiracy" assumption is simply the assumption that the probability distributions of the λs actually realized in each run of the experiment are exactly identical to each other. This is an unreasonable assumption which can be, and in fact is, violated in many cases where no conspiracy is in play.

Just out of curiosity, would you say the same thing in the coin flip / drug trial analogy I described before? That is, does it violate your sense of scientific propriety to "just assume, without proof" that the coin flip outcomes are uncorrelated with the precise health status of the patients?
This is a contrived analogy with little relevance to what we are discussing. I could randomly pick people from the physics forum membership by flipping a coin, ask them if they like physics, and conclude from the results that >90% of people like physics; and then, when questioned, I could argue that the coin flip outcomes are uncorrelated with the disposition of the people towards physics. So arguments like this do not fly in such discussions.

Let me give an analogy which is more relevant to the issue here (borrowed from rlduncan):

Three fair coins are tossed simultaneously by three individuals. For simplicity, let's call them a, b, and c, and let each coin be tossed eight times. It follows that the outcomes must obey the following inequality:

nab(HH) + nbc(HH) ≥ nac(HH)

To see this, consider the following outcomes for the three coins:

a = HTTTHTHH
b = TTHHTHHH
c = HTHTTTHH

2 + 3 ≥ 3, so the inequality is satisfied.

However, if in an experiment you decide to perform three different runs such that you obtain

a1 = HTTHTHHH
b1 = THHTTHTT

b2 = HTHHTHHT
c1 = TTTTHHTH

a2 = THHTHTTH
c2 = HHHTHTTT

You now have 1 + 1 ≥ 3, which is false: the inequality is violated. Why is that? The reason is simply that the three terms in the inequality are not independent. They are calculated from only 3 lists of outcomes, so there is a cyclic dependency. In the latter experiment, however, we have 6 distinct lists not reducible to 3!
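If anyone wants to check the counting, here is a quick sketch that just tallies the lists above:

[code]
# Count the tosses on which both lists show heads.
def n_HH(x, y):
    return sum(1 for u, v in zip(x, y) if u == 'H' and v == 'H')

# One simultaneous run, three lists: the inequality holds.
a, b, c = "HTTTHTHH", "TTHHTHHH", "HTHTTTHH"
print(n_HH(a, b), n_HH(b, c), n_HH(a, c))            # 2 3 3
print(n_HH(a, b) + n_HH(b, c) >= n_HH(a, c))         # True

# Three separate runs, six lists: the inequality is violated.
a1, b1 = "HTTHTHHH", "THHTTHTT"
b2, c1 = "HTHHTHHT", "TTTTHHTH"
a2, c2 = "THHTHTTH", "HHHTHTTT"
print(n_HH(a1, b1), n_HH(b2, c1), n_HH(a2, c2))      # 1 1 3
print(n_HH(a1, b1) + n_HH(b2, c1) >= n_HH(a2, c2))   # False
[/code]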

I agree that, in principle, it might not be. But we know -- from experience/experiment -- that the statistics *converge* as you do more and more trials. That is, it takes a certain number of trials to get some accuracy, but then as you keep going things "settle in" to some values and don't change as you do more and more trials.
The issue is not the accuracy of an individual value but the relationship between one value and the others. In isolation, you can predict each value separately. But when considering them jointly in an inequality which was derived the way you did, you cannot do that, because once one term is determined, the domain of applicability of the others is automatically restricted and their values are no longer the same as what you predicted for the general case. You do not solve this problem by measuring different series of particles an infinite number of times. As in the above example with fair coins, it doesn't matter how many times you throw the coins; you will still violate the inequality easily.

Of course, a certain kind of mathematical rationalist will remain unsatisfied by this: "can you *prove* that things won't suddenly start changing if I take just one more billion data points?" No, I can't prove it. But you run the risk here of committing to a level of skepticism that would have you rejecting every single empirical scientific claim ever made. And that, in my book, cannot be rational.
You misunderstand. It is up to you to complete your proof before you make extraordinary claims that locality is refuted. As you now admit, no experimenter can ever be sure that the same distribution of λ applies to all the terms they calculated. If that is the basis on which you reject locality, then it is indeed a weak basis.
 
  • #103
ttn said:
... Anyway, I just thought it might be helpful to advertise the existence of a really systematic, careful review article on Bell's Theorem that Goldstein, Tausk, Zanghi, and I finished last year (after working on it for more than a year). It's free online here

http://www.scholarpedia.org/article/Bell%27s_theorem

and addresses very explicitly and clearly a number of the issues being debated on the other several current Bell's Theorem threads. It is, in my hardly unbiased opinion, far and away the best and most complete existing resource for really understanding Bell's Theorem, so anybody with a remotely serious interest in the topic should study the article. I'd be happy to try to answer any questions anybody has, but post them here and base them somehow on the scholarpedia article since I won't have time to follow (let alone get entangled in) all the parallel threads.

Travis

Travis, just in case you missed it, I've added a PPS to my post at the top of this page:
Gordon Watson said:
Just click the arrow!


Its drift is thus: In the spirit of your OP, I am preparing a more detailed response to your article, which I very much appreciate. And for which I thank you! However, I expect that my comments will be critical (and hopefully helpful).

Some minor points include: the need for much better editing; to wit, the removal of repetition and the correction of typos; the re-location of much material to appendices; etc. The bias of the authors should be made clear to the reader; bias (imho) being a crucial consideration when it comes to proposed review articles on subjects which are still controversial; the bias in the article tending to the Bohmian (given the assumptions)?

Could you therefore please advise the general tenor of each author's physical beliefs and conceptualisations; e.g., Bohmian, MWI, CI, etc?

At the moment my primary focus is on unwarranted assumptions in your article: assumptions which I test (and find wanting) against a clearly Einstein-local and realistic experiment (because it is based wholly on classical physics).

There's an even simpler experiment in the thread "A challenge to discuss Bell's theorem (in an Einstein-local setting)" at https://www.physicsforums.com/showthread.php?p=3833480#post3833480. For some reason, it so far appears to be a stumbling block for those familiar with BT.

How about you?

I would welcome your thoughts about it here: It is relevant to your article. And more especially: It could be the case that, from your response, I might see that any further critique from me would be superfluous or unwarranted (being wrong).

With thanks again,

GW
 
  • #104
I thought I should also ask what kind of conspiracy you think is taking place in the coin-flip example which violates the inequality.

Just say you accept the mathematical proof, but find the "no conspiracies" assumption unreasonable. Don't keep saying there's an "error" in the proof!
And just to be clear, I do not believe there is an error in the proof itself. There are two errors:

1 - Thinking that the terms from QM could be meaningfully plugged into the LHS of the CHSH.
2 - Thinking that the terms from Experiments could be meaningfully plugged into the LHS of the CHSH.

The terms from experiment and from QM are not circularly linked in the way the derivation of the inequality requires. So this is an extra assumption being made in order to use those terms. In order to proclaim that violation of the inequality disproves locality, you first have to prove that this assumption is valid for the QM terms and the experimental data. This has not been done in this article or in any other "Bell's theorem implies non-locality" article.

This is not extreme skepticism; it is simply a matter of sound reasoning. Extraordinary claims require extraordinary proof.
 
  • #105
billschnieder said:
There are two errors:

1 - Thinking that the terms from QM could be meaningfully plugged into the LHS of the CHSH.
[...]
This is not extreme skepticism; it is simply a matter of sound reasoning.
At least this part IS extreme skepticism. I thought your (fringe) point of view was that any local hidden variable theory WOULD satisfy a Bell inequality, and thus would contradict QM in principle, but that this inequality would be absolutely untestable experimentally because you can't measure three polarization attributes of one entangled pair. (I'm not agreeing with your point, just saying what I thought your point was.) But now are you saying that in addition to all that, you're even skeptical about whether this untestable Bell inequality contradicts QM at all, even theoretically?
 
