Scholarpedia article on Bell's Theorem

The discussion centers on a newly published review article on Bell's Theorem by Goldstein, Tausk, Zanghi, and the author, which aims to clarify ongoing debates surrounding the theorem. The article is presented as a comprehensive resource for understanding Bell's Theorem, addressing various contentious issues. However, some participants express disappointment, noting that it lacks references to significant critiques of non-locality and fails to mention historical connections to Boole's inequalities. The conversation highlights differing interpretations of terms like "non-locality" and "realism," with some advocating for a more nuanced understanding. Overall, the article is seen as a valuable contribution, yet it also invites scrutiny and further discussion on its claims and omissions.
  • #61
billschnieder said:
Hello Travis,

Hi Bill, thanks for the thoughtful questions about the actual article! =)



By "statistical regularities" do you mean simply that a probability distribution Pα1,α2(A1,A2) exists? Or are you talking about more than that?

Nothing more. But of course the real assumption is that this probability distribution can be written as in equations (3) and (4). In particular, that is where the "no conspiracies" and "locality" assumptions enter -- or really, here, are formulated.



What if instead you assumed that λ did not originate from the source but was instantaneously (non-locally) imparted from a remote planet to produce result A2 together with α2, and result A1 together with α1. How can you explain away the suggestion that the rest of your argument will now prove the impossibility of non-locality?

I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!



To make the following clear, I'm going to fully specify the implied notation in the above as follows:

|C(\mathbf a,\mathbf b|\lambda)-C(\mathbf a,\mathbf c|\lambda)|+|C(\mathbf a',\mathbf b|\lambda)+C(\mathbf a',\mathbf c|\lambda)|\le2,

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.

Which starts revealing the problem: unless all terms in the above inequality are defined over the exact same probability measure, the inequality does not make sense. In other words, the only way you were able to derive such an inequality was to assume that all the terms are defined over the exact same probability measure P(λ). Do you agree?

No. You are confusing the probability P_{\alpha_1,\alpha_2}(\cdot|\lambda) with P(\lambda). You first average the product A_1 A_2 with respect to P_{\alpha_1,\alpha_2}(\cdot|\lambda) to get E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda). Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on \alpha_1 or \alpha_2.
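To make the two-stage averaging concrete, here is a minimal numerical sketch of an invented toy local model (the response function sign(cos(λ − α)) and the settings are illustrative assumptions, not anything from the article): each side's outcome depends only on its own setting and the shared λ, and P(λ) is the same regardless of settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local model: lambda is a shared random angle emitted by the source.
# P(lambda) does not depend on the settings ("no conspiracies"), and each
# side's +/-1 outcome depends only on its own setting and lambda (locality).
lam = rng.uniform(0.0, 2.0 * np.pi, 200_000)

def outcome(setting, lam):
    # deterministic local response function A(setting, lambda) = +/-1
    return np.sign(np.cos(lam - setting))

def C(alpha1, alpha2):
    # first the product A1*A2 at each lambda, then the average over P(lambda)
    return np.mean(outcome(alpha1, lam) * outcome(alpha2, lam))

a, ap, b, c = np.pi / 4, np.pi / 8, 0.0, np.pi / 2
S = abs(C(a, b) - C(a, c)) + abs(C(ap, b) + C(ap, c))
print(S)  # ~1.0 for these settings; no model of this form can exceed 2
```

However the four settings are varied, S stays at or below 2 for this kind of model, which is the content of the inequality.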




(a) Now since you did not show it explicitly in the article, I presume when you say Bell's theorem contradicts quantum theory, you mean you have calculated the LHS of the above inequality from quantum theory and it was greater than 2. Would you be so kind as to show the calculation, and in the process explain how you made sure in your calculation that all the terms you used were defined over the exact same probability measure P(λ)?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something. So just tell me where you're going. Your last statement makes no sense to me. In QM, λ is just the usual wave function or quantum state for the pair; typically we assume that this can be completely controlled, so P(λ) is a delta function. But in QM, you can't do the factorization that's done in equation (4). It's not a local theory. (Not that you need Bell's theorem to see/prove this.)
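For reference, the well-known QM calculation can be spelled out in a few lines. This sketch assumes the standard singlet-state correlation C(x, y) = −cos(x − y) and the usual violation-maximizing settings:

```python
import math

# QM prediction for the spin-singlet correlation at settings x and y
def C(x, y):
    return -math.cos(x - y)

a, ap = 0.0, math.pi / 2             # one side's two settings
b, c = math.pi / 4, 3 * math.pi / 4  # the other side's two settings

S = abs(C(a, b) - C(a, c)) + abs(C(ap, b) + C(ap, c))
print(S)  # 2*sqrt(2) ≈ 2.828, exceeding the bound of 2
```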



(b) You also discussed how several experiments have demonstrated violation of Bell's inequality, I presume by also calculating the LHS and comparing with the RHS of the above. Are you aware of any experiments in which experimenters made sure the terms from their experiments were defined over the exact same probability measure?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.



(5) Since you obviously agree that non-contextual hidden variables are naive and unreasonable, let us look at the inequality from the perspective of how experiments are usually performed. For this purpose, I will rewrite the four terms obtained from a typical experiment as follows:

C(\mathbf a_1,\mathbf b_1)
C(\mathbf a_2,\mathbf c_2)
C(\mathbf a_3',\mathbf b_3)
C(\mathbf a_4',\mathbf c_4)

Where each term originates from a separate run of the experiment, denoted by the subscripts. Let us assume for a moment that the same distribution of λ is in play for all the above terms. However, if we were to ascribe 4 different experimental contexts to the different runs, we would have the terms:

C(\mathbf a,\mathbf b|\lambda,1)
C(\mathbf a,\mathbf c|\lambda,2)
C(\mathbf a',\mathbf b|\lambda,3)
C(\mathbf a',\mathbf c|\lambda,4)

Where we have moved the indices into the conditions. We still find that each term is defined over a different probability measure P(λ,i), i = 1, 2, 3, 4, where i encapsulates all the different conditions which make one run of the experiment different from another.

Therefore could you please explain why this is not a real issue when we compare experimental results with the inequality.

Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.

So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
 
  • #62
unusualname said:
assume the universe is a one path version of MWI.

there is no "non-locality" is there?

I don't know exactly what you mean by "one path version of MWI". But in general, about MWI, I'd say the problem is that there is no locality there either.
 
  • #63
DrChinese said:
CHSH has 4 settings: 0, 22.5, 45, 67.5.

but only 2 for each particle, which is (I thought) what you were talking about.

But the main point is that this whole counting (2, 3, 4) business is nonsensical. Can you really not follow the EPR argument, which establishes -- on the assumption of locality! -- that definite pre-existing values must exist... for one angle, for 2, for 3, for 113, for however many you care to prove. Let me just put it simply: the EPR argument shows that locality + perfect correlations implies definite pre-existing values for the spin/polarization along *all* angles.

Either you accept the validity of this or you don't. If you don't, tell me where it goes off the track. If you do, then there's nothing further to discuss because now, clearly, you can derive a Bell inequality.


We know entangled pairs can only be measured at 2 angles at a time.

Uh, you mean, each particle can be measured at 1 angle at a time? That's true. But why in the world does that matter? Nobody ever said you could measure (e.g.) all four of the correlation coefficients in the CHSH inequality on one single pair of particles!



Again, my goal was not to debate the point (as we won't agree or change our minds) but to answer the question of WHY your perspective is not generally accepted. You do not define things the way the rest of us do.

I don't hold out a lot of hope of changing your mind, either, but still, as long as you keep saying stuff that makes no sense, I will continue to call it out. Maybe somebody watching will learn something?

Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??
 
  • #64
ttn said:
So I guess your issue is just what I speculated above: you do not accept the reasonableness of "no conspiracies", or didn't realize this assumption was being made. (I doubt it's the latter since we drum this home big time in that section especially, and elsewhere.)
No, I don't think superdeterminism is the reason billschnieder rejects Bell. If you want to see my (unsuccessful) attempt to ascertain what exactly billschnieder is talking about, see the last page or so of this thread.
 
  • #65
ttn said:
I don't understand. The λ here should be thought of as "whatever fully describes the state of the particle pair, or whatever you want to call the 'data' that influences the outcomes -- in particular, the part of that 'data' which is independent of the measurement interventions". It doesn't really matter where it comes from, though obviously if you have some theory where it swoops in at the last second from Venus, that would be a nonlocal theory.

But mostly I don't understand your last sentence above. What is suggesting that the rest of the argument will prove the impossibility of non-locality? I thought the argument proved the inevitability of non-locality!
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality? I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

You've misunderstood something. The C's here involve averaging/integrating over λ. They are in no sense conditional/dependent on λ. See the equation just above where CHSH gets mentioned, which defines the C's.
If the C's are obtained by integrating over a certain probability distribution λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ, and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ which must be the same for each term. Do you disagree with this?

No. You are confusing the probability P_{\alpha_1,\alpha_2}(\cdot|\lambda) with P(\lambda). You first average the product A_1 A_2 with respect to P_{\alpha_1,\alpha_2}(\cdot|\lambda) to get E_{\alpha_1,\alpha_2}(A_1 A_2 | \lambda). Then you average this over the possible λs using P(λ).

Maybe you missed the "no conspiracies" assumption, i.e., that P(λ) can't depend on \alpha_1 or \alpha_2.
I don't think you are getting my point so let me try again using your Proof just above equation (5). Let us focus on what you are doing within the integral first. You start with (simplifying notation)

E(AB|λ) = E(A|λ)E(B|λ), which follows from your equation (4). Within the integral, you start with 4 terms based on this, presumably with something like:

\big|E_{\mathbf a}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)-E_{\mathbf a}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|\,+\,\big|E_{\mathbf a'}(A_1|\lambda)E_{\mathbf b}(A_2|\lambda)+E_{\mathbf a'}(A_1|\lambda)E_{\mathbf c}(A_2|\lambda)\big|

You then proceed to factor out the terms as follows:

\big|E_{\mathbf a}(A_1|\lambda)\big|\,\big|E_{\mathbf b}(A_2|\lambda)-E_{\mathbf c}(A_2|\lambda)\big|\,+\,\big|E_{\mathbf a'}(A_1|\lambda)\big|\,\big|E_{\mathbf b}(A_2|\lambda)+E_{\mathbf c}(A_2|\lambda)\big|

Remember, we are still dealing with what is within the integral. It is therefore clear that according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment. In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common, the E(a′,b) and E(a′,c) experiments must have the Ea′ term in common, the E(a,b) and E(a′,b) experiments must have the Eb term in common, and the E(a,c) and E(a′,c) experiments must have the Ec term in common. Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei, which you have combined to form E(x,y)-type terms using your factorizability condition (equation 4). If you now consider the integral, you now have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

If the above condition does not hold, your proof fails. This is evidenced by the fact that you cannot complete your proof without the factorization which you did. Another way of looking at it is to say that all of the paired products within the integral depend on the same λ. The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above, which allows you to factor terms out.
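For what it's worth, the pointwise bound that this factorization delivers inside the integral is elementary and easy to probe numerically. A hedged sketch (the variable names are stand-ins for the one-sided expectations at a fixed λ, each of modulus at most 1 because the outcomes are ±1):

```python
import numpy as np

rng = np.random.default_rng(1)
# x, xp, y, z play the roles of E_a(A1|lam), E_a'(A1|lam),
# E_b(A2|lam), E_c(A2|lam) at one fixed lambda; each lies in [-1, 1].
x, xp, y, z = rng.uniform(-1.0, 1.0, size=(4, 100_000))
integrand = np.abs(x) * np.abs(y - z) + np.abs(xp) * np.abs(y + z)
# since |y - z| + |y + z| = 2*max(|y|, |z|) <= 2, the integrand never exceeds 2
print(integrand.max())
```

Averaging a quantity that never exceeds 2 over any distribution P(λ) then yields the CHSH bound.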

So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run, you collect 2 lists; for 4 runs you collect 8 lists. You then calculate averages for each pair (cf. integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates. Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't understand. The QM calculation is well-known and not controversial. You really want me to take the time to explain that? Look in any book. But I have the sense you know how the calculation goes and you're trying to get at something.
Ok, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously in the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

No, the experiments don't measure the LHS of what you had written above. What they can measure is the C's as we define them -- i.e., involving the averaging over λ.
Do you know of any experiment in which the 8 lists of numbers could be reduced to 4, as implied by your proof?
Yes, for sure, if P(λ) is different for the 4 different (types of) runs, then you can violate the inequality (without any nonlocality!). The thing we call the "no conspiracies" assumption precludes this, however. It is precisely the assumption that the distribution of λ's is independent of the alpha's.

Your "no-conspiracy" assumption boils down to: "the exact same series of λs applies to each run of the experiment".

As I hope you see now, all that is required for your "no-conspiracy" assumption to fail is for the actual distribution of λs to be different from one run to another, which is not unreasonable. I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different. But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run. My question to you therefore was whether you knew of any experiment in which the experimenters made sure the exact same series of λs was realized for each run, in order to be able to use the "no-conspiracy" assumption. Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Note: if the same series of λs applies for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.
 
  • #66
Demystifier said:
So, how should we call articles concerned with truth, but not containing new results?
Not sure I understand. If an article is concerned with truth, it should say something new about the argumentation, perspective, whatever. If it says nothing new, then how is it concerned with truth? And if it says something new, then it is a research article.

EDIT: I just thought that there can be a new way to explain something (in the sense of teaching). In that case I am not sure about the answer.
 
  • #67
ttn said:
Actually I have a serious question. What, exactly, do you think I define differently than others? You really think it's disagreement over the definition of some term that explains our difference of opinion? What term??

I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality, I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is inference, then realism is not assumed and you are correct.

My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails. So, for example, if I have a time symmetric mechanism (local in that c is respected, but "quantum non-local" and not Bell local), it will fail the assumption of realism (since there are not definite values except where actually measured). MWI is exactly the same in this respect.

In other words, the existence of an explicitly contextual model invalidates the inference of realism. That is why it must be assumed. Anyway, you asked where the difference of opinion is, and this is it.
 
  • #68
ttn said:
OK then, I take it back. It's not a review article. It's an encyclopedia entry. Am I allowed to be concerned with truth now?
I guess no. Well, for example, Wikipedia has a very strict policy on neutrality - Wikipedia:Neutral point of view
And Scholarpedia:Aims and policy says:
"Scholarpedia does not publish "research" or "position" papers, but rather "living reviews" ..."
But of course it might be that Scholarpedia has a more relaxed attitude toward neutrality because they have other priorities.

ttn said:
Well, of course the details depend on exactly what the entangled state is, but for the states standardly used for EPR-Bell type experiments, I would accept that as a rough description. But what's the point? Surely there's no controversy about what the predictions of QM are??
Then certainly "perfect correlations" are not convincingly confirmed by the experiment. Only the other one i.e. "sinusoidal relationship" prediction.
 
  • #69
billschnieder said:
If lambda can be anything which influences the outcomes, then why do you think the proof restricts it to locality?

Quoting Bell: "It is notable that in this argument nothing is said about the locality, or even localizability, of the variable λ."


I can use the same argument to deny non-locality by simply redefining lambda the way I did. Why would this be wrong?

I guess I missed the argument. How does assuming λ comes from Venus result in denying non-locality??


If the C's are obtained by integrating over a certain probability distribution λ, then it means the C's are defined ONLY for the distribution of λ, let us call it ρ(λ), over which they were obtained. I included λ, and a conditioning bar just to reflect the fact that the C's are defined over a given distribution of λ which must be the same for each term. Do you disagree with this?

At best, it's bad notation. If you want to give them a subscript or something, to make explicit that they are defined for a particular assumed ρ(λ), then give them the subscript ρ, not λ. The whole idea here is that (in general) there is a whole spectrum of possible values of λ, with some distribution ρ, that are produced when the experimenter "does the same thing at the particle source". There is no control over, and no knowledge of, the specific value of λ for a given particle pair.


It is therefore clear that according to your proof, the Ea term from the E(a,b) experiment is exactly the same Ea term from the E(a,c) experiment.

Yes, correct.


In other words, the E(a,b) and E(a,c) experiments must have the Ea term in common, the E(a′,b) and E(a′,c) experiments must have the Ea′ term in common, the E(a,b) and E(a′,b) experiments must have the Eb term in common, and the E(a,c) and E(a′,c) experiments must have the Ec term in common.

Correct.


Note the cyclicity in the relationships between the terms. In fact, according to your proof, you really only have 4 individual terms of the type Ei, which you have combined to form E(x,y)-type terms using your factorizability condition (equation 4).

Correct.



If you now consider the integral, you now have lists of values, so to speak, which must be identical from term to term and reducible to only 4 lists.

Just to make sure, by the "lists" you mean the functions (e.g.) E_a(A_1|\lambda)?


Another way of looking at it is to say that all of the paired products within the integral depend on the same λ.

No, they all assume the same *distribution* over the lambdas.


The proof depends on the fact that all the terms within the integral are defined over the same λ and contain the cyclicity described above which allows you to factor terms out.

I don't even know what that means. The things you are talking about are *functions* of λ. What does it even mean to say they "assume the same λ"? No one particular value of λ is being assumed anywhere. Suppose I have two functions of x: f(x) and g(x). Now I integrate their product from x=0 to x=1. Have I "assumed the same value of x"? I don't even know what that means. What you're doing is adding up, for all of the values x' of x, the product f(x')g(x'). No particular value of x is given any special treatment. Same thing here.
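The f·g analogy above can be made concrete with a tiny Riemann sum (nothing here is from the article; it just shows that integrating a product of functions treats every value of x alike):

```python
# Approximate the integral of f(x)*g(x) over [0, 1] by a Riemann sum.
# Every sample point x enters on an equal footing; no single x is "assumed".
n = 1_000
f = lambda x: x
g = lambda x: 1.0 - x
riemann = sum(f(i / n) * g(i / n) for i in range(n)) / n
print(riemann)  # close to the exact value 1/6 ≈ 0.1667
```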


So what does this mean for the experiment? In a typical experiment we collect lists of numbers (±1). For each run, you collect 2 lists; for 4 runs you collect 8 lists. You then calculate averages for each pair (cf. integrating) to obtain a value for the corresponding E(x,y) term. However, according to your proof, and the above analysis, those 8 lists MUST be redundant in the sense that 4 of them must be duplicates.

Huh? Nothing at all implies that. The lists here are lists of outcome pairs, (A1, A2). The experimenters will take the list for a given "run" (i.e., for a given setting pair) and compute the average value of the product A1*A2. That's how the experimenters compute the correlation functions that the inequality constrains. You are somehow confusing what the experimentalists do, with what is going on in the derivation of the inequality.
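A hedged sketch of what the experimenters actually compute for one setting pair (the outcome lists are invented placeholders): the correlation is simply the average of the outcome products, and λ appears nowhere.

```python
# Hypothetical outcome lists for one "run" (one fixed setting pair (a, b)).
A1 = [+1, -1, -1, +1, -1]  # side-1 outcomes
A2 = [-1, -1, +1, +1, +1]  # side-2 outcomes, same pairs in the same order
C_ab = sum(x * y for x, y in zip(A1, A2)) / len(A1)
print(C_ab)  # -0.2
```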



Unless experimenters make sure their 8 lists are sorted and reduced to 4, it is not mathematically correct to think the terms they are calculating will be similar to Bell's or the CHSH terms. Do you disagree?

I don't even understand what you're saying. There is certainly no sense in which the experimenters' lists (of A1, A2 values) will look like, or even be comparable to, the "lists" I thought you had in mind above (namely, the one-sided expectation functions).



Ok, let me present it differently. When you calculate the 4 CHSH terms from QM and use them simultaneously in the LHS of the inequality, are you assuming that each term originated from a different particle pair, or that they all originate from the same particle pair?

The question doesn't arise. You are just calculating 4 different things -- the predictions of QM for a certain correlation in a certain experiment -- and then adding them together in a certain way. No assumption is made, or needed, or even meaningful, about each of the 4 calculations somehow being based on the same particle pair. (I say it's not even meaningful because what you're calculating is an expectation value -- not the kind of thing you could even measure with only a single pair.)


Do you know of any experiment in which the 8 lists of numbers could be reduced to 4, as implied by your proof?

?


Your "no-conspiracy" assumption boils down to: "the exact same series of λs applies to each run of the experiment"

I don't know what you mean by "series of λs". What the assumption boils down to is: the distribution of λs (i.e., the fraction of the time that each possible value of λ is realized) is the same for the billion runs where the particles are measured along (a,b), the billion runs where the particles are measured along (a,c), etc. That is, basically, it is assumed that the settings of the instruments do not influence or even correlate with the state of the particle pairs emitted by the source.

Note that in the real experiments, the experimenters go to great lengths to try to have the instrument settings (for each pair) be chosen "randomly", i.e., by some physical process that is (as far as any sane person could think) totally unrelated to what's going on at the particle source. It really is just like a randomized drug trial, where you flip a coin to decide who will get the drug and who will get the placebo. You have to assume that the outcome of the coin flip for a given person is uninfluenced by and uncorrelated with the person's state of health.
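To see concretely why the assumption matters, here is a toy simulation (entirely invented, just to illustrate the logic): if the distribution of λ is allowed to depend on the settings, a perfectly local model violates the inequality, which is exactly what "no conspiracies" rules out.

```python
import random

random.seed(0)

# Conspiratorial toy model: lambda is just a predetermined outcome pair,
# but its distribution DEPENDS on the settings that will be used -- the
# very thing "no conspiracies" forbids. Outcomes are still produced locally.
def source(alpha1, alpha2):
    if (alpha1, alpha2) == ("a", "c"):
        return random.choice([(+1, -1), (-1, +1)])  # anti-correlated pairs
    return random.choice([(+1, +1), (-1, -1)])      # correlated pairs

def C(alpha1, alpha2, runs=20_000):
    total = sum(a1 * a2 for a1, a2 in
                (source(alpha1, alpha2) for _ in range(runs)))
    return total / runs

S = abs(C("a", "b") - C("a", "c")) + abs(C("ap", "b") + C("ap", "c"))
print(S)  # 4.0: the bound of 2 fails once P(lambda) tracks the settings
```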


As I hope you see now, all that is required for your "no-conspiracy" assumption to fail, is for the actual distribution of λs to be different from one run to another, which is not unreasonable.

Yes, that's right. That's indeed exactly what would make it fail. We disagree about how unreasonable it is to deny this assumption, though. I tend to think, for example, that if a randomized drug trial shows that X cures cancer, you'd have to be pretty unreasonable to refuse to take the drug yourself (after you get diagnosed with cancer) on the grounds that the trial *assumed* that the distribution of initial healthiness for the drug and placebo groups were the same. This is an assumption that gets made (usually tacitly) whenever *anything* is learned/inferred from a scientific experiment. So to deny it is tantamount to denying the whole enterprise of trying to learn about nature through experiment.

I think your "no-conspiracy" assumption is misleading because it gives the impression that there has to be some kind of conspiracy in order for the λs to be different.

I think it's accurately named, for the same reason.


But given that the experimenters have no clue about the exact nature of λ, or how many distinct λ values exist, it is reasonable to expect the distribution of λ to be different from run to run.

I disagree. It is normal in science to be ignorant of all the fine details that determine the outcomes. Think again of the drug trial. Would you say that, because the doctors don't know exactly what properties determine whether somebody dies of cancer or survives, therefore it is reasonable to assume that the group of people who got the drug (because some coin landed heads) is substantially different in terms of those properties than the group who got the placebo (because the coin landed tails)?


My question to you therefore was whether you knew of any experiment in which the experimenters made sure the exact same series of λs was realized for each run, in order to be able to use the "no-conspiracy" assumption.

Uh, again, the λs aren't something the experimenters know about. Indeed, nobody even knows for sure what they are -- different quantum theories say different things! That's what makes the theorem general/interesting: you don't have to say/know what they are exactly to prove that, whatever they are, if locality and no conspiracies are satisfied, you will get statistics that respect the inequality.


Just because you chose the name "no-conspiracy" to describe the condition does not mean its violation implies what is commonly known as "conspiracy". It is something that happens all the time in non-stationary processes. It would have been better to call it a "stationarity" assumption.

Of course I agree that the name doesn't make it so. The truth though is that we chose that name because we think it accurately reflects what the assumption actually amounts to. It's clear you disagree. Incidentally, did you read the whole article? There is some further discussion of this assumption elsewhere, so maybe that will help.


Note: if the same series of λs applies for each run, then the 8 lists of numbers MUST be reducible to 4. Do you agree? We can easily verify this from the experimental data available.

No, I don't agree. What you're saying here doesn't make sense. You're confusing the A's that the experimentalists measure, with the λs that only theorists care about.
 
  • #70
DrChinese said:
I told you that Perfect Correlations are really Simultaneous Perfect Correlations. Each Perfect Correlation defines an EPR element of reality, I hope that is clear. If they are *simultaneously* real, which I say is an assumption but you define as an inference, then you have realism. If it is an assumption, then QED. If it is inference, then realism is not assumed and you are correct.

But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was. Burying your head in the sand won't make the argument go away!


My point is that if in fact spin is contextual, then there cannot be realism. Ergo, the realism inference fails.

The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.
 
  • #71
ttn said:
But we don't disagree about the definitions of "assumption" or "inference". I've explained how the argument goes several times, so I don't see how you can suggest that my claim (that it's an inference) is somehow a matter of definition. I inferred it, right out in public in front of you. If I made a mistake in that inference, then tell me what the mistake was.

I told you that your inference is wrong, and that is because there are explicit models that are non-realistic but local and they feature perfect correlations. For example:

http://arxiv.org/abs/0903.2642

Relational Blockworld: Towards a Discrete Graph Theoretic Foundation of Quantum Mechanics
W.M. Stuckey, Timothy McDevitt and Michael Silberstein

"BCTS [backwards-causation time-symmetric approaches] provides for a local account of entanglement (one without space-like influences) that not only keeps RoS [relativity of simultaneity], but in some cases relies on it by employing its blockworld consequence—the reality of all events past, present and future including the outcomes of quantum experiments (Peterson & Silberstein, 2009; Silberstein et al., 2007)."

So obviously, by our definitions, locality+PC does not imply realism, as it does by yours. You must assume it, and that assumption is open to challenge. Again, I am simply explaining a position that should be clear at this point. A key point is including "simultaneous" with the perfect correlations: realism, by definition, assumes that they are simultaneously real elements. For if they are not simultaneously real, you have equated realism with contextuality, and that is not acceptable in the spirit of EPR.
 
Last edited:
  • #72
DrChinese said:
I told you that your inference is wrong, and that is because there are explicit models that are non-realistic but local and they feature perfect correlations.
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.
 
  • #73
ttn said:
The non-contextuality of spin *follows* from the EPR argument, i.e., that too is an *inference*. Maybe you're right at the end of the day that this is false. But if so, that doesn't show the *argument* was invalid -- it shows that one of the premises must have been wrong! This is elementary logic. I say "A --> B". You say, "ah, but B is false, therefore A doesn't --> B". That's not valid reasoning.

It would if we also agreed A were true. :smile:
 
  • #74
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

MWI is such.

But no, I completely disagree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any reason other than convention. So by "local", I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.
 
  • #75
lugita15 said:
OK, but is his inference right or wrong for models in which the future can't affect the past? I would consider backwards causation, even if it can be considered "local" on a technicality, to not really be what we mean in spirit by the word local. We obviously mean that causal influences can only propagate into the future light cone.

Exactly.

Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

In any case, to recap, I think it is very silly to define "locality" in a way that embraces influences *from* the future light cone -- not only for the reason lugita15 gave above, but for the reason I mentioned earlier: with this definition, two "local" influences (from A to B and then from B to C) make a "nonlocal" influence (if A and C are spacelike separated). So the whole idea is actually quite incoherent: it doesn't rule *anything* out as "definitely in violation of locality". You can always just say "oh, that causal influence from A to C wasn't direct, it went through a B in the overlapping past or future light cones, so actually everything is local".
 
  • #76
DrChinese said:
It would if we also agreed A were true. :smile:

Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B is false" -- whether "A" is true or not.
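
The logical point at issue can be checked mechanically. A minimal sketch (an editorial aside, not part of the original exchange) enumerating all truth assignments for the two argument forms under discussion:

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q."""
    return (not p) or q

# Valid form (modus tollens): from (A -> B) and (not B), infer (not A).
modus_tollens_holds = all(
    not A
    for A, B in product([True, False], repeat=2)
    if implies(A, B) and not B
)

# Faulty form: from (not B) alone, infer that (A -> B) is false.
faulty_holds = all(
    not implies(A, B)
    for A, B in product([True, False], repeat=2)
    if not B
)

print(modus_tollens_holds)  # True: denying B refutes A, not the conditional
print(faulty_holds)         # False: A=False, B=False still makes A -> B true
```

Denying the conclusion of a valid argument refutes a premise, never the argument itself.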
 
  • #77
DrChinese said:
MWI is such.

But no, I completely disagree with you anyway. Clearly, relativistic equations don't need to be limited to a single time direction for any reason other than convention. So by "local", I simply mean that c is respected and relativity describes the spacetime metric. This is a pretty important point.

On the other hand, obviously, Bohmian type models are "grossly" non-local. That's a big gap, and one which is fundamental.

So I resolve these issues by saying we live in a quantum non-local world because entanglement has the appearance of non-locality. But that could simply be an artifact of living in a local world with time symmetry, which is a lot different than a non-local world with a causal direction.

OK, so then you are in full agreement with Bell's conclusion: the world is nonlocal. (Where "nonlocal" here means that Bell's notion of locality is violated.)
 
  • #78
Let me make this clear: Bohr did not think EPR's perfect correlations imply realism. Otherwise EPR would have been right, he would have been wrong about the completeness of QM, and he would have conceded defeat.

Further, Bohr didn't think locality+perfect correlations->realism for the same reason. That too was part of EPR, and where does Bohr mention this subsequently?

Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

So again, my answer is that Travis' definitions clearly do not line up with any movement, past or present, other than Bohmians. I am not asking anyone to change their minds, but I hope my points are obvious at this juncture.
 
  • #79
ttn said:
Maybe after all Dr C and I do disagree about how to define something: "locality". I thought I explained before how I was using this term (and in particular why retro-causal models don't count as "local") and I don't recall him disagreeing, so I had forgotten about this.

Ah, but I did.
 
  • #80
DrChinese said:
Let me make this clear: Bohr did not think EPR's perfect correlations implies realism. Otherwise EPR was right and he was wrong about the completeness of QM, and he would have conceded defeat.

Bohr was a cotton-headed ninny-muggins.



Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality. I mean, you don't need Bell at all to come to this conclusion if Travis is correct.

Huh? I really don't understand why this is so hard. The EPR argument was an argument that

locality + perfect correlations --> definite values for things that QM says can't have definite values

Einstein believed in locality, and he, like everyone, accepted the "perfect correlations" as a probably-correct prediction of QM. Now why should he have "renounced locality"?
 
  • #81
ttn said:
Uh, the A there was locality.

But whatever, that still is totally irrelevant. If "A --> B", and "B" is false, you can't conclude that "A --> B is false" -- whether "A" is true or not.
If A is true and B is false, then you can most certainly conclude that A implies B is false.
 
  • #82
DrChinese said:
Ah, but I did.

Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)
 
  • #83
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Anyway, I very much like the way he mathematically codifies the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations".

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.
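
That generality can be illustrated numerically. The sketch below (an editorial aside; the toy local model and the angle choices are illustrative assumptions, not from the article) compares the CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b') for a local deterministic hidden-variable model against the quantum singlet prediction E(a,b) = -cos(a-b):

```python
import math
import random

def chsh(E, a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

def E_local(a, b, n=200_000):
    """Correlation in a toy local deterministic model: each outcome is
    fixed by its local setting and a shared hidden variable lam alone."""
    rng = random.Random(0)  # same hidden-variable ensemble for every setting pair
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)        # hidden variable from the source
        A = 1 if math.cos(a - lam) >= 0 else -1  # Alice: depends only on a, lam
        B = -1 if math.cos(b - lam) >= 0 else 1  # Bob: depends only on b, lam
        total += A * B
    return total / n

def E_qm(a, b):
    """Quantum prediction for the spin-1/2 singlet state."""
    return -math.cos(a - b)

# Angles giving maximal quantum violation (illustrative choice).
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

S_local = chsh(E_local, a, a2, b, b2)
S_qm = chsh(E_qm, a, a2, b, b2)
print(f"toy local model: |S| = {abs(S_local):.3f} (bounded by 2, up to sampling noise)")
print(f"quantum singlet: |S| = {abs(S_qm):.3f} (2*sqrt(2) ~ 2.828)")
```

This particular toy model happens to saturate the local bound |S| = 2, but no model of the locally-factorized form can reach the quantum value; that gap is the content of the theorem.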
 
  • #84
lugita15 said:
If A is true and B is false, then you can most certainly conclude that A implies B is false.

Yes, sorry. I was being sloppy. The issue is not really the truth of the conditional "A --> B", but the validity or invalidity of the argument for it. Remember what we're talking about here. There's an argument (the EPR argument, which can be made mathematically rigorous using Bell's definition of locality) that shows that locality + perfect correlations requires deterministic non-contextual hidden variables. The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued. Same for locality.

The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".
 
  • #85
mattt said:
What he meant was that "A--->B" and "no B" does not imply "no (A--->B)". It only implies "no A".

Yes.

Anyway, I like it very much the way he codifies mathematically the premises in the "CHSH-Bell inequality: Bell's Theorem without perfect correlations".

That theorem rules out (if QM is always correct) ANY theory (deterministic or stochastic or whatever) that satisfies "his mathematical setup" + "his necessary condition for locality", and that mathematical setup is THAT general.

Yes, good, I'm glad you appreciate the generality! That is really what's so amazing and profound about Bell's theorem. (Incidentally, don't forget the "no conspiracies" assumption is made as well -- I agree that, at some point, one should stop bothering to mention this each time, since it's part and parcel of science, and so not really on the table in the same way "locality" is. But maybe as long as billschnieder and others are still engaging in the discussion, we should make it explicit!)
 
  • #86
ttn said:
Bohr was a cotton-headed ninny-muggins.

That's pretty good! :biggrin:
 
  • #87
ttn said:
Really? Help me find it. I responded in post #7 of the thread to your comments about retro-causal models. I never saw a response to those comments, and couldn't find one now when I looked again. Help me find it if I missed it. Or maybe you meant that you disagreed, but "privately". =)

Disagree in private, me?

There is a problem distinguishing Bell's locality condition from the broader question of what "locality" means, because the causal/temporal direction was assumed to run one way only. At this point, that cannot be assumed. It is fair to say that your definition is closest to what Bell intended, but I would not say it is closest to the most useful definition. Clearly, the relevant (useful) question is whether c is respected, regardless of the direction of time's arrow.
 
  • #88
DrChinese said:
Finally, were this to be a common perspective, then Einstein himself must have deduced this, and renounced locality.

OK, on re-reading this, it doesn't even make sense to me.

:smile:
 
  • #89
ttn said:
The big picture here is that there is a long history of people saying things like "Bell put the final nail in EPR's coffin" or sometimes "Kochen-Specker put the final nail in EPR's coffin" or whatever. All such statements are based on the failure to appreciate that EPR actually presented an *argument* for the conclusion. Commentators (and I think this applies to Dr C here) typically miss the argument and instead understand EPR as having merely expressed "we like locality and we like hidden variables".

EPR does demonstrate that if QM is complete and locality holds, then reality is contextual (which they consider unreasonable): "This makes the reality of P and Q depend upon the process of measurement carried out on the first system, which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this."

They speculate (but nowhere prove) that a more complete specification of the system is possible. I guess you could also conclude that they say "we like locality and we like hidden variables". :smile: (I think commentator would be a good term.)

The bigger picture after EPR is that local realism and QM could have an uneasy coexistence, with Bohr denying realism and Einstein asserting the incompleteness of QM - both while looking at the same underlying facts. Bell did put the nail in that coffin in the sense that at least one or the other view had to be wrong.
 
  • #90
ttn said:
The point is that having some independent reason to question the existence of deterministic non-contextual hv's (say, the various no-hidden-variable proofs) doesn't give us any grounds whatsoever for denying what EPR argued.

I would agree that when discussing Bell's Theorem, you can do it "mostly" independently of the later no-gos. On the other hand, you should at least mention those no-gos that Bell has spawned, including those which attack realism (such as GHZ).

Of course, to do that you would need to accept realism as part of Bell. The funny thing to me is that you mention in the article how EPR makes an argument for "pre-existing values" if QM is correct and locality holds... which to me IS realism. Then you deny that realism is relevant to Bell, when it is precisely those "pre-existing values" which Bell shows to be impossible.
 
