Mathematical expression of Bell's "local realism"

AI Thread Summary
Bell's Theorem demonstrates that local realistic theories, in which particles possess attributes independent of observation, cannot fully reproduce the predictions of quantum mechanics. The discussion centers on how exactly "local" and "realistic" should be defined in the context of Bell's Theorem, particularly with regard to hypothetical simultaneous values for several measurement settings on one particle. It is argued that the existence of hidden variables does not require deterministic outcomes, and that the probabilities of measurement results must be conditioned on a complete description of the system's state. The conversation also highlights that Bell Locality is a stronger condition than signal locality, since it requires that a measurement outcome depend only on a complete specification of its own past. Ultimately, the implications of Bell's Theorem challenge the compatibility of quantum mechanics with classical notions of locality and realism.
DrChinese
Mathematical expression of Bell's "local realism"

I have started this thread to continue a discussion with NateTG that was starting to get a bit off-topic there. I will repeat the base comment and then reply to NateTG's last comment. I would invite anyone interested to please join in!

Bell's Theorem rules out local realistic theories, as is well known. Nailing down *exactly* how Bell defines "local" and "realistic" - especially what is necessary for a proof of Bell's Theorem - is a bit more complicated. That is the discussion topic.

---------------------

The definition of a "realistic" theory is that particles have observable attributes independent of the act of observation. This is all EPR says; and that is why EPR says QM is incomplete. It is not an assertion of EPR that there are hidden variables that predetermine outcomes. It is that the outcome values themselves exist independent of a measurement.

And Bell follows that thinking completely. So if there are 2 simultaneous values for a single particle (corresponding to 2 different measurement settings), then there are 3 as well.

a, b and c are different settings to measure observables on a single particle. But such a simultaneous measurement is not possible without disturbing the system under view. So if you measure the particle and a "clone", then you might be able to get 2 values simultaneously.

In this example, we are testing the hypothesis - of Bell - that a single particle has 3 simultaneous values. I think your characterization is OK but let me repeat the experimental questions.

I. Experimental test of Bell Locality so it does NOT need to be assumed a priori:

P(Alice+ (at polarizer setting=a), Bob (any setting b)) =
P(Alice+ (at polarizer setting=a), Bob (any setting c))

and similarly (I left this out in the earlier post I think)

P(Alice- (at polarizer setting=a), Bob (any setting b)) =
P(Alice- (at polarizer setting=a), Bob (any setting c))

We are looking at how varying the setting at Bob affects things over at Alice, but we are not concerned with Bob's outcome in this statement, because this scenario maps - word for word - onto Bell's statement of his locality assumption: that the result at Alice is independent of the setting at Bob.

The interesting thing: It just doesn't matter whether there is signal locality or not; if the particles are space-like separated or not; or if there are slower than light influences. None of these can matter in our experiment II IF the experimental result above is first proven. Therefore, there is no need to assume Bell Locality or locality of any kind. In fact, you are free to assume the opposite: that there are such effects because they just won't matter.

II. Experimental test of Bell's Inequality

This would test correlations between Alice and Bob once we have ruled out - by experiment - that the outcome at Alice is affected by the setting at Bob. So now we can see that the correlations are too strong to obey Bell's Inequality - because there is NO SIMULTANEOUS a, b and c to begin with.
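To make test I concrete, here is a minimal Python sketch (an illustration only, assuming the usual quantum prediction P(+,+|a,b) = 1/2 cos^2(a-b) for a polarization-entangled pair): Alice's singles rate at a fixed setting does not change as Bob's setting is varied, even though the coincidence correlation - the quantity that matters for test II - does.

import numpy as np

def joint_probs(a_deg, b_deg):
    # Assumed QM prediction for a polarization-entangled pair (illustrative only):
    # P(+,+) = P(-,-) = 1/2 cos^2(a-b), P(+,-) = P(-,+) = 1/2 sin^2(a-b)
    d = np.radians(a_deg - b_deg)
    pp = mm = 0.5 * np.cos(d) ** 2
    pm = mp = 0.5 * np.sin(d) ** 2
    return pp, pm, mp, mm

a = 0.0                                  # Alice's polarizer setting, held fixed
for b in (22.5, 45.0, 67.5):             # Bob's setting is varied
    pp, pm, mp, mm = joint_probs(a, b)
    alice_plus = pp + pm                 # Alice's marginal, ignoring Bob's outcome (test I)
    match = pp + mm                      # coincidence correlation (relevant to test II)
    print(f"Bob at {b:4.1f} deg: P(Alice+) = {alice_plus:.3f}, P(outcomes match) = {match:.3f}")

The marginal stays at 0.500 for every choice of Bob's setting; only the correlation moves.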
 
NateTG said:
First off, the assumption that there is some suitable hidden state \lambda is (more or less) equivalent to assuming that the process is realistic and stochastic. There's some i-dotting and t-crossing missing from this, but:

If all of the probabilities exist and are well defined, then we can clearly describe the behavior of the system by having \lambda be a particular set of measurement results, and \lambda can be assigned with the appropriate probability distribution - so we can safely assume that there is indeed some hidden state \lambda, even if it only exists in our minds.

Conversely, let's say that the system's state can be completely described by some hidden state \lambda which is selected from a set \Lambda. Then the system is clearly stochastic.

In a sense, \lambda is just a list of measurement results, and \Lambda is the set of possible measurement results, so summing over the possible results is the same as summing over \lambda.

Well, this is pretty much how I see it as well.

If a hidden variable is a hypothetical result/outcome/measurement - i.e. the output side of some black box - then I say that is an "element of reality" in EPR's terms. Since there is a black box, we are silent on the point of whether there is an internal state that exactly maps to the outcome and whether we can see inside the black box itself to see its inner workings - and then learn if it is deterministic or not. So I don't believe the elements of reality need to extend that far, just to the point that we agree that there were "answers" (the results) to the "questions" (the set of possible measurement settings, i.e. polarizer angle settings to be specific). You might assume it is stochastic, but it doesn't need to be to make Bell's Theorem important.

EPR says that these elements of reality (P and Q in their example, polarization in ours) exist independently of a measurement. EPR also insists that they not depend on a measurement setting on a distant particle: "This makes the reality of P and Q depend on the process of measurement carried out on the first system which does not disturb the second system in any way. No reasonable definition of reality could be expected to permit this." This is the locality assumption - and Bell essentially defines locality in an identical manner.

What I don't agree with is those who insist that the hidden variables are the inputs to a deterministic equation which generates the resultset. That is an extra and unnecessary step that can't be taken until we know if the elements of reality - which are observables - exist simultaneously and independent of observation.
 
As a note, please quote what you're replying to, or at least link to it, unless the two are in direct sequence. Context can be quite handy.
This is a digression from posts 15+ of the thread:
https://www.physicsforums.com/showthread.php?t=101863&page=2

We've got a pretty clear notion of realistic:
The definition of a "realistic" theory is that particles have observable attributes independent of the act of observation.
I'm going to assume that it's actually 'all observable attributes exist independent of observation'. So let's say that, in a realistic world, for every particle p there is a hidden state \vec{\lambda}_p that is sufficient to provide the result of any conceivable measurement on that particle. One way to describe this hidden state would be to simply list all of these observable quantities.


Note: For the remainder of this post, when I refer to a particular measurement, e.g. m_1, I mean not just the act of measurement but also the past of that measurement, including any 'hidden' state.

Now, we want to look at Signal Locality, and Bell Locality.
Signal Locality is also a fairly clear notion in some sense:

A physical model is signal local if it is impossible to have faster than light communication in it.

That means that if we have two measurements m_1 and m_2, neither in the past of the other, with possible result events R_1 and R_2 respectively we have:
\forall r_2 \in R_2: \; p(r_2) = \int_{R_1} p(r_1) \, p(r_2 | r_1) \, dr_1
and
\forall r_1 \in R_1: \; p(r_1) = \int_{R_2} p(r_2) \, p(r_1 | r_2) \, dr_2

And Bell Locality:
A physical model is Bell local if the probability of a particular measurement result can only be affected by things in that measurement's past, so, if the system is Bell Local, we also have:
\forall r_1 \in R_1, \forall r_2 \in R_2: \; p(r_2) = p(r_2 | r_1)
and
\forall r_1 \in R_1, \forall r_2 \in R_2: \; p(r_1) = p(r_1 | r_2)

It should be clear that Bell Locality (at least as I have defined it) is a stronger condition than signal locality since it's possible to simply integrate over R_1 or R_2.
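To spell out the "integrate over R_1" step with the definitions above (a small added check, using nothing beyond what is already assumed): if the Bell Locality condition p(r_2 | r_1) = p(r_2) holds for every r_1, then

\int_{R_1} p(r_1) \, p(r_2 | r_1) \, dr_1 = p(r_2) \int_{R_1} p(r_1) \, dr_1 = p(r_2)

so the signal-locality equation is satisfied automatically, while a signal-local model need not satisfy the stronger condition.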
 
DrChinese said:
What I don't agree with is those who insist that the hidden variables are the inputs to a deterministic equation which generates the resultset. That is an extra and unnecessary step that can't be taken until we know if the elements of reality - which are observables - exist simultaneously and independent of observation.
Let's say that we have some hidden state \lambda \in \Lambda and some stochastic, but not necessarily deterministic measurement m which produces a result in the range R.
Then we can construct a probability distribution on \Lambda' = \Lambda \times R (where \times is the cartesian product) so that \lambda' will deterministically describe the result of measurement m.
So, provided the process is stochastic, there will be some way to describe the hidden state so that the measurement results are deterministic based on it.
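A toy Python sketch of that construction (the names and probabilities are made up for illustration): the augmented state \lambda' = (\lambda, r) carries the pre-sampled result, so the measurement becomes a deterministic lookup.

import random

LAMBDA = ["L1", "L2"]                       # hypothetical hidden states
R = ["+", "-"]                              # possible results of measurement m
p_result = {"L1": {"+": 0.7, "-": 0.3},     # assumed outcome probabilities given lambda
            "L2": {"+": 0.2, "-": 0.8}}

def sample_lambda_prime():
    # Draw an augmented hidden state lambda' = (lambda, r) from Lambda x R,
    # with r distributed according to the stochastic rule for that lambda.
    lam = random.choice(LAMBDA)
    r = random.choices(R, weights=[p_result[lam][x] for x in R])[0]
    return (lam, r)

def measure(lambda_prime):
    # Given lambda', the result of m is fixed: just read off the stored r.
    return lambda_prime[1]

lp = sample_lambda_prime()
print(lp, "->", measure(lp))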
 
NateTG said:
And Bell Locality:
A physical model is Bell local if the probability of a particular measurement result can only be affected by things in that measurement's past, so, if the system is Bell Local, we also have:
\forall r_1 \in R_1 \forall r_2 \in R_2 p(r_2)=p(r_2 | r_1)
and
\forall r_1 \in R_1 \forall r_2 \in R_2 p(r_1)=p(r_1 | r_2)
It should be clear that Bell Locality (at least as I have defined it) is a stronger condition than signal locality since it's possible to simply integrate over R_1 or R_2.

This isn't Bell Locality. You've forgotten one absolutely crucial thing: all the probabilities here need to be conditionalized on a *complete* description of the state of the particles in the past. If you leave that out, then you have to say things like this violate Bell Locality: break an arrow in half and randomly put the head and tail ends into two identical boxes, then carry the boxes to distant locations, then open one of them and see whether it has the tail or the head. Before you look, by assumption, the probability is 50/50 either way. But the conditional probability that you'll find the tail is definitely not 50% given the outcome of the similar experiment on the distant other box -- it's either 100% or 0%, depending on whether the distant box contains the head or tail. So, according to your definition above, this situation would involve a violation of Bell Locality! But Bell wasn't nearly so stupid as to think this kind of thing was problematic from the point of view of relativity. That's why he stressed that the Locality condition involved probabilities that were conditional on a complete specification of the state of the system prior to measurement. Then, in the arrow example, the probability is either 100% or 0% (for finding the tail, say) *regardless* of whether or not you *also* conditionalize on the outcome of the other distant experiment. That's the whole point. Since you've already calculated the probability conditional on all the relevant info in the past light cone of the measurement in question, adding *extra* information pertaining to space-like separated measurements doesn't change things -- that extra info is either *irrelevant* or *redundant*, and in neither case does the probability in question change when you conditionalize on it.
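A minimal numerical sketch of the arrow example (illustrative only): the naive 50/50 probability does change when you condition on the distant outcome, but the probability conditioned on the complete past state \lambda (which half went into which box) is already 0 or 1, and the distant outcome adds nothing.

import random

N = 100_000
tails_1 = 0                    # box 1 contains the tail
tails_1_given_head_2 = 0       # ... given box 2 was found to contain the head
heads_2 = 0

for _ in range(N):
    lam = random.choice(["tail_in_1", "head_in_1"])   # complete state, fixed in the past
    box1 = "tail" if lam == "tail_in_1" else "head"
    box2 = "head" if lam == "tail_in_1" else "tail"
    tails_1 += (box1 == "tail")
    if box2 == "head":
        heads_2 += 1
        tails_1_given_head_2 += (box1 == "tail")

print("P(tail in box 1)                 ~", tails_1 / N)                     # about 0.5
print("P(tail in box 1 | head in box 2) ~", tails_1_given_head_2 / heads_2)  # about 1.0
# Conditioned on lam the probability is already 0 or 1, and further conditioning
# on the distant outcome leaves it unchanged - the extra information is redundant.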

Of course, if the probability *does* change (when you add conditionalization on the distant outcome), even though you've already conditionalized on the complete state in the past of your measurement, that seems to imply some kind of superluminal causation. The probability for what happens *here* doesn't just depend on stuff in the past light cone of "here", but depends also on space-like separated stuff. And normally in science when we find such stochastic dependencies we think they signal a causal dependence. And such causal dependence from outside the past light cone is precisely what special relativity is supposed to have prohibited. That's why people think that there is an inconsistency between relativity and non-Bell-Local theories. And that's why this whole Bell's theorem thing is so damn interesting -- because there is *no* Bell Local theory which can explain what is observed in the experiments.
 
NateTG said:
Now, we want to look at Signal Locality, and Bell Locality.

1. Signal Locality is also a fairily clear notion in some sense:

A physical model is signal local if it is impossible to have faster than light communication in it.

That means that if we have two measurements m_1 and m_2, neither in the past of the other, with possible result events

R_1 and R_2 respectively we have:
\forall r_2 \in R_2 p(r_2)=\int_{R_1} p(r_1) p(r_2 | r_1) dr_1[/itex]<br /> and<br /> \forall r_1 \in R_1 p(r_1)=\int_{R_2} p(r_2) p(r_1 | r_2) dr_2[/itex]&lt;br /&gt; &lt;br /&gt; And Bell Locality:&lt;br /&gt; &lt;br /&gt; A physical model is Bell local if the probability of a particular measurement result can only be affected by things in that measurement&amp;#039;s past, so, if the system is Bell Local, we also have:&lt;br /&gt; &lt;br /&gt; \forall r_1 \in R_1 \forall r_2 \in R_2 p(r_2)=p(r_2 | r_1)&lt;br /&gt; &lt;br /&gt; and&lt;br /&gt; &lt;br /&gt; \forall r_1 \in R_1 \forall r_2 \in R_2 p(r_1)=p(r_1 | r_2)&lt;br /&gt; &lt;br /&gt; 3. It should be clear that Bell Locality (at least as I have defined it) is a stronger condition than signal locality since it&amp;#039;s possible to simply integrate over R_1 or R_2.
&lt;br /&gt; &lt;br /&gt; 1. This is good.&lt;br /&gt; &lt;br /&gt; 2. This may be a good summary of traditional Bell Locality, but I submit it is absolutely NOT what either EPR defined it as. And importantly, it is NOT necessary and sufficient for Bell&amp;#039;s Theorem to work (although it does work). The reason this is relevant is discussed below. I would say that the mathematical formulation should exactly correspond to Bell&amp;#039;s words:&lt;br /&gt; &lt;br /&gt; &amp;quot;The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet of particle 1, nor A on b.&amp;quot; &lt;br /&gt; &lt;br /&gt; Note that the location of the particles and the speed of any hypothetical influence is not even mentioned! That is because it is simply NOT needed in any way to get to our conclusion - to prove Bell&amp;#039;s Theorem!&lt;br /&gt; &lt;br /&gt; 3. This is where I disagree slightly. I say that we want the least assumptions possible, and therefore we want the weakest assumptions possible. So we need signal locality to be stronger than (or equal to) Bell Locality. That makes Bell&amp;#039;s Theorem as strong as possible. So by weakening Bell Locality as I have in 2. that result is accomplished. Note that we don&amp;#039;t need to test in a wide range of scenarios because we only need to cover enough ground with Locality to allow us to continue on with the rest of our proof. In other words, don&amp;#039;t think of Bell as relating to locality at all. Think of it as relating to the realism requirement, which is entirely what EPR was about, and really what Bell is mostly about (except in much subsequent discussion).&lt;br /&gt; &lt;br /&gt; After all, what happens exactly if the locality argument was not included? You couldn&amp;#039;t rely on his conclusion because in an actual experimental situation, the sample is compromised. I.e. the results at Alice are affected when we measure at Bob. And we need them to be independent as to the statistics. So we only need enough to make the statistics valid, and for that we only need Bell&amp;#039;s &amp;quot;vital assumption&amp;quot; and no more. We don&amp;#039;t care if superluminal signaling can or cannot occur, or any kind of signalling or influence for that matter, as long as it does not affect our particular test.
 
ttn said:
This isn't Bell Locality. You've forgotten one absolutely crucial thing: all the probabilities here need to be conditionalized on a *complete* description of the state of the particles in the past.

That is wrong. You don't understand WHY Bell Locality exists. It ONLY exists to make the Theorem work. No more, no less.

Using an analogy:

1. I have a coin I want Alice and Bob to analyze, and I can break the coin in half and give it to each. Each will examine a different attribute of the coin. Unfortunately, their test is a destructive test and so they can each do only one test and they need at least half a coin to do a test.

When done, we will know 2 things about the coin and our conclusion should be reliable - as long as we make sure that Alice's test does not distort Bob's - and vice versa. Of course no one is saying that it should distort anything because the 2 halves of the coin are separate in this classical example.

2. But we have a very special case with a Bell test: it must use ENTANGLED particles and so we are at great risk that Alice's test will skew Bob's. But there is good news: we can rule it out by assumption (which weakens our conclusion of course). So the locality assumption is that our separate tests will yield independently valid results and will not skew the outcomes.

Of course, if I could PROVE this type of locality by experiment then I wouldn't need to assume it, would I? :smile:
 
DrChinese said:
Note that the location of the particles and the speed of any hypothetical influence is not even mentioned! That is because it is simply NOT needed in any way to get to our conclusion - to prove Bell's Theorem!

What you say initially is correct: details like the speed of light don't appear in Bell's locality condition. It is just a condition saying that one thing is independent of another. Then, from that condition, the inequality follows -- so it is an indirect test of whether or not the one thing depends on the other. If it doesn't so depend, the inequality should be respected by the experimental results. If there is some dependence on the distant setting/outcome, the inequality will be violated.

Any information about the speed of light comes exclusively from experiment, not from the theorem. If you do the experiment such that Alice and Bob are 3*10^8 meters apart, and they randomly pick their polarizer settings one second before their particles arrive, then the "signal" (or causal effect or whatever you want to call it) that "carries" the dependence (which we know is present if the inequality is violated) must be propagating at the speed of light or faster. If you change the distance or the timing, you will set some different lower limit on the speed involved in the "nonlocal dependence."
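A quick back-of-the-envelope version of that lower bound (using just the numbers quoted above; the helper name is mine):

C = 3e8  # m/s, speed of light (rounded)

def min_influence_speed(separation_m, window_s):
    # Any influence carrying the setting dependence must cross the separation
    # within the switching window, so its speed is at least separation / window.
    return separation_m / window_s

v = min_influence_speed(3e8, 1.0)     # the example above: 3*10^8 m apart, settings chosen 1 s before
print(f"required speed >= {v:.1e} m/s = {v / C:.1f} c")
print(f"with a 0.01 s window: >= {min_influence_speed(3e8, 0.01) / C:.0f} c")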

This is all of course just why Aspect's experiment was so important. Before that, there were experiments showing that Bell's inequalities were violated, but this in no way implicated relativistic causality, since the kind of causal dependence one needed to postulate to account for the experimental results moved with a speed that was well less than c. Aspect's experiment finally got a long enough distance between Alice and Bob and a brief enough switch timing, that the "nonlocal action at a distance" has to be happening faster than signals propagating subluminally could possibly explain.



3. ... I say that we want the least assumptions possible, and therefore we want the weakest assumptions possible. So we need signal locality to be stronger than (or equal to) Bell Locality. That makes Bell's Theorem as strong as possible. So by weakening Bell Locality as I have in 2. that result is accomplished.

I can't really follow this, but it sounds like you're saying you can derive a Bell type inequality from a weaker locality assumption (weaker than Bell Locality, i.e., something like signal locality). That is of course false, since we already know of at least two signal local theories which correctly predict violations of Bell's inequalities.


In other words, don't think of Bell as relating to locality at all. Think of it as relating to the realism requirement, which is entirely what EPR was about, and really what Bell is mostly about (except in much subsequent discussion).

Yeah, neither Einstein nor Bell were concerned with locality at all. EPR and Bell's theorem have nothing whatever to do with locality. :smile:
 
DrChinese said:
1. I have a coin I want Alice and Bob to analyze, and I can break the coin in half and give it to each. Each will examine a different attribute of the coin. Unfortunately, their test is a destructive test and so they can each do only one test and they need at least half a coin to do a test.
When done, we will know 2 things about the coin and our conclusion should be reliable - as long as we make sure that Alice's test does not distort Bob's - and vice versa. Of course no one is saying that it should distort anything because the 2 halves of the coin are separate in this classical example.

I don't understand the analogy. What are they measuring about the coin?


2. But we have a very special case with a Bell test: it must use ENTANGLED particles and so we are at great risk that Alice's test will skew Bob's.

Um, hello, that is the whole freaking point. If Alice's test skews Bob's -- i.e., the fact that Alice performs an experiment on her particle *changes* the state of Bob's particle and/or the outcome of Bob's experiment -- that is a nonlocal action at a distance. It's exactly the kind of thing that, if really happening, would be a complete and total shock from the point of view of relativity, which is usually taken to prohibit such action at a distance.


But there is good news: we can rule it out by assumption (which weakens our conclusion of course).

Huh? The whole point of Bell's theorem is that it gives you an empirical test for whether Alice's experiment skews Bob's. Why would you want to just "rule it out by assumption"? Do the freaking experiment and find out whether or not the one skews the other. What you find is: it does!


So the locality assumption is that our separate tests will yield independently valid results and will not skew the outcomes.
Of course, if I could PROVE this type of locality by experiment then I wouldn't need to assume it, would I? :smile:

I have no idea what you are talking about.

I'm starting to think you have no interest in really understanding this topic. Nothing you say makes any sense. You go on and on and on about EPR and Bell and Locality and Realism and all sorts of things, but you never get any closer to actually understanding anything. It's starting to look like you have a vested interest in staying confused -- and hence starting to make me think this discussion is a complete and total waste of my time.
 
  • #10
I got into trouble because I was (foolishly) trying to shoehorn signal locality into being a weaker condition than Bell Locality.
Here's a second attempt:
Let's say we have some environment that can be accurately modeled as a stochastic process.
Let's also say that we have a disturbance principle so we won't deal with anything that has any observations in its past. (There is, of course, the issue that a changed measurement setting qualifies as an event for just about any reasonable notion of observed event, but I'll ignore that for now.)
Then the environment is Bell Local if, for any two events e_1 and e_2, neither of which is in the other's past and which have the common past h_{1,2} (note that this past includes any unobserved events or other hidden state), we have
p(e_1 | h_{1,2}) = p(e_1 | e_2 \text{ and } h_{1,2})
That is to say, conditioning on e_2 does not change the probability of e_1 for any particular common history.
And the environment is signal local if, for any two events e_1 and e_2, neither of which is in the other's past and which have the common past observations o_{1,2}, we have
p(e_1 | o_{1,2}) = p(e_1 | e_2 \text{ and } o_{1,2})
Since we're only considering events with empty observation histories, that reduces to
p(e_1)=p(e_1 | e_2)
And, now we can integrate over the possible h_{1,2} to show that signal locality is indeed a weaker condition than Bell Locality, provided the measurement settings are not 'observable events'.
Now, it's unclear whether Bohmian Mechanics should be considered Bell Local because the notion of past is rendered ambiguous by the manifestly non-local aspects of the theory. However, signal locality is (more or less by definition) restricted to considering the observations in the past light-cone so that is still well-defined in Bohmian Mechanics.
 
  • #11
Slightly off-topic, but since we're talking about Bell's theorem, I have the following question:

What is the justification for the assumption that the detector settings are assigned independently of the spin-state of the particles?
 
  • #12
ttn said:
1. Um, hello, that is the whole freaking point.

2. Huh? The whole point of Bell's theorem is that it gives you an empirical test for whether Alice's experiment skews Bob's. Why would you want to just "rule it out by assumption"? Do the freaking experiment and find out whether or not the one skews the other. What you find is: it does!
I have no idea what you are talking about.

1. I said they were entangled. But apparently you do not understand that the entanglement is used for Bell tests because the entangled particles essentially have identical STATES (yes, I know, they are usually orthogonal) - and NOT because testing one affects the other. We must rule that out or our results will be skewed.

2. Apparently you keep missing Bell's words in his paper, which specify that it is an assumption:

The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet of particle 1, nor A on b.

My point is that this assumption is no longer necessary because since Bell wrote his paper, this particular element has been demonstrated experimentally. This is NOT because the CHSH inequality was violated, because that is a test of a Bell Inequality. I am talking about the assumption above, which is always tested during calibration of a Bell test. The setting at Alice is held constant, and the setting at Bob is varied. (This is done to locate the maximum correlation setting, but it also proves our assumption experimentally. The same calibration is also used to prove that there is entanglement - which is an experimental requirement and not a theoretical one.) There is no change in the outcomes at Alice. Therefore Bell Locality is proven by experiment.

And the result is that there IS NO SKEWING; and if there were any skewing, you could send a message that way.

As everyone knows, the only thing that "changes" is the correlation percentage. A simple reference to the definition above will convince you that correlation is not part of it. The correlation ONLY relates to the Inequalities which is not part of the assumption above.
 
  • #13
ttn said:
Nothing you say makes any sense.

Ouch, that stings. :smile:

I am nearing completion of a revised proof of the Theorem which should be short and sweet. Hindsight is great, especially when you know the answer beforehand. It is a slightly more formal proof than here but uses the same basic logic. So see for yourself - I derived this particular formulation myself and didn't copy anyone else's work.

(So I guess it might be hard to say I am parroting someone else's words... In fact, if you google "Bell's Theorem Negative Probabilities" you won't find another actual derivation of it other than mine - for the angle settings I present, the expectation value (for the A=B<>C cases directly) is -0.1036.)

No, I don't really think I need to justify my understanding of Bell's Theorem to you. I have provided quotes, formulae, specifics, references, etc. I am here to learn and I hope you are too. I enjoy my participation here and I hope others do as well.

I prefer to stay focused on the substance of the discussion. I think you have plenty to offer to this discussion; perhaps my points are not clear to you and I can do a better job of expressing myself. So even if you are a bit acidic, I value your contributions.
 
  • #14
NateTG said:
Slightly off-topic, but since we're talking about Bell's theorem, I have the following question:

What is the justification for the assumption that the detector settings are assigned independently of the spin-state of the particles?

Do you mean: Alice and Bob's choices of detector settings?
 
  • #15
DrChinese said:
Do you mean: Alice and Bob's choices of detector settings?

Yes, that's what I mean.
 
  • #16
DrChinese said:
1. I said they were entangled. But apparently you do not understand that the entanglement is used for Bell tests because the entangled particles essentially have identical STATES (yes, I know, they are usually orthogonal) - and NOT because testing one affects the other. We must rule that out or our results will be skewed.

If entanglement simply meant that the initial spins of the particles were correlated (such that there was, later, no influence of Alice's measurement on Bob's outcome or vice versa) Bell's inequalities would *not* be violated. That's the whole point here. The violation of Bell inequalities proves that testing one affects the other. The correlations can *not* be accounted for in terms of pre-correlated properties which locally determine the outcomes.


2. Apparently you keeping missing Bell's words in his paper, which specify it is an assumption:
The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet of particle 1, nor A on b.
My point is that this assumption is no longer necessary because since Bell wrote his paper, this particular element has been demonstrated experimentally. This is NOT because the CHSH inequality was violated, because that is a test of a Bell Inequality. I am talking about the assumption above, which is always tested during calibration of a Bell test. The setting at Alice is held constant, and the setting at Bob is varied. (This is done to locate the maximum correlation setting, but it also proves our assumption experimentally. The same calibration is also used to prove that there is entanglement - which is an experimental requirement and not a theoretical one.) There is no change in the outcomes at Alice. Therefore Bell Locality is proven by experiment.

You are again simply confusing Bell Locality with signal locality.

I don't know how to say it any clearer than I've said it before, so I'm giving up trying to explain this to you. But let me just say this: you are not understanding Bell yet, and I would urge you to get ahold of "Speakable and Unspeakable" and read through all of his papers. I think you are being thrown by focusing exclusively on his original paper. But he makes things much, much clearer in his later writings. See in particular "La Nouvelle Cuisine", an absolutely brilliant piece of writing that is the last chapter in the 2nd edition of "Speakable..." It will overturn all your confusions if anything will.


And the result is that there IS NO SKEWING; and if there were any skewing, you could send a message that way.

Now you're just defining "skewing" as signal locality. Yes, if there were that kind of skewing, you could send a message. Duh. But the point is that Bell Locality is a stronger condition than signal locality. Bell Locality can be violated even by a theory that still respects signal locality. OQM and BM are two theories with just these properties -- they're Bell NonLocal but still signal local. There's non-local action at a distance going on "behind the scenes" according to both theories, but, alas, it cannot be used to transmit messages (in the one case because of ineliminable uncertainty in the initial conditions, and in the other because of the inherent randomness associated with the collapse dynamics).
 
  • #17
NateTG said:
Yes, that's what I mean.

I can't say for sure, but I don't think it really can be justified completely. So it would need to be assumed. Maybe that's just a Many Worlds branch...
 
  • #18
ttn said:
If entanglement simply meant that the initial spins of the particles were correlated (such that there was, later, no influence of Alice's measurement on Bob's outcome or vice versa) Bell's inequalities would *not* be violated. That's the whole point here. The violation of Bell inequalities proves that testing one affects the other. The correlations can *not* be accounted for in terms of pre-correlated properties which locally determine the outcomes.

You are focused on the phenomenon of entanglement and trying to explain that. Entanglement is a tool to gain knowledge, and we use it to prove Bell's Theorem as follows: You do a direct test on Alice. You can only do one such test and then Alice is disturbed. But then you do a different test on Bob. That gives you indirect knowledge about Alice. It's very simple.

(Of course, we know that quantum mechanics limits the validity of this information - but that is the whole point. Bell's Theorem does not assume that Quantum Mechanics is true! Quantum mechanics is not a realistic theory so of course this concept does not apply.)
 
  • #19
Bell's Inequality without Locality

What is Bell Realism?
===============

The definition of realism used in Bell’s paper is tied back to the ideas originally put forth in EPR [2] – after all, the paper was titled “On the Einstein Podolsky Rosen Paradox”. The definition is always tied to observable quantities as these are objectively verifiable and therefore within the realm of science. Of course, one might presume that there is a deeper level too, but we do not need this for our definition.

a. Realism according to EPR: EPR asserts that there must be simultaneous reality – and therefore definite values - for non-commuting operators and by extension all physical operators.[2]
b. Realism according to Einstein: “I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."
c. Realism according to Bell: “It follows that c is another unit vector [in addition to observables a and b]…”. [1]
d. Realism of a particle as used here: A particle has simultaneous definite values for any and all observables.

Here, we define “realism” in specific mathematical form for photons:

For any three angle settings a, b and c for an individual photon, measured singly or as a statistical ensemble via a polarizer which yields an observable result labeled as + or – arbitrarily, where p() represents the observable probability of the stated outcomes:

Rule 1 Assumption:
1 >= p(a+, b+, c+) >= 0
(..and similar for all permutations of the above.)

Rule 2 Assumption:
p(a+) = p(a+, b+) + p(a+, b-)
= p(a+, b+, c+) + p(a+, b-, c+) + p(a+, b+, c-) + p(a+, b-, c-)
(…and similar for all permutations of the above.)

So our two assumptions are: the likelihood of any outcome is between 0 and 1 inclusive; and any outcome is the sum of the + and – outcomes permutations with an additional setting variable included – even if they were not observed. These innocent assumptions form our definition of realism.

We will also define 2 helper functions, corr() and noncorr(), which stand for correlation and non-correlation, which will simplify the arithmetic, such that:

Rule 3 Definition:
Where we assume there is a sample of suitable size:

corr(a, b) = p(a+, b+) + p(a-, b-)
(…and similar for all permutations of the above.)

noncorr(a, b) = p(a+, b-) + p(a-, b+)
(…and similar for all permutations of the above.)

Derivation of Bell’s Inequality
=====================
Using these rules, we derive as follows, beginning with the value of 2 specific outcome permutations (out of 8 possible when there are 3 settings):

(1) p(a+, b+, c-) + p(a-, b-, c+)

Double and divide by 2:
= ( (p(a+, b+, c-) + p(a-, b-, c+)) + (p(a+, b+, c-) + p(a-, b-, c+)) ) / 2

Add and subtract the same value to the numerator:
= ( (p(a+, b+, c-) + p(a-, b-, c+)) + (p(a+, b+, c-) + p(a-, b-, c+)) +
(p(a+, b-, c-) + p(a-, b+, c+)) + (p(a+, b+, c+) + p(a-, b-, c-)) –
(p(a+, b-, c-) + p(a-, b+, c+)) – (p(a+, b+, c+) + p(a-, b-, c-)) ) / 2

Rearrange terms:
= ( (p(a+, b+, c+) + p(a+, b+, c-) + p(a-, b-, c+) + p(a-, b-, c-)) +
(p(a+, b+, c-) + p(a+, b-, c-) + p(a-, b+, c+) + p(a-, b-, c+)) –
(p(a+, b+, c+) + p(a-, b+, c+) + p(a+, b-, c-) + p(a-, b-, c-)) ) / 2

Simplify by using Rule 2 assumption substitutions:
= ( (p(a+, b+) + p(a-, b-)) +
(p(a+, c-) + p(a-, c+)) –
(p(b+, c+) + p(b-, c-)) ) / 2

Simplify by using Rule 3 substitutions:
(2) = (corr(a,b) + noncorr(a,c) – corr(b,c) ) / 2

Recalling that by our Rule 1 assumption:
p(a+, b+, c-) >= 0 and
p(a-, b-, c+) >= 0 and therefore also
p(a+, b+, c-) + p(a-, b-, c+) >= 0

By equivalence of (1) and (2), the last becomes:
( corr(a,b) + noncorr(a,c) – corr(b,c) ) / 2 >= 0

To be explicit that we are talking about one photon, which we will call Alice:
(3) ( corr(Alice.a, Alice.b) + noncorr(Alice.a, Alice.c) – corr(Alice.b, Alice.c) ) / 2 >= 0

Result (3) is a deduction from assuming simultaneous reality to Alice’s a, b and c polarization observables. This is a Bell Inequality, and makes reference only to the internal relationship of 3 hypothetical simultaneous observables. This is an uncontroversial requirement of any “realistic” theory.
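A short numerical sketch of result (3) (a check using only Rules 1-3; the angle choice at the end is a hypothetical one of mine, picked because it reproduces the -0.1036 figure mentioned in an earlier post when Malus-law correlations are plugged in):

import itertools, math, random

outcomes = list(itertools.product("+-", repeat=3))   # the 8 permutations of (a, b, c) values

def corr(p, i, j):
    # corr(x, y) = p(x+, y+) + p(x-, y-), marginalizing over the third setting (Rule 2)
    return sum(v for o, v in p.items() if o[i] == o[j])

def noncorr(p, i, j):
    return sum(v for o, v in p.items() if o[i] != o[j])

def lhs(p):
    # ( corr(a,b) + noncorr(a,c) - corr(b,c) ) / 2, with indices a=0, b=1, c=2
    return (corr(p, 0, 1) + noncorr(p, 0, 2) - corr(p, 1, 2)) / 2

# Rules 1 and 2: any non-negative, normalized assignment over the 8 permutations.
for _ in range(10_000):
    w = [random.random() for _ in outcomes]
    total = sum(w)
    p = {o: x / total for o, x in zip(outcomes, w)}
    assert lhs(p) >= -1e-12          # inequality (3) holds for every such assignment

# For comparison: Malus-law correlations cos^2(x - y) with a hypothetical angle choice.
a, b, c = 0.0, 67.5, 45.0
cos2 = lambda x, y: math.cos(math.radians(x - y)) ** 2
value = (cos2(a, b) + (1 - cos2(a, c)) - cos2(b, c)) / 2
print(f"Malus-law value: {value:.4f}")   # about -0.1036, outside the >= 0 bound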

References:
[1] J.S. Bell: "On the Einstein Podolsky Rosen paradox" Physics 1 #3, 195 (1964).
[2] A. Einstein, B. Podolsky, N. Rosen: "Can quantum-mechanical description of physical reality be considered complete?" Physical Review 47, 777 (15 May 1935).
 
  • #20
I'd rather not read through this whole argument - I just have a question for Dr. Chinese. Would you call the so-called "PR boxes" nonlocal? (See e.g. http://arxiv.org/abs/quant-ph/0506180 for an intro to them.) They are basically a pair of hypothetical magic boxes (used in quantum information to help quantify nonlocal resources) which are "maximally nonlocal" but still do not allow for signalling. Normally they are imagined as a pair of boxes into which Alice can feed a 0 or a 1 and Bob can feed a 0 or a 1, and the outputs satisfy that when both parties input a 1 their outputs are different (i.e. 01 or 10, though which of these two cases occurs is chosen randomly), but in the other three cases their outputs are the same (00 or 11 - again each chosen with probability 1/2). (See Eq. 2 in the above paper.) It's unclear to me whether you would call such boxes local or nonlocal, but they certainly don't allow signalling - since locally each party sees a 0 or 1 output with probability 1/2 regardless of what they (or the other person) do.
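For concreteness, a minimal sketch of those box statistics (assuming the usual convention that the outputs are uniform and satisfy A XOR B = x AND y, which matches the description above):

import itertools

def pr_box_prob(A, B, x, y):
    # P(A, B | x, y) for a PR box: outputs uniform, correlated so that A xor B = x and y
    return 0.5 if (A ^ B) == (x & y) else 0.0

# No signalling: Alice's local statistics are 50/50 whatever Bob feeds in (and vice versa).
for x, y in itertools.product((0, 1), repeat=2):
    p_A0 = sum(pr_box_prob(0, B, x, y) for B in (0, 1))
    print(f"x={x}, y={y}: P(A=0) = {p_A0}")

# Yet the boxes are "maximally nonlocal": the CHSH combination reaches 4,
# beyond the quantum bound of 2*sqrt(2) and the local bound of 2.
def E(x, y):
    return sum((1 if A == B else -1) * pr_box_prob(A, B, x, y) for A in (0, 1) for B in (0, 1))

print("CHSH S =", E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1))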

I think it's useful to have a completely operational definition of what one calls local/nonlocal - i.e. theory and philosophy independent, just a definition in terms of the statistics of measurement outcomes. Thus my question - if we had such a pair of boxes, would you characterise them operationally as local or nonlocal? I suspect you might call them local, which is ok, but then it's clear your definition is incompatible with the more common usage.
 
  • #21
DrChinese said:
What is Bell Realism? [...]

This all appears under the heading "Bell's inequality without locality". But do you seriously think you've avoided a locality assumption here? Your requirements for the probabilities of outcomes on each side have one notable feature: none of the probabilities depend on what's going on on the *other* side. Well that is precisely the locality assumption. You haven't *mentioned* it, true. But you've made that assumption nonetheless.

And you're dreaming if you think you can get an inequality without making that locality assumption. You can *always* make a theory that predicts the same as QM by allowing the probabilities on either side to depend explicitly on the setting/outcome on the far side.


The definition of realism used in Bell’s paper is tied back to the ideas originally put forth in EPR [2] – after all, the paper was titled “On the Einstein Podolsky Rosen Paradox”. The definition is always tied to observable quantities as these are objectively verifiable and therefore within the realm of science. Of course, one might presume that there is a deeper level too, but we do not need this for our definition.

I'm sorry, but this is confused too. Here you make it sound like you also aren't making a "hidden variable" assumption. Rather, you're just making some definite statements about the probabilities for various observables. But guess what? These are probabilities for observable quantities to have particular values regardless of whether or not they are observed. Such probabilities cannot be defined in OQM; hence these involve "hidden variables" -- specifically, your probabilities are all probabilities for the hidden variables to have particular values.


c. Realism according to Bell: “It follows that c is another unit vector [in addition to observables a and b]…”. [1]

Huh? "It follows that c is another unit vector..." has nothing to do with realism. And a, b and c aren't "observables". They are directions in space.



Rule 1 Assumption:
1 >= p(a+, b+, c+) >= 0

What is this the probability *of*? It's the probability for three hidden variables to possesses a certain particular value (+), right? And these are *local* hidden variables in the sense that the variables are "carried" by each particle separately and determine the outcomes of spin measurements done on that particle (without any additional dependence on distant things like which measurements are performed on the partner particle). Right?

I'm not quite sure what you're thinking, but what you're *doing* here is assuming local deterministic hidden variables, and then following Bell in deriving an inequality from that assumption. But then you seem to want to say afterwards that you haven't assumed locality (or hidden variables). In other words, you seem pretty unclear on what you're actually doing.


Result (3) is a deduction from assuming simultaneous reality to Alice’s a, b and c polarization observables. This is a Bell Inequality, and makes reference only to the internal relationship of 3 hypothetical simultaneous observables. This is an uncontroversial requirement of any “realistic” theory.

It's an uncontroversial requirement of any *local* hidden variable (i.e., "realistic", if you insist) theory. You haven't mentioned locality here only because you've glossed over the first half of the argument and just assumed, from the beginning, that local hidden variables determine the outcomes on each side.
 
  • #22
ttn said:
This all appears under the heading "Bell's inequality without locality". But do you seriously think you've avoided a locality assumption here? Your requirements for the probabilities of outcomes on each side have one notable feature: none of the probabilities depend on what's going on on the *other* side. Well that is precisely the locality assumption. You haven't *mentioned* it, true. But you've made that assumption nonetheless.

Of course I avoided the locality assumption. It isn't present, and there is no flaw. The problem is that everyone always jumps to the 2 particle experiment. Bell's Theorem is not a theorem about entanglement, and it is not a theorem about quantum superpositions... it is a theorem regarding local reality. First let's define reality. I have.

The reality everyone cares about is the simultaneous reality of observables, whether actually observed or not. This is the reality related to a single particle. It has absolutely NOTHING to do with pairs of particles. This is the entire point of EPR, because completeness requires a theory which does not require the reality of non-commuting observables to be dependent on how measurements are performed (those measurements could be on the particle itself or elsewhere). I don't know how better to express it than in Einstein's own words:

"I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

If you care to frame the definition of reality some other way, let's hear it. I have presented my definition; it is not only specific and in keeping with both Bell and EPR, but leads to a Bell Inequality. Note that I do NOT claim - yet - that locality is not necessary to the conduct of a Bell test. All I have done is define reality and shown that it alone leads to a Bell Inequality. The inequality makes a specific prediction for the relationship of Alice.a, Alice.b and Alice.c. There can really be no controversy on this portion of the argument as no controversial claims are made and the conclusion matches everything so far.
 
  • #23
DrChinese said:
Of course I avoided the locality assumption. It isn't present, and there is no flaw. The problem is that everyone always jumps to the 2 particle experiment.

Ah, I see, you were just talking about 3 different spin measurements on a single particle. Fair enough. Then I agree, there is no locality assumption in what you did.

But then why in the world do you call the result a Bell inequality? It has some superficial resemblance, yes. But it is of no physical import. Here's why: your inequality can't even be tested! It says things about the joint probability of various different hidden variables, e.g.,

P(a+,b+)

and how these relate to other joint probabilities. But none of these quantities can even be measured empirically. On a single particle, you get to measure the spin along a or the spin along b, but you can't measure both at the same time, and if you measure one, you are no longer entitled to assume that a subsequent measurement on the other will give the same outcome it would have given before.

So you tell me: why should anyone be remotely interested in this inequality you wrote down? It's true that a "realistic" theory (in the way you're defining that, which most people would just call local hidden variables that determine the spin outcomes, but whatever) will have its hidden variables constrained by this inequality. But who cares, since I can't test that empirically? And even if I could test it empirically (and the inequality were violated) what would that prove? Only something equivalent to the Kochen-Specker theorem, that spin outcomes can't be determined in this way by hidden variables -- at the very least, there is some kind of "contextuality."



Note that I do NOT claim - yet - that locality is not necessary to the conduct of a Bell test. All I have done is define reality and shown that it alone leads to a Bell Inequality. The inequality makes a specific prediction for the relationship of Alice.a, Alice.b and Alice.c. There can really be no controversy on this portion of the argument as noi controversial claims are made and the conclusion matches everything so far.

You're absolutely right. I misunderstood because I just assumed you meant by Bell inequality what other people usually mean, i.e., I assumed you were talking about a 2 particle correlation type experiment.
 
  • #24
DrChinese said:
The reality everyone cares about is the simultaneous reality of observables, whether actually observed or not. This is the reality related to a single particle. It has absolutely NOTHING to do with pairs of particles. This is the entire point of EPR, because completeness requires a theory which does not require the reality of non-commuting observables to be dependent on how measurements are performed (those measurements could be on the particle itself or elsewhere). I don't know how better to express it than in Einstein's own words:
"I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."
If you care to frame the definition of reality some other way, let's hear it.
Considering that the Heisenberg Uncertainty Principle makes it abundantly clear that there is some interaction between the process of measurement and the 'state' of the particle, it seems rather odd that one would automatically assume that a particle's properties exist independent of measurement.
For example, the two-slit experiment suggests that a particle's location is not well-defined. However, it seems entirely possible that there are other properties that the particle could have.
So, how about this instead:
The results of any repeatable experiment can be accurately modeled as a stochastic process.
Which has the advantages of being explicit for things like Bell's Theorem, and, also avoids specifying what 'properties' a particle has.
 
  • #25
Tez said:
I think it's useful to have a completely operational definition of what one calls local/nonlocal - i.e. theory and philosophy independent, just a definition in terms of the statistics of measurement outcomes. Thus my question - if we had such a pair of boxes, would you characterise them operationally as local or nonlocal? I suspect you might call them local, which is ok, but then it's clear your definition is incompatible with the more common usage.

I agree totally - the definition of locality must be clear, mathematically precise and lastly: something we can agree upon. If we don't see locality the same way, then naturally we will come to different conclusions. So before I answer about the PR boxes, I would like to ask this question back:

Is the purpose of this definition of locality to formulate a condition that is experimentally testable? Is it to use to differentiate a theory so we can call it local or non-local?

There are two different programs, as I see it:

a. Locality-oriented: Use a Bell test to determine if there exist non-local influences (either signal-type or not).

b. Reality-oriented: Use a Bell test to determine if there is simultaneous reality for non-commuting observables.

The standard view of the results of Bell tests is:

a. If you assume reality, then non-local effects are demonstrated by Bell tests for theories that are realistic. (Some people also extend the results to indicate that non-local effects are demonstrated independently of the assumption of reality. I believe ttn would qualify as a member of that camp.) However, note that the non-local effects are essentially limited to collapse of the wave function and nothing else because there is no violation of signal locality. So now our definition of locality is: wave function collapse cannot occur faster than c. Therefore: Non-local theories can be realistic. By this definition QM is both non-local and non-realistic.

b. If you assume locality, as I do, then realistic theories are not viable as a result of Bell tests. (I also believe - but I am a minority on this - that the underlying reality of particle observables is now excluded for ALL theories - local or not. I have not proven that - yet.) But locality does not need to be defined as above to reach this conclusion. It only needs to be defined as per the requirements of Bell's Theorem, which I believe will be shown to be much less restrictive than the above. And that definition should match Bell's verbatim: "The vital assumption is that the result B for particle 2 does not depend on the setting a, of the magnet of particle 1, nor A on b." That is a very different definition of locality, as I am sure you would agree - and I didn't make it up, I am simply following it explicitly. Therefore: All theories that respect this particular type of locality must not be realistic too. They may, however, be non-local in other ways. By this definition, QM is "local" and non-realistic. So that is why Bell Locality must be defined differently than other possible definitions.

At least one of a. or b. is justified by the results of Bell tests. For most, a. or b. is just a personal preference. So it is easy to see that ttn sees a. as the answer, and I see b. as the answer.
 
  • #26
NateTG said:
So, how about this instead:
The results of any repeatable experiment can be accurately modeled as a stochastic process.

Which has the advantages of being explicit for things like Bell's Theorem, and, also avoids specifying what 'properties' a particle has.

Does that trade one can of worms for another? For example, I find it difficult to accept that pq<>qp can be modeled stochastically, but I know this has been claimed already as feasible. So I am personally very wary of stochastic theories. That is because I see non-commuting observables as evidence against realism - and therefore by your yardstick as evidence against stochastic theories too. So I am not sure if they are or are not the same yardstick.

But that is just my opinion, and I am not certain I am correct. What do others think? Comments?
 
  • #27
ttn said:
Ah, I see, you were just talking about 3 different spin measurements on a single particle. Fair enough. Then I agree, there is no locality assumption in what you did.
But then why in the world do you call the result a Bell inequality? It has some superficial resemblance, yes. But it is of no physical import. Here's why: your inequality can't even be tested!
...

You're absolutely right. I misunderstood because I just assumed you meant by Bell inequality what other people usually mean, i.e., I assumed you were talking about a 2 particle correlation type experiment.

Thanks, I hope we can agree on where we have come so far. That is why I called it uncontroversial, because so far it is of limited use. But I'm not done yet! :smile:

We need a correspondence to a Bell test, and I have not accomplished that yet. I have that portion and will post it shortly. It does not have a locality assumption either - I know that sounds impossible but it doesn't.

But before I post it, I want to pose a question about my result so far. I will do that in a follow-on post.
 
  • #28
DrChinese said:
Does that trade one can of worms for another? For example, I find it difficult to accept that pq<>qp can be modeled stochastically, but I know this has been claimed already as feasible. So I am personally very wary of stochastic theories. That is because I see non-commuting observables as evidence against realism - and therefore by your yardstick as evidence against stochastic theories too. So I am not sure if they are or are not the same yardstick.

That something can be disturbed or destroyed doesn't mean that it isn't real, so I don't see a conflict between non-commutative measurements and realism.
 
  • #29
Question...

Repeating my result from a previous post, we are talking about one photon, which we will call Alice:

(3) ( corr(Alice.a, Alice.b) + noncorr(Alice.a, Alice.c) – corr(Alice.b, Alice.c) ) / 2 >= 0

Here is my question: why is it that we can't test Alice in 2 sequential tests? I.e. If we could simply measure a pair of Alice's properties (not all 3, just 2), such as Alice.a and Alice.b, then we could plug the results into (3) and we'd know if realism is viable or not.

Of course, it happens that Alice.a and Alice.b don't commute. But so what? I haven't assumed any result that depends on this fact. So the question is: why would you reject a test of (3) by testing Alice.a and Alice.b? I mean, after all, it's pretty easy to prove that at the time I measured Alice.b, Alice also simultaneously had the value of Alice.a I had already measured. (We all agree that the measurement of Alice.b means that we may or may not get the previous Alice.a again if we measure that property a second time.)

So again, what would the criticism of this test be? Why do we need to jump to a Bell test rather than simply do a test as I describe? After all, I have set it up for a direct test of realism...
 
  • #30
DrChinese said:
There are two different programs, as I see it:
a. Locality-oriented: Use a Bell test to determine if there exist non-local influences (either signal-type or not).
b. Reality-oriented: Use a Bell test to determine if there is simultaneous reality to non-commuting observables.

I won't argue about what is or isn't "standard", but this is not quite right. The two issues are related, not two alternative ways of viewing the same one argument. The piece you are missing -- the thing that links these two issues together -- is the EPR argument. They showed that, under the assumption of locality, and given the perfect correlations when A and B measure along the same axis, there *must exist* local hidden variables which determine the outcomes. (This can be made rigorous in terms of Bell Locality: every Bell Local theory which successfully accounts for the perfect correlation *must* be a deterministic local hidden variable theory in the sense of EPR.)

Then, Bell's Theorem is simply a further step from where EPR leaves off: *given* these local hidden variables which determine the outcomes (which remember are *required* by locality and the perfect correlations!), can the rest of the QM predictions be matched? Answer: no. That's Bell's Theorem.

My point here is that, if you think we have some kind of *choice* about whether to reject locality or realism (in the face of empirical violations of Bell's inequality), that's because you've forgotten that the "realism" assumption (which Bell does indeed make, no doubt about that) *follows* from locality (in the first half of the argument). So the whole thing is a 2 part argument: part 1 is EPR, and shows that locality requires the existence of local hv's which determine the outcomes on each side independent of what's going on on the other side. So we have local deterministic hidden variables as a *consequence* of locality. Then, part 2 of the argument (Bell's Theorem) shows that this consequence has another logical implication which is in conflict with experiment (or at any rate with the QM predictions). So putting these two parts together, it is clear that locality (viz, Bell Locality) is empirically falsified.


The standard view of the results of Bell tests is:
a. If you assume reality, then non-local effects are demonstrated by Bell tests for theories that are realistic. (Some people also extend the results to indicate that non-local effects are demonstrated independently of the assumption of reality. I believe ttn would qualify as a member of that camp.)

I am, but you aren't appreciating yet (I think) why I say that. It's *not* based exclusively on Bell's Theorem. Bell assumes *local* *hidden variables* and, if all we knew was that Bell's inequalities were violated, we'd have a choice about which of those two ideas to blame. But we *also* know (from EPR) that the hidden variables are a logical *consequence* of locality (and some other empirically confirmed predictions of QM). So, when those two parts are combined, we have: {locality} + {some empirically correct predictions of QM} --> {some predictions that conflict with QM and experiment}. So locality is false. *That's* why I'm "in that camp."


However, note that the non-local effects are essentially limited to collapse of the wave function and nothing else because there is no violation of signal locality. So now our definition of locality is: wave function collapse cannot occur faster than c.

This is unnecessarily tied to OQM. You might have a theory which explains all the data that doesn't even have wave functions in it. But what we know for sure about such a theory is that it will violate Bell Locality.


b. If you assume locality, as I do, then realistic theories are not viable as a result of Bell tests.

Likewise, if you assume locality, then non-realistic theories are not viable as shown by the EPR argument.

A non-realistic theory is, by definition, one in which there are *not* "pre-measurement values" encoded (as hv's) in the particles, values which are merely revealed by what we call "spin measurements". Right? But without those, there is *no way* you are going to explain the perfect correlations that are *actually observed* when Alice and Bob measure along the same axis. You are *bound* to predict that the stochastic coming-into-existence of values on the two sides sometimes has a + and a + coming into existence when, in order to have the perfect correlations, we need a + and a - (or whatever). Bottom line: there is no Bell Local way to account for the perfect correlations except to attribute local outcome-determining hidden variables on both sides. Under the assumption of locality, such local hv's simply have to exist. (Yet Bell shows that they *can't* exist. So the locality assumption is false.)


At least one of a. or b. is justified by the results of Bell tests. For most, a. or b. is just a personal preference. So it is easy to see that ttn sees a. as the answer, and I see b. as the answer.

It's only a question of "personal preference" if you drop half the argument, i.e., forget EPR. See "La Nouvelle Cuisine" for further discussion.
 
  • #31
DrChinese said:
So again, what would the criticism of this test be? Why do we need to jump to a Bell test rather than simply do a test as I describe? After all, I have set it up for a direct test of realism...

The test assumes that the measurements are non-destructive.
 
  • #32
DrChinese said:
Repeating my result from a previous post, we are talking about one photon, which we will call Alice:
(3) [corr(Alice.a, Alice.b) + noncorr(Alice.a, Alice.c) – corr(Alice.b, Alice.c)] / 2 >= 0
Here is my question: why is it that we can't test Alice in 2 sequential tests? I.e. If we could simply measure a pair of Alice's properties (not all 3, just 2), such as Alice.a and Alice.b, then we could plug the results into (3) and we'd know if realism is viable or not.

Looks like NateTG already cut to the heart of this, but let me elaborate just a bit. You seem to be asking here why this can't be directly tested, since you only need to be able to measure 2 of the spin components simultaneously, not 3. But 2 is just as impossible as 3. In particular, measuring two different spin components simultaneously is impossible. So your inequality is not empirically testable.



Of course, it happens that Alice.a and Alice.b don't commute. But so what? I haven't assumed any result that depends on this fact.

You haven't assumed it in the derivation, that's true. But if you remember what your inequality is *about*, it becomes clear: the inequality refers to the hidden variables (specifically, their probability distribution). So if you could find out the values of those, you could test the inequality. But you can't find out the values -- at least, not if you believe that measuring Alice.a messes up the value-to-be-measured for Alice.b. It's *there* that you have to make this extra assumption -- in leaping from what you measure to what your inequality was originally about.


So the question is: why would you reject a test of (3) by testing Alice.a and Alice.b? I mean, after all, it's pretty easy to prove that at the time I measured Alice.b, Alice also simultaneously had the value of Alice.a I had already measured.

Huh? Now I'm quite confused. So Alice is measuring the spin component of this photon along one direction, and you are measuring the spin component along some other direction? I don't see why you want there to be two people involved, but it doesn't really matter. Whoever's doing it, only one component can be measured at a time, and the act of measuring one *might* mess up the value-to-be-measured for the other one. So, unless you just arbitrarily *assume* that the one measurement doesn't disturb the later outcome, you just can't empirically access the probabilities that your inequality is about.



(We all agree that the measurement of Alice.b means that we may or may not get the previous Alice.a again if we measure that property a second time.)

In other words, the measurement of Alice.b disturbs the value of Alice.a. That's exactly why you can't obtain simultaneously-believable values for a and b.


So again, what would the criticism of this test be? Why do we need to jump to a Bell test rather than simply do a test as I describe? After all, I have set it up for a direct test of realism...

It would be, if you could do it. But look, this is just equivalent to going back to the EPR argument with position/momentum and saying: why don't we just empirically measure the position and momentum at the same time, and see if they both have simultaneous definite values? Then we could directly test Bohr's completeness doctrine (which says they don't). Well, if you could do that, it'd be great, but you just can't. Same thing here.
 
  • #33
NateTG said:
The test assumes that the measurements are non-destructive.

Mmmm... it actually may not be. We don't know. That is sort of what we are trying to find out - or at least it's wrapped up in it. But I agree it is the main objection to using this kind of test.

There is no question that Alice.a and Alice.b don't commute, so you would expect destructive results, as you point out. So let's be clear about this sequence so we don't get off track.

I measure Alice.a
I measure Alice.b
I measure Alice.a which I will call Alice.a'

There is no question that Alice.a' may or may not equal Alice.a. We can't say. So that is a pretty clear statement that Alice.a and Alice.b don't commute.

I measure Alice.a
I measure Alice.a which I will call Alice.a'
I measure Alice.b
I measure Alice.b which I will call Alice.b'

In this case we have confirmed that Alice.a=Alice.a', and Alice.b=Alice.b'. So we must conclude that we know Alice.a' and Alice.b at the same time. In other words, the measurement is hypothesized to be destructive only for non-commuting measurements, and a and a' commute.

But that is essentially assuming the very thing we intended to find out from my (3): whether Alice.a and Alice.b have simultaneous reality by any objective standard - and if they don't commute, then they don't. So if that is raised as an objection - as you point out - then you almost need to throw in the towel on objectively putting forth a realistic theory, because there is no way to demonstrate the reality of non-commuting variables.

Or is there? We will have to try a version of the EPR argument and perform a Bell test to see if we break this barrier.
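
To make the two sequences above concrete, here is a minimal Monte Carlo sketch, assuming nothing but ideal polarizers with Malus' Law statistics (the helper names are just illustrative):

Code (Python):
import math, random

def measure(pol, setting):
    """Ideal polarization measurement at 'setting' (degrees), Malus statistics.
    Returns (outcome, new polarization): the photon collapses onto the
    measured axis (outcome True) or onto the perpendicular axis (False)."""
    if random.random() < math.cos(math.radians(setting - pol)) ** 2:
        return True, setting
    return False, setting + 90.0

def agreement(sequence, trials=100_000):
    """Fraction of trials where the first and last measurements in
    'sequence' (a list of polarizer angles in degrees) give the same outcome."""
    agree = 0
    for _ in range(trials):
        pol = random.uniform(0.0, 180.0)   # photon of unknown initial polarization
        outcomes = []
        for s in sequence:
            out, pol = measure(pol, s)
            outcomes.append(out)
        agree += outcomes[0] == outcomes[-1]
    return agree / trials

a, b = 0.0, 67.5
print(agreement([a, a]))      # ~1.00: Alice.a and Alice.a' always agree
print(agreement([a, b, a]))   # ~0.75: the intervening Alice.b measurement disturbs Alice.a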
 
  • #34
ttn said:
The piece you are missing -- the thing that links these two issues together -- is the EPR argument. They showed that, under the assumption of locality, and given the perfect correlations when A and B measure along the same axis, there *must exist* local hidden variables which determine the outcomes. (This can be made rigorous in terms of Bell Locality: every Bell Local theory which successfully accounts for the perfect correlation *must* be a deterministic local hidden variable theory in the sense of EPR.)

Then, Bell's Theorem is simply a further step from where EPR leaves off: *given* these local hidden variables which determine the outcomes (which remember are *required* by locality and the perfect correlations!), can the rest of the QM predictions be matched? Answer: no. That's Bell's Theorem.

If that were the argument of EPR, then Bell proved them wrong. But that wasn't their argument. Their conclusion was (almost verbatim): if QM is complete, then there is not simultaneous reality to non-commuting operators. Bell did not prove that conclusion to be wrong.
 
  • #35
ttn said:
So, unless you just arbitrarily *assume* that the one measurement doesn't disturb the later outcome, you just can't empirically access the probabilities that your inequality is about.

In other words, the measurement of Alice.b disturbs the value of Alice.a. That's exactly why you can't obtain simultaneously-believable values for a and b.

I didn't assume Alice.a and Alice.b are non-commuting measurements; I mean, they are, there isn't any question of that. It just wasn't part of my proof.

So it actually works the other way: if non-commuting measurements are not allowed in a test of the simultaneous reality of non-commuting observables, then it is impossible to prove the simultaneous reality of non-commuting observables. You might also say it is impossible to disprove it, too. Of course, exactly this type of test is used in Bell tests.

So the question becomes: do you allow non-commuting measurements in a test of the simultaneous reality of non-commuting observables if that is a test on entangled particles? Even though such a test is rejected if performed in the form of my (3)?

This is my question, and I am asking for comments. I would guess that the consensus answer would be: yes, it is OK on entangled particles Alice and Bob; but not on Alice alone.
 
  • #36
DrChinese said:
So the question becomes: do you allow non-commuting measurements in a test of the simultaneous reality of non-commuting observables if that is a test on entangled particles? Even though such a test is rejected if performed in the form of my (3)?
This is my question, and I am asking for comments. I would guess that the consensus answer would be: yes, it is OK on entangled particles Alice and Bob; but not on Alice alone.
If you mean, "does measuring Alice.a and Bob.b commute even if Alice.a and Alice.b are not commutatively observable?" then the answer depends a bit on which interpretation you chose, and I suppose, on what you mean by commute.
The problem with the definition of commute is that the order of the two measurements can be observer-dependant. So, from that perspective, the order in which the measurements occur cannot matter. On the other hand, from an information point of view, it appears to be the case that when measurement occurs at one particle, information is 'lost' at the other.
My understanding is that some people view such a pair of entangled particles as a single waveform that collapses when measurement occurs at either spatial location - which seems to indicate that the measurements are not commutative. Conversely, AFAICT Bohmian mechanics has no special properties for measurement, so it would appear that from the Bohmian point of view, the measurements are commutative.
 
  • #37
And just a couple of points for those who think I am drifting into never never land in my treatment of Bell's Theorem without Locality:

1. What else is the GHZ theorem about if it is not about the simultaneous reality of non-commuting observables? Locality is not present as an assumption in that proof either.

2. We were just discussing a few days ago in this thread a published paper which says virtually the same thing, although coming at it from a different perspective ("All quantum observables in a hidden-variables model must commute simultaneously" by James Malley). Note that the non-commuting issue is central to his paper, just as it is to EPR. (There is a connection between this and the definition of reality; I just can't formulate that exact connection yet beyond what I have in my earlier posts. :smile: )

3. And these are not the only examples of theorems that are no-go for ALL realistic theories. So even if it is still a minority opinion, there is support for this perspective.

So I am really just looking at the same issue - the role of reality and locality - from an angle that we are more familiar with - that being Bell. And don't get me wrong, I still take a mainstream oQM view of the situation.
 
Last edited:
  • #38
NateTG said:
If you mean, "does measuring Alice.a and Bob.b commute even if Alice.a and Alice.b are not commutatively observable?" then the answer depends a bit on which interpretation you chose, and I suppose, on what you mean by commute.

The problem with the definition of commute is that the order of the two measurements can be observer-dependant. So, from that perspective, the order in which the measurements occur cannot matter. On the other hand, from an information point of view, it appears to be the case that when measurement occurs at one particle, information is 'lost' at the other.

My understanding is that some people view such a pair of entangled particles as a single waveform that collapses when measurement occurs at either spatial location - which seems to indicate that the measurements are not commutative. Conversely, AFAICT Bohmian mechanics has no special properties for measurement, so it would appear that from the Bohmian point of view, the measurements are commutative.

In my mind, entangled Alice.a and Bob.b don't commute any more than Alice.a and Alice.b do. If you measure Bob.a after measuring Bob.b you have no guarantee you will get Bob.a=Alice.a.

Does anyone disagree with this interpretation?
 
  • #39
DrChinese said:
In my mind, entangled Alice.a and Bob.b don't commute any more than Alice.a and Alice.b do. If you measure Bob.a after measuring Bob.b you have no guarantee you will get Bob.a=Alice.a.
Does anyone disagree with this interpretation?

If we operate with the assumption that Bob.a and Bob.b don't commute, I don't see how that has anything to do with Bob.a and Alice.b commuting.

...
"Truely you have a dizzying intelect" - The Man in Black
 
  • #40
NateTG said:
My understanding is that some people view such a pair of entangled particles as a single wave function that collapses when measurement occurs at either spatial location - which seems to indicate that the measurements are not commutative. Conversely, AFAICT Bohmian mechanics has no special properties for measurement, so it would appear that from the Bohmian point of view, the measurements are commutative.

Ah, but Bohm's theory is explicitly non-local (and not, like OQM, in denial about it!). So, really, to even define Bohm's theory, we need to fix a preferred frame at the beginning -- this is the frame in which the "instantaneous action at a distance" occurs, i.e., involves the effect happening simultaneously with the distant cause. So then there is no longer any ambiguity about the order of the measurements. One just really did happen before the other. And whichever one happened first, caused the trajectory of the distant particle to veer off a bit from what it would otherwise have done, so the outcome of the second measurement is influenced by the performance of the first measurement.

But... of course... in a way that can't be used to send signals faster than light (even though there is definitely faster than light non-local dynamics at work!).
 
  • #41
DrChinese said:
In my mind, entangled Alice.a and Bob.b don't commute any more than Alice.a and Alice.b do. If you measure Bob.a after measuring Bob.b you have no guarantee you will get Bob.a=Alice.a.
Does anyone disagree with this interpretation?

Um, yes, absolutely I disagree. Even in regular textbook QM, the spin operators for one particle commute with the spin operators for a different particle. I mean, of course they commute -- and this doesn't even have anything to do with whether you like OQM or Bohm's theory or whatever. If there's one thing people of all these different camps can agree about, it's that Alice.a commutes with Bob.b.

I also don't understand your second sentence. But it's probably irrelevant given my disagreement with the first.
 
  • #42
DrChinese said:
If that were the argument of EPR, then Bell proved them wrong. But that wasn't their argument. Their conclusion was (almost verbatim): if QM is complete, then there is not simultaneous reality to non-commuting operators. Bell did not prove that conclusion to be wrong.

Well, this is one of those things we've argued over in the past. I'll just remind everyone of Einstein's own summary of the EPR argument (or, at any rate, of what the EPR argument was *supposed* to have been -- since, according to Einstein, Podolsky, who wrote the paper, kind of buried the main point in a bunch of distracting irrelevancies):

"By this way of looking at the matter it becomes evidence that the paradox [EPR] forces us to relinquish one of the following two assertions:
1. the description by means of the psi-function is complete [i.e., there are no hidden variables, or]
2. the real states of spatially separated objects are independent of each other [i.e., locality]
...it is possible to adhere to (2) if one regards the psi-function as the description of a (statistical) ensemble of systems (and therefore relinquishes (1)). However, this view blasts the framework of the 'orthodox quantum theory.'"

So here Einstein is clearly saying the point of EPR is that you have to either reject the completeness doctrine, or you have to accept non-locality. Or framing the same point slightly differently, the assumption of locality requires you to reject the completeness doctrine (and, in particular, accept the existence of a certain definite class of hidden variables -- just the kind, it turns out, that Bell later assumed in his theorem).

You are also repeating the *classic* misconception about how EPR relates to Bell. And it's really an elementary point of logic. EPR showed that locality --> hv's. (The assumption of locality requires the existence of certain hidden variables.) Bell showed that the existence of those hidden variables leads to a certain statement (the inequality) which has been shown to conflict with experiment. So it's correct in one sense to say that Bell proved EPR wrong -- namely, since EPR believed in locality, they accepted that they had shown the existence of the hidden variables (i.e., blasted the completeness doctrine). And Bell did indeed show that that *conclusion* is no longer viable. It's just *wrong* to believe in local hidden variables, because we now know that those variables entail Bell's inequality, which is empirically false.

But there's a different, and more relevant, sense in which Bell doesn't touch EPR: namely, Bell doesn't in any way show that the *reasoning* of EPR -- the proof that "locality entails hv's" -- is wrong. That reasoning is, was, and always will be completely valid. Locality really does require the existence of those hidden variables -- whether or not those hidden variables actually exist. So it is a really serious error (that is bound to lead to serious confusion) to say "we can just ignore the epr argument, since Bell showed they were wrong."

Let me make this as clear as possible by sketching out the logic involved:

EPR say:

Premise 1: Locality
Premise 2: Locality --> HV's
---
Conclusion: HV's

(where by "HV's" I mean the existence of the local hidden variables that determine the spin outcomes locally)

Bell says:

Premise 1: HV's
----
Conclusion: Inequality


Experiment says that "Inequality" is false. Well, that was a straight consequence of Bell's assumption "HV's", so it must be that "HV's" is false. But that was a logical consequence of EPR's two assumptions. One of those is controversial (namely "Locality --> HV's") but I think it is entirely correct. (Basically, EPR's *reasoning* was valid; they did indeed prove that Locality requires hidden variables.) So the only thing to point to, as the flawed premise that led to the empirically false statement "Inequality", is EPR's premise 1: locality.

That, in a nutshell, is why I think the violation of Bell's inequality proves that nature is nonlocal. (Specifically: violates Bell Locality.) The reasoning of EPR plays (obviously) a crucial role here, and you can't just dismiss that by saying Bell proved they were wrong -- this just equivocates on what they were wrong about. Their conclusion turns out to be untenable, yes, but their *reasoning* for it was entirely valid... hence it is the locality premise that is left as the faulty assumption which led to the contradiction with experiment.
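
Putting both halves on one line (with L = Bell Locality, H = local outcome-determining hidden variables, I = Bell's inequality holds):

$$L \Rightarrow H \ \ (\text{EPR}), \qquad H \Rightarrow I \ \ (\text{Bell}), \qquad \neg I \ \ (\text{experiment}) \quad\therefore\quad \neg H \quad\therefore\quad \neg L$$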
 
  • #43
NateTG said:
If we operate with the assumption that Bob.a and Bob.b don't commute, I don't see how that has anything to do with Bob.a and Alice.b commuting.

I am saying, for entangled Alice and Bob (which has Alice.b=Bob.b to begin with by definition):

Alice.a, Bob.b <> Alice.b, Bob.a

Just as:

Alice.a, Alice.b <> Alice.b, Alice.a

In other words, after testing Alice.b and Bob.b, I am certain to get Alice.b = Bob.b unless I do Alice.a first. I mean, we are talking about a superposition of states. I don't know what else to call it except to say that the order of the measurements changes the outcome.
 
  • #44
ttn said:
Um, yes, absolutely I disagree. Even in regular textbook QM, the spin operators for one particle commute with the spin operators for a different particle. I mean, of course they commute -- and this doesn't even have anything to do with whether you like OQM or Bohm's theory or whatever. If there's one thing people of all these different camps can agree about, it's that Alice.a commutes with Bob.b.

So you would say that applies to entangled particles too? For most people? Or would they make an exception when it comes to entangled particles? (I'm not trying to split hairs, I just want to make sure we are using lingo we agree upon. I won't use this terminology if you object.)

Keep in mind for your answer... there is absolutely nothing that requires Alice and Bob to be space-like separated. They can be in the same location in the same reference frame. We already know experimentally (Aspect, Weihs, etc.) that performing Bell tests with space-like separation does NOT affect the outcome. (Any reservations anyone has about test "loopholes" does not apply to this discussion.) So we CAN do our tests on Alice and Bob in specific sequences.
 
Last edited:
  • #45
DrChinese said:
So you would say that applies to entangled particles too? For most people? Or would they make an exception when it comes to entangled particles? (I'm not trying to split hairs, I just want to make sure we are using lingo we agree upon. I won't use this terminology if you object.)

Keep in mind for your answer... there is absolutely nothing that requires Alice and Bob to be space-like separated. They can be in the same location in the same reference frame. We already know experimentally (Aspect, Weihs, etc.) that performing Bell tests with space-like separation does NOT affect the outcome. (Any reservations anyone has about test "loopholes" does not apply to this discussion.) So we CAN do our tests on Alice and Bob in specific sequences.

Yes, it's non-controversially true that the spin operators for different particles commute. This is not affected by the particles being entangled.

I'm not sure where you're going with this, though. That the operators do or don't commute just tells you something about how OQM works. It doesn't directly imply *anything* about what affects what, in reality.
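
For anyone who wants to see this concretely, here is a minimal matrix check (the specific angles are arbitrary examples):

Code (Python):
import numpy as np

def spin(theta):
    """Spin observable along an axis at angle theta (radians) in the x-z plane."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    return np.cos(theta) * sz + np.sin(theta) * sx

I2 = np.eye(2)
A = np.kron(spin(0.0), I2)                   # Alice's operator acts only on particle 1
B = np.kron(I2, spin(np.radians(67.5)))      # Bob's operator acts only on particle 2

print(np.allclose(A @ B, B @ A))             # True: operators for different particles commute
print(np.allclose(spin(0.0) @ spin(np.radians(67.5)),
                  spin(np.radians(67.5)) @ spin(0.0)))   # False: same particle, different axes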
 
  • #46
Continuing with the argument...

Our previous result was:

To be explicit that we are talking about one photon, which we will call Alice:
(3) [corr(Alice.a, Alice.b) + noncorr(Alice.a, Alice.c) – corr(Alice.b, Alice.c)] / 2 >= 0


Internal Inconsistency of Realistic Theories
================================

It has been known for nearly 200 years that Alice.a and Alice.b have the correlation cos²(a-b) per Malus’ Law – a relationship with substantial experimental verification. This formula leads to an internal inconsistency for any realistic theory.

Where Alice has polarizer settings a=0 degrees, b=67.5 degrees, c=45 degrees:

[corr(Alice.a, Alice.b) + noncorr(Alice.a, Alice.c) – corr(Alice.b, Alice.c)] / 2 =
[cos²(a-b) + cos²(a-c) – cos²(b-c)] / 2 =
[cos²(67.5 degrees) + cos²(45 degrees) – cos²(22.5 degrees)] / 2 =
-.1036
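
The arithmetic can be checked directly (the cos2 helper below is just shorthand):

Code (Python):
import math

def cos2(deg):
    """cos^2 of an angle given in degrees."""
    return math.cos(math.radians(deg)) ** 2

a, b, c = 0.0, 67.5, 45.0
lhs = (cos2(a - b) + cos2(a - c) - cos2(b - c)) / 2
print(round(lhs, 4))   # -0.1036: negative, in conflict with (3)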

This prediction is negative and clearly conflicts with the realistic prediction (3) above, which is non-negative. Recall that a single counterexample is sufficient to invalidate any theory. Thus:

(4) No realistic theory can be internally consistent if Malus’ Law is accepted.

Is this result valid and meaningful? Absolutely it is, and here is why: The inequality (3) above demonstrates a requirement of any realistic theory, a result which is uncontroversial. So why should (4) be controversial?

The objection raised is: the application of Malus’ Law is not valid to test (3) because measurements of Alice.a and Alice.b do not commute. The order of the measurements on Alice affects the outcomes; and measurements of Alice at various settings don’t give any additional information about Alice at any single point in time.

For instance: if we measure Alice.a and then Alice.b, and then measure Alice.a a second time – which we will call Alice.a’, our general result is:

1 > corr(Alice.a, Alice.a’) > .500

We would expect it to be very close to 1 if we had really learned new information about Alice when the Alice.b measurement was performed. As EPR noted: “…a measurement however disturbs the particle and thus alters its state.” [2]
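
For reference, one way to see where that range comes from, assuming only Malus’ Law applied to the sequential measurements (a side calculation, not part of the proof): if the intervening measurement is at relative angle θ = b - a, then

$$\mathrm{corr}(\mathrm{Alice.a}, \mathrm{Alice.a'}) = \cos^4\theta + \sin^4\theta = 1 - \tfrac{1}{2}\sin^2(2\theta),$$

which is 0.75 for a = 0 degrees and b = 67.5 degrees, and lies strictly between .500 and 1 except at multiples of 45 degrees.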

Note: To rebut this objection: the reason that Alice.a and Alice.b don’t commute is precisely because our realistic assumptions do not hold. And that is exactly what we were trying to test! So we should not dismiss (4) a priori. We asked for a way to test the realistic assumptions and we got it in (3); and we demonstrated that this leads to an internal inconsistency. For the realistic assumptions to be valid, the internal relationship of Alice.a, Alice.b and Alice.c would need to be something other than that described by Malus. In addition, rejecting (4) because Alice.a and Alice.b do not commute is unjustified, because that non-commutation is a consequence of QM; and so far we have not needed to assume QM to make our proof.

While it is asserted here that (4) is valid, we will continue on as if it were not. In the next section, we will circumvent the objection presented against result (4); namely that a direct test on Alice.a and Alice.b cannot be performed in such a way as to demonstrate that Alice.a and Alice.b are simultaneously valid.
 
Last edited:
  • #47
Experimental Setup for Bell Test


Can we test (3) experimentally using a Bell test? EPR put forth an ingenious idea: if you had 2 particles that had a relationship of some known nature, perhaps you could factor that in and do a measurement on one particle to gain information about the other. They postulated that using this method, the limits of the Heisenberg Uncertainty Principle could be exceeded. At the time, there was no specific concept of entanglement. Later, Bohm discovered that a pair of particles could be theoretically created in the so-called singlet state – a form of entanglement. But no experimental setup was available at the time, and result (3) above was not yet known.

Bell saw that the singlet state could be used as a tool to extend his discovery - the requirement that realism imposes - to an experimental form. Although no direct test of (3) was possible, an indirect test was possible using this tool. For this, we require a test setup in which there is a pair of particles with identical states. This is obtainable, for example, using Type I Parametric Down Converted photons [3] – or analogously, using other types of entanglement in which the particles have orthogonal states. Such photons are in a superposition – a polarization entangled state.

Our program is that we cannot directly measure (Alice.a and Alice.b) simultaneously; nor (Alice.a and Alice.c); nor (Alice.b and Alice.c). But we can measure these indirectly using a second – “cloned” – particle. This particle is entangled with Alice, and we call it Bob. We need them to have this relationship:

(5) Alice.a = Bob.a, or generally: corr(Alice.a, Bob.a) = 1
Alice.b = Bob.b, or generally: corr(Alice.b, Bob.b) = 1
Alice.c = Bob.c, or generally: corr(Alice.c, Bob.c) = 1

This is NOT an assumption! This is an experimental requirement, because we need there to be a demonstrable relationship if we are to perform such an indirect test. There must be entanglement, and (5) must be demonstrated positively and unambiguously. If it could not be, then there would be no purpose in continuing, as we would not have a way to make an indirect observation of Alice using Bob.

Note: It may still be objected that QM does not permit us to learn more about Alice using this indirect technique, because it would violate the Heisenberg Uncertainty Principle. However, we have not so far used any element of QM in our proof, and we will not do so now either. The technique of this proof follows standard concepts for Bell tests.

Assuming we could prove (5) experimentally, we would measure one desired observable property on Alice and another property on Bob. These results will allow us to test (3) indirectly using the following form, deduced from substitutions from (5):

(6) [corr(Alice.a, Bob.b) + noncorr(Alice.a, Bob.c) – corr(Alice.b, Bob.c)] / 2 >= 0

To rule out realistic theories by experiment, we need to prove (5) true and (6) false.
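
As a numeric sketch of what this test looks like, here are the standard QM match statistics for identically polarized pairs - matches at relative angle d with probability cos²(d), mismatches with probability sin²(d) - evaluated at the same angles used earlier (this is only a prediction check, not the experimental data):

Code (Python):
import math

def cos2(deg): return math.cos(math.radians(deg)) ** 2
def sin2(deg): return math.sin(math.radians(deg)) ** 2

a, b, c = 0.0, 67.5, 45.0

# (5): same-angle correlations are perfect
print(cos2(a - a), cos2(b - b), cos2(c - c))         # 1.0 1.0 1.0

# (6): note sin2(a - c) = cos2(a - c) at c = 45 degrees, matching the
# substitution used in the earlier post
lhs_6 = (cos2(a - b) + sin2(a - c) - cos2(b - c)) / 2
print(round(lhs_6, 4))                                # -0.1036 < 0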
 
  • #48
DrChinese said:
To rule out realistic theories by experiment, we need to prove (5) true and (6) false.

You're getting closer. But one can make the same objection to this "indirect" method of testing your inequality, as we made originally to the "direct" test: you can't assume that the result of the Alice.a measurement is the same as it would have been had you not first made the Bob.b measurement (or whatever). Right? *In principle*, just like measuring Alice.a might "screw up" a subsequent measurement of Alice.b (in the sense that that second measurement won't now give the same outcome as it would have given if you hadn't first made the first measurement), so Bob's measurement over there might "screw up" the value that is obtained from Alice's measurement over here. Right? In principle, this is possible.

Of course, some sort of *locality* assumption would eliminate this objection. But then it'd be clear that what's being tested by a "Bell test" is a conjunction of hidden variable and locality assumptions.

...and please also don't forget that the hidden variables *follow* from locality (EPR).
 
  • #49
ttn said:
You're getting closer. But one can make the same objection to this "indirect" method of testing your inequality, as we made originally to the "direct" test: you can't assume that the result of the Alice.a measurement is the same as it would have been had you not first made the Bob.b measurement (or whatever). Right?

I was hoping you would appreciate that (5) must be true, before we continue to prove (6) false. So I think that point is clear - in fact you mentioned (5) in your own words in an earlier post today.

Now, here is the rub... (6) is fine unless you have some reason to suspect there is a problem in the experimental execution of it. After all, a confirmation of (5) also demonstrates there is no improper skewing. Unless of course, the improper skewing only shows up when we test at specific different angle settings in a very specific way, and disappears completely when the form is (5). But we will save that to the last part. It is clear that if we can prove (5) true then the last step is to prove (6) false. Agreeing that we are at this point is not the same as conceding anything at this point. Of course, we are bound to come to a point at which we will disagree, but this shouldn't be it. The test setup as I have it is fine if it can be executed to your satisfaction.

Please note how far we have come without any reference to locality. That's a pretty long way, considering locality is supposedly all EPR & Bell are about. We really have Bell's Inequality at this point, even if we haven't tested it. And all I have assumed to get here is a very specific form of realism.
 
  • #50
DrChinese said:
I was hoping you would appreciate that (5) must be true, before we continue to prove (6) false. So I think that point is clear - in fact you mentioned (5) in your own words in an earlier post today.

Yes, I agree that (5) is an empirically verified fact. When Alice and Bob measure along the same axis, the results are perfectly correlated.


Now, here is the rub... (6) is fine unless you have some reason to suspect there is a problem in the experimental execution of it. After all, a confirmation of (5) also demonstrates there is no improper skewing. Unless of course, the improper skewing only shows up when we test at specific different angle settings in a very specific way, and disappears completely when the form is (5). But we will save that to the last part. It is clear that if we can prove (5) true then the last step is to prove (6) false. Agreeing that we are at this point is not the same as conceding anything at this point. Of course, we are bound to come to a point at which we will disagree, but this shouldn't be it. The test setup as I have it is fine if it can be executed to your satisfaction.

How does (5) being empirically proved, in any way "demonstrate that there is no improper skewing?" The objection was that Bob's measurement might change Alice's outcome (from what it would have been to something else). I don't see that (5) has any bearing on that at all -- for all we know, the only reason (5) is *true* is that the earlier measurement "skews" the later one in such a way that we observe perfect correlation.

In short, you seem to be saying that what I would call the locality assumption (the outcome on one side is independent of what's done on the other) is somehow a consequence of (5) or something like it. I don't see that *at all*.


Please note how far we have come without any reference to locality. That's a pretty long way, considering locality is supposedly all EPR & Bell are about. We really have Bell's Inequality at this point, even if we haven't tested it. And all I have assumed to get here is a very specific form of realism.

Well, you've written a lot of words. (So have I!) But I really don't think we've gotten anywhere here. You wrote an inequality that is untestable and called it a "Bell inequality". Then you said that you can get from your inequality to the Bell inequality if one makes the test "indirect" by measuring each of two entangled particles once (rather than measuring a single particle twice). But this only *works* as an indirect measurement of the thing your original inequality was about, if you assume that the one measurement doesn't affect the other. And the only way to *justify* such an assumption is to cite the locality principle. So what have we really got? As far as I can tell, we have nothing new: either it's just Bell's Theorem revisited (not that that's a bad thing) or it's some empty and pointless and untestable thing that nobody cares about.
 