# B Simple proof of Bell's theorem

1. Nov 22, 2016

### jeremyfiennes

The thread I wanted to post my question on got closed. Recapitulating:

The best (simplest) account I have found to date for the Bell inequality (SPOT stands for Single Photon Orientation Tester):
Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, "errors" are generated and the sequences contain a certain number of mismatches. How these "errors" might be generated is the gist of this proof.

Step One: Start by aligning both SPOT detectors. No errors are observed.

Step Two: Tilt the A detector till errors reach 25%. This occurs at a mutual misalignment of 30 degrees.

Step Three: Return the A detector to its original position (100% match). Now tilt the B detector in the opposite direction till errors reach 25%. This occurs at a mutual misalignment of -30 degrees.

Step Four: Return the B detector to its original position (100% match). Now tilt detector A by +30 degrees and detector B by -30 degrees, so that the combined angle between them is 60 degrees. What is now the expected mismatch between the two binary code sequences?

We assume, following John Bell's lead, that REALITY IS LOCAL. Assuming a local reality means that, for each A photon, whatever hidden mechanism determines the output of Miss A's SPOT detector, the operation of that mechanism cannot depend on the setting of Mr B's distant detector. In other words, in a local world, any changes that occur in Miss A's coded message when she rotates her SPOT detector are caused by her actions alone. And the same goes for Mr B: the locality assumption means that any changes that appear in coded sequence B when Mr B rotates his SPOT detector are caused only by his actions and have nothing to do with how Miss A decided to rotate her SPOT detector. So with this restriction in place (the assumption that reality is local), let's calculate the expected mismatch at 60 degrees.
Starting with two completely identical binary messages, if A's 30 degree turn introduces a 25% mismatch and B's 30 degree turn introduces a 25% mismatch, then the total mismatch (when both are turned) can be at most 50%. In fact the mismatch should be less than 50% because if the two errors happen to occur on the same photon, a mismatch is converted to a match. Thus, simple arithmetic and the assumption that Reality is Local leads one to confidently predict that the code mismatch at 60 degrees must be less than 50%. However both theory and experiment show that the mismatch at 60 degrees is 75%. The code mismatch is 25% greater than can be accounted for by independent error generation in each detector. Therefore the locality assumption is false. Reality must be non-local.
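The arithmetic in the quoted argument can be checked directly. Here is a minimal sketch, assuming the standard quantum-mechanical mismatch law sin²(θ) for this photon setup (the law that reproduces the 25% and 75% figures quoted above):

```python
import math

def mismatch(theta_deg):
    """Mismatch rate predicted by quantum mechanics for a relative
    detector angle of theta_deg (assumed sin^2 law for this setup)."""
    return math.sin(math.radians(theta_deg)) ** 2

# Steps Two and Three: a single 30-degree tilt gives 25% errors.
tilt_a = mismatch(30)            # 0.25
tilt_b = mismatch(30)            # 0.25

# Local-reality bound for Step Four: errors can at most add up.
local_bound = tilt_a + tilt_b    # 0.50

# Quantum prediction at the combined 60-degree misalignment.
quantum = mismatch(60)           # 0.75, exceeding the local bound
```

The 0.75 at 60 degrees is exactly the "25% greater than can be accounted for" excess in the text.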

Great. Finally an explanation of Bell's theorem that even I can understand! My question relates to the following part: "Imagine that each random sequence that comes out of the SPOT detectors is a coded message. When both SPOT detectors are aligned, these messages are exactly the same. When the detectors are misaligned, 'errors' are generated and the sequences contain a certain number of mismatches." A "mismatch", however, would be a mismatch with respect to the code emitted by the other detector, implying a communication between the two. Does this not violate their independence?
Thanks.

2. Nov 22, 2016

### Staff: Mentor

The hypothesis is that each detector produces its output by randomly flipping bits in the input that it receives from the central source, without communicating with the other detector. Then, after the fact, we take the output of the two detectors and compare them - only then are we counting the mismatches.
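This bit-flipping picture is easy to simulate. A minimal Monte Carlo sketch (the 25% flip probability is taken from the 30-degree figures in the quoted proof; the variable names are mine):

```python
import random

random.seed(0)
N = 100_000

# Shared message from the central source (the local hidden variable).
source = [random.randint(0, 1) for _ in range(N)]

def detect(bits, flip_prob):
    """A detector that independently flips each bit with a fixed
    probability, without reference to the other detector."""
    return [b ^ (random.random() < flip_prob) for b in bits]

a_out = detect(source, 0.25)   # A tilted +30 degrees: 25% flips
b_out = detect(source, 0.25)   # B tilted -30 degrees: 25% flips

mismatch = sum(a != b for a, b in zip(a_out, b_out)) / N
# Expected value: 0.25 + 0.25 - 2 * 0.25 * 0.25 = 0.375, below the
# 50% bound (coinciding flips convert a mismatch back into a match).
```

Note that the simulated mismatch lands near 37.5%, not at the 50% ceiling, because the two independent error processes sometimes hit the same bit.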

3. Nov 23, 2016

### jeremyfiennes

Thanks. I hadn't got the "central source" bit. But there still seems to be a connection. In step 2, Miss A tilts her detector, and as she does so creates errors by randomly flipping bits. But these are "errors" only on the assumption that detector B is still receiving the same input as she is from the central source. The two detectors are effectively linked via the central source, and are not completely independent.

4. Nov 23, 2016

### Staff: Mentor

That's the whole point of the exercise. We're showing that:

IF
1) The two detectors receive the same inputs; AND
2) The number of bits flipped by a detector depends only on the angle of that detector relative to the source but not the angle of the other detector;
THEN
there will be a limit on how different their outputs can be.

#2 is the independence requirement.
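The IF/THEN claim can even be verified exhaustively for short messages: over every possible pair of independent flip patterns, the output mismatch never exceeds the sum of the two individual flip rates. A small brute-force sketch (message length and names are my own choices for illustration):

```python
from itertools import product

def frac(bits):
    """Fraction of 1s in a bit pattern."""
    return sum(bits) / len(bits)

n = 6  # message length, small enough to enumerate all patterns
ok = True
for flips_a in product([0, 1], repeat=n):       # every flip pattern for A
    for flips_b in product([0, 1], repeat=n):   # every flip pattern for B
        # A bit mismatches iff exactly one of the detectors flipped it.
        out_mismatch = frac([x ^ y for x, y in zip(flips_a, flips_b)])
        if out_mismatch > frac(flips_a) + frac(flips_b) + 1e-12:
            ok = False
# ok stays True: assumptions 1) and 2) cap the mismatch at the sum.
```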

5. Nov 23, 2016

### jeremyfiennes

Thanks. That seems clear. I was just about to say that this had resolved it, but a further point arose in my mind. The local theory predicts a maximum mismatch of 50%, whereas measurement gives 75%, i.e. a 25% correlation. But shouldn't entanglement cause a higher correlation than expected on a local theory, and not a lower one?

6. Nov 23, 2016

### Staff: Mentor

That depends on whether you've initially entangled the particles in such a way that measurements on the same axis are expected to produce the same result or opposite results. In the simplified model used in this "proof" (it's not really a proof, it's a pedagogical example) that would correspond to whether the two messages are exactly identical or exact inverses ("one's complement", in computer science terms) of one another, and it's easier to explain if we take the two messages to be identical.

In actual experiments, we most often use photons entangled in such a way that both photons will always pass through filters that are 90 degrees apart, only one will ever pass through filters that are 0 degrees or 180 degrees apart, and it's 50-50 random at both filters when they are 45 degrees apart. For spin-entangled particles (which are seldom used because they're harder to work with than photons), they are usually entangled in such a way that measurements along the same axis always disagree, and if the detectors are 90 degrees apart they will be 50-50 random. We have observed near-infinite confusion when people switch from one model to another in mid-explanation.

7. Nov 23, 2016

### DrChinese

It doesn't work that way. The entangled statistics can predict either higher OR lower correlations than a local realistic theory might. At some points they can even be equal. It is strictly a function of the difference [theta] in the choice of measurement angles, the formula being cos^2(theta). That formula won't work for a local realistic theory.
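To see the "higher OR lower" point concretely, one can compare the quantum match rate cos²(θ) against a simple linear local-realistic straw model (the linear fall-off is my own illustrative assumption, not DrChinese's formula):

```python
import math

def qm_match(theta_deg):
    """Quantum prediction for the match rate: cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

def linear_match(theta_deg):
    """Illustrative local-realistic straw model: the match rate falls
    linearly from 1 at 0 degrees to 0 at 90 degrees (an assumption)."""
    return 1 - theta_deg / 90

for theta in (0, 30, 45, 60, 90):
    print(theta, round(qm_match(theta), 3), round(linear_match(theta), 3))
# At 30 degrees QM predicts MORE matches than the straw model,
# at 60 degrees FEWER, and at 0, 45 and 90 degrees the two agree.
```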

8. Nov 23, 2016

### jeremyfiennes

Thanks both of you. That fills up my brain capacity for the moment. I need time to brood on it. I like the maxim of not changing models in mid explanation.

9. Nov 24, 2016

### zonde

A mismatch of 50% is no correlation at all, i.e. half of the pairs are the same and the other half are different. A 75% mismatch is 50% anticorrelation, i.e. half of the pairs can be viewed as random and half as being opposite. So a 75% mismatch gives more certainty than a 50% mismatch.
Let me illustrate this with binary strings:
50% mismatch:
A: 10101100
B: 11001010
C: sxxssxxs (the same number of matches "s" as mismatches "x")

75% mismatch:
A: 1010 0101
B: 1100 1010
C: sxxs xxxx (the same number of matches "s" as mismatches "x" in first half and only mismatches "x" in second half)
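The marking scheme above is easy to automate. A small helper (the function name is mine, not zonde's) that reproduces both examples:

```python
def compare(a, b):
    """Mark each position 's' (same) or 'x' (mismatch), as in the post,
    and return the marks together with the mismatch fraction."""
    marks = ''.join('s' if x == y else 'x' for x, y in zip(a, b))
    return marks, marks.count('x') / len(marks)

print(compare("10101100", "11001010"))  # ('sxxssxxs', 0.5)
print(compare("10100101", "11001010"))  # ('sxxsxxxx', 0.75)
```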

10. Nov 24, 2016

### jeremyfiennes

Ok, so in this simplified Bell model, at the ±30° position each detector sends out a signal with 25% random mismatches generated by itself, and receives a signal with 25% random mismatches generated by the other, giving an expected overall mismatch at each detector of at most 50%. In fact 75% is measured, which the local realist theory cannot explain. How does a non-realist or non-local theory explain it?

11. Nov 24, 2016

### Staff: Mentor

We derived the 50% limit by making two assumptions.
1) The two detectors receive the same inputs; AND
2) The number of bits flipped by a detector depends only on the angle of that detector relative to the source but not the angle of the other detector.

When we observe a 75% mismatch, we know that at least one of those two assumptions must be false. #1 is (under the conditions of this toy model) realism, and #2 is locality, so we know that any theory that accurately describes this situation must be either non-local or non-realistic (or both).

Bell's theorem is not intended to explain the results. It's not giving us a theory that explains the experimental observations, it's telling us that any theory that correctly predicts these results cannot be local and realistic.

Last edited: Nov 24, 2016
12. Nov 24, 2016

### zonde

The QM formalism describes entangled pairs with a single mathematical object, but it does not give predictions for individual detections.
So if you want to speak about individual detections, you have to examine interpretations of QM. For a non-local realist interpretation you can look at Bohmian mechanics.
I am unsure about non-realist explanations. Such an explanation would have to take measurements as not being factual (rather extreme, to me). I would suggest looking at quantum decoherence (though it is not considered an interpretation).

13. Nov 27, 2016

### jeremyfiennes

Thanks. A small one to be going on with while I read up: are counterfactual definiteness and realism the same thing?

14. Nov 27, 2016

### jeremyfiennes

And in the simple Bell model, the common source for both detectors corresponds to locality; and the fixed relation between the detector angle and the mismatches generated corresponds to realism?

15. Nov 28, 2016

### zonde

It seems that "counterfactual definiteness" and "realism" in quantum mechanics contexts are used with the same meaning: that properties of particles exist independently of measurements.
But outside QM, "counterfactual reasoning" generally means asking "what if?" type questions, while "realism" means that there exists a reality independent of our observations and models. With that philosophical meaning, "realism" justifies the scientific method, as we can test our models against reality (by performing experiments).
No. You can take the locality assumption given by Nugatory in post #11: "2) The number of bits flipped by a detector depends only on the angle of that detector relative to the source but not the angle of the other detector." Alternatively, we can say that the measurements of Alice and Bob are independent.
The common source is more like a given. Basically, everything you need to get the right answer for step one you have to take as given, and without some source of entangled particles you fail at step one.

Returning to the assumption that properties of particles exist independently of measurements: I would like to point out that in this simple proof, properties of particles do not appear anywhere in the argument. However, it relies on "counterfactual reasoning" in its common sense, as it asks "what if?" type questions (in steps two to four).

16. Nov 28, 2016

### jeremyfiennes

Thanks. I'm realizing that my main problem is to really "get" the meaning of the terms. The worst, "counterfactual definiteness", has thankfully now gone (the guy who invented it should be shot). So locality in this simplified case is, in Nugatory's words, that "the number of bits flipped by a detector depends only on the angle of that detector relative to the source, but not the angle of the other detector". It is called "locality" because the other detector could be so far away that any dependence effect would have to travel faster than light (?). "Realism" would then be that the total mismatch cannot exceed the sum of the mismatches of the individual detectors?

17. Nov 28, 2016

### forcefield

To argue about locality, don't you need the complementary experiment, where the angles are changed while the particles are assumed to be on their way to the detectors?

18. Nov 28, 2016

### DrChinese

Realism can be taken to mean several different things. In the Bell proof, usually it is the idea that observer Alice, by her choice of measurement setting, does not influence the outcome that Bob sees (and vice versa). Mathematically, that is usually expressed as the independence of the functions that determine the outcomes for Alice and Bob. Therefore you have a product state with settings a and b. See Bell's (2). If it is independent, then settings of {a,c} or {b,c} would likewise be independent product states. That allows one to consider combinations of {a,b,c} even though all 3 could not be measured simultaneously. All of that together is realism.

19. Nov 28, 2016

### Staff: Mentor

Yes, but see below.
In the context of this simplified toy model, "realism" is an assumption so basic that it almost goes unstated.

We could reason as if the messages are actual physical pieces of paper, the source is a printer connected to a random number generator (and the detectors might be slightly buggy character-recognition scanners). The key point is that when the messages leave the source, each bit in the message is established by physical properties (in this case, where on the paper the toner is deposited) of the message; without an assumption of that sort there's no original/unflipped value, and so no sensible way of interpreting the mismatches as the result of random bit flips. That assumption is (loosely speaking, and within the context of this toy model) "realism"; you need it to derive the conclusion that the total mismatch cannot exceed the sum of the individual mismatches.

But there's only so far you should carry this line of thinking and this toy model. You can drive yourself mad trying to define precisely what you mean by "realism" and "locality", as these are natural language words and they get all squishy under pressure. Bell's actual proof is done with mathematical statements about the probability distributions of the factors affecting the measurement results, and these are much less squishy. No matter how you interpret words like "locality" and "realism", there is an entire class of theories that are excluded by Bell's theorem because they imply probability distributions that cannot produce the observed experimental results.

Last edited: Nov 28, 2016
20. Nov 28, 2016

### DrChinese

That's done if you want to test whether there is some (currently) unknown signaling or other mechanism in place (often called the locality loophole). This test has been performed a number of times, and there is no sign of such a mechanism. If there is such a mechanism, it must be FTL.