Entanglement: spooky action at a distance

  • Thread starter: Dragonfall
  • Tags: Entanglement

Summary
Entanglement, often referred to as "spooky action at a distance," is explained through Bell's Theorem, which shows that the outcomes of measurements on entangled particles are correlated in a way that defies the notion of independent randomness. The correlation follows a specific formula supported by experimental evidence, rejecting simpler models that suggest random outcomes. The design of EPR-Bell tests involves synchronized detection events that create interdependencies between measurements, which do not imply faster-than-light (FTL) communication. Discussions also highlight that the correlations arise from shared properties at the quantum level rather than any FTL influence or random pairing. Overall, the consensus is that entanglement does not facilitate instantaneous information transfer, as no physical evidence supports FTL transmissions.
  • #31


vanesch said:
If they were analysing the same thing, Bell's arithmetic inequalities should hold.
The inequalities are violated because they require specific (pre-filtration) values for the properties that are being related. However, such specific values aren't supplied by the quantum theory because they aren't supplied via experiment. Anyway, such specific values are superfluous and irrelevant with regard to calculating the probable results of EPR-Bell tests, since the values of these common properties are assumed to be varying randomly from emitted pair to emitted pair.

So, a common emission cause can be assumed (and is of course necessary for a local understanding of the results) as long as no specific value is given to the identical properties (the hidden variables) incident on the polarizer.

To reiterate, it doesn't matter what the value of the common property is, just that it be the same at both ends wrt any given coincidence interval.
 
  • #32


Thomas, recently certain non-local models have been ruled out as well...
 
  • #33


"Superdeterminism" has been mentioned a few times, also called "conspiracy", particularly by Bell in "Free variables and local causality". Note that both are pejorative terms. Note also, however, that only deterministic evolution of probabilities and correlations, etc. is required, not deterministic evolution of particle properties. The trouble with ruling out classical models with this sort of property is that quantum theory applied to a whole experimental apparatus --- instead of only to the two particle quantum state that is supposed to cause the apparatus to register synchronized discrete events --- predicts the same type of deterministic evolution of probabilities and correlations of the whole experimental apparatus as a classical model requires. So, if superdeterminism or conspiracy is not acceptable, quantum theory is not acceptable (but of course quantum theory is acceptable, ...). See my "Bell inequalities for random fields", arXiv:cond-mat/0403692, published as J. Phys. A: Math. Gen. 39 (2006) 7441-7455, for a discussion.

What we have to let go of is the idea that the individual events are caused by two particles and their (pre-existing) properties. The no-go theorems require assumptions that are very hard to let go of for classical particles but do not hold at all for a classical field theory (more is needed, however; probabilities over field theories are mathematically not easily well-defined, requiring us to use what are called continuous random fields; see the above reference).

Although this is superficially a conceptually simple way to think about experiments that violate Bell inequalities, a great deal of subtlety is required. Bell inequalities are not confusing for no reason. In particular, the experimental apparatus has to be modeled to allow this point of view, not just a pair of particles. This is a heavy enough cost that for practical purposes it's best to come to an understanding with quantum theory. Furthermore, we have to think of an experimental apparatus that violates Bell inequalities as in a coarse-grained equilibrium (despite the discrete events that are clearly a signature of fine-grained nonequilibrium). For other experiments, at least, we also have to explicitly model quantum fluctuations in a classical formalism, but quantum fluctuations are not essential to understanding the violation of Bell inequalities. If we adopt all these qualifications, measurements of the experimental settings of polarization direction and the timings of measurement events (in experiments such as that by Weihs, arXiv:quant-ph/9810080) are compatible, so they as much form the basis for classical models as for quantum mechanical models.

There are, however, very many other ways of reconciling oneself to the violation of Bell inequalities in the extensive literature. Pick one you like if at all possible; the alternative of rolling your own won't be easy.
 
  • #34


Sadly I really think these qualitative discussions are not very helpful. This is one of those instances where it makes sense only to talk about the mathematics.

Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates his theorem, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa. One explanation is a common cause, another is non-local communication. Both are equally plausible at this juncture.
 
  • #35


ThomasT said:
Bell's theorem would have us expecting that the intensity of light transmitted via crossed polarizers should vary as a linear function of the angular difference between the polarizers.

Not at all. Bell's theorem states that a set of correlations between measurements, where the correlations are due to a common origin, must satisfy certain arithmetic relationships.
Bell's theorem doesn't state anything about photons, polarizations, optics, quantum mechanics or whatever. It simply states something about the possible correlations that can be caused by a common cause. It could have been stated even if there had never been any quantum mechanics. Only, with classical physics, it would have sounded almost trivial.

In other words, it is a property of a set of correlations, wherever they come from, if they are assumed to come from a common cause. Bell's theorem is hence something that applies to sets of correlations. It can be formulated more generally, but in its simplest form, applied to a specific set of correlations, it goes like this:

Suppose that we can ask 6 questions about something. The questions are called A, B, C, D, E and F. To each question, one can have an answer "yes", or "no".
As I said, Bell's theorem is more general than this, but this can be a specific application of it.

We assume that we have a series of such objects, and we are going to consider that to each of the potential questions, each object has a potential answer.

Now, consider that we pick one question from the set {A,B,C}, and one question from the set {D,E,F}. That means that we have two questions, and hence two answers.

There are 9 possible sets of questions:

(A,D)
(A,E)
(A,F)
(B,D)
(B,E)
(B,F)
(C,D)
(C,E)
(C,F)

Let us call a generic set of questions: (X,Y) (here, X stands for A, B or C, and Y stands for D,E or F).

We are going to look at the statistics of the answers to the questions (X,Y) for our series of objects, and we want to determine:
the average fraction of time that we have "yes,yes",
the average fraction of time that we have "yes,no"
the average fraction of time that we have "no,yes"
the average fraction of time that we have "no,no"

We take it that that average is the "population average".

In fact, we even look at a simpler statistic: the average fraction of time that we have the SAME answer (yesyes, or nono). We call this: the correlation of (X,Y) of the population.

We write it as: C(X,Y). If C(X,Y) = 1, that means that the answers to the questions X and Y are ALL of the kind: yesyes, or nono. Never we find a yesno or a noyes.
If C(X,Y) = 0, then it's the opposite: we never find yesyes, or nono.

If C(X,Y) = 0.5, that means that we have as many "equal" as "unequal" answers.

We have 9 possible combinations (X,Y), and hence we have 9 different values C(X,Y):
we have a C(A,D), we have a C(A,E), etc...

Suppose now that C(A,D) = 1, C(B,E) = 1, and C(C,F) = 1.

That means that each time we ask A, and we have yes, then if we measure D, we also have yes, and each time we ask A and we have no, then if we ask D, we have no.

So the answer to the question A is the same as the answer to the question D.
Same for B and E, and same for C and F.

So in a way, you can say that D is the same question as A, and E is the same question as B.

We can hence consider the six remaining combinations, and rewrite A for D etc...

C(A,B)
C(A,C)
C(B,A)
C(B,C)
C(C,A)
C(C,B)

We also suppose a symmetrical situation: C(A,B) = C(B,A).

We now have 3 numbers left: C(A,B), C(B,C) and C(A,C).

Well, Bell's theorem asserts: C(A,B) + C(B,C) + C(A,C) >= 1.

Another way of stating this is that if you look at it as:

C(A,B) + C(B,C) > = 1 - C(A,C)

It means that you cannot find a property B which is sufficiently anti-correlated with both A and C, without A and C being somehow correlated. In other words, there's a lower limit to how much 3 properties can be anti-correlated amongst themselves. It's almost trivial if you think about it: if A is the opposite of B, and B is the opposite of C, then A cannot be the opposite of C.

Let's indeed take A is not B, and B is not C: then C(A,B) = 0, and C(B,C) = 0. We then find that 0 + 0 must be at least 1 - C(A,C), hence C(A,C) = 1. A must be the same as C.

Let's take A and B uncorrelated, and B and C uncorrelated. That means that C(A,B) = 0.5 and C(B,C) = 0.5. In this case, there's no requirement on C(A,C) which can go from 0 to 1.

So this is a "trivial" property of the correlations of the answers to the questions A, B and C one can ask about something.

Well, it is this which is violated in quantum mechanics.

C(X,Y) is given by cos^2(theta_x - theta_y), and so we have:

cos^2(th_xy) + cos^2(th_yz) + cos^2(th_xy + th_yz) is not always >= 1.

Indeed, take angle differences of 1 rad, 1 rad and hence 2 rad:

The sum is 0.757..., which is less than 1.
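This counting argument lends itself to a quick numerical check. The sketch below (in Python) first confirms the pigeonhole step for predetermined yes/no answers, then evaluates the quantum expression with angles 0, 1 and 2 rad, so that the pairwise differences are the 1 rad, 1 rad and 2 rad used above:

```python
import itertools
import math

# Any population of objects carrying predetermined yes/no answers to A, B, C
# satisfies C(A,B) + C(B,C) + C(A,C) >= 1, because for each single object
# at least two of the three binary answers must agree (pigeonhole):
for a, b, c in itertools.product([0, 1], repeat=3):
    agreements = (a == b) + (b == c) + (a == c)
    assert agreements >= 1

# Quantum prediction: C(X,Y) = cos^2(theta_x - theta_y).
th = [0.0, 1.0, 2.0]  # angle differences of 1 rad, 1 rad, 2 rad
s = (math.cos(th[0] - th[1]) ** 2
     + math.cos(th[1] - th[2]) ** 2
     + math.cos(th[0] - th[2]) ** 2)
print(round(s, 3))  # 0.757 -- below 1, so the inequality is violated
```

Averaging the agreement indicator over any population of predetermined answers can only give a sum of at least 1, which is exactly what the quantum value 0.757 escapes.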
 
  • #36


peter0302 said:
Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates his theorem, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa. One explanation is a common cause, another is non-local communication. Both are equally plausible at this juncture.
In the EPR-Bell experiments, the individual probabilities at A and B aren't being considered, are they? That is, it's not the relationship between A and B that's being considered, but the relationship between (A,B) and Theta (the angular difference between the crossed polarizers) that's being considered. So, any assumptions about the relationship between A and B are irrelevant in the global experimental context.

The rate of individual detection at A and at B remains the same, and the data sequences are always random. So, viewed individually, the probability of detection at A is always independent of the probability of detection at B.

Bell's theorem assumes statistical independence between A and B. Viewed individually this is correct. Viewed globally it's incorrect, because a detection at A affects the sample space at B.

We know that the assumption of statistical independence in the EPR-Bell global experimental context is incorrect -- whether it's A wrt B, or (A,B) wrt Theta.

There are two common causes for the correlated data, (1) the global experimental design, and (2) whatever is happening in the submicroscopic quantum world (common cause interactions or superluminal transmissions?).

I think that the assumption of common cause interactions in the submicroscopic quantum world as the deep cause of quantum entanglement and EPR-Bell correlations makes more sense because there's simply no physical evidence for superluminal transmissions in the history of quantum or classical experimentation.

Classical entanglement can be used as a basis for understanding quantum entanglement. Otherwise, there's no real understanding -- just some preparations and some data and how they're related.

If the deep cause of the correlations is due to common cause interactions, then what's wrong with Bell's ansatz?

If there's nothing wrong with Bell's ansatz, then I don't see any alternative but to accept superluminal transmissions as a fact of nature. The problem with this is that it's a fact that we'll never be able to physically detect or verify.
 
  • #37


vanesch said:
Not at all. Bell's theorem states that a set of correlations between measurements, where the correlations are due to a common origin, must satisfy certain arithmetic relationships.
Bell's theorem doesn't state anything about photons, polarizations, optics, quantum mechanics or whatever. It simply states something about the possible correlations that can be caused by a common cause. It could have been stated even if there had never been any quantum mechanics. Only, with classical physics, it would have sounded almost trivial.

In other words, it is a property of a set of correlations, wherever they come from, if they are assumed to come from a common cause. Bell's theorem is hence something that applies to sets of correlations.
OK, I've got to think about this some more. :smile:
 
  • #38


In the EPR-Bell experiments, the individual probabilities at A and B aren't being considered, are they? That is, it's not the relationship between A and B that's being considered, but the relationship between (A,B) and Theta (the angular difference between the crossed polarizers) that's being considered. So, any assumptions about the relationship between A and B are irrelevant in the global experimental context.

The rate of individual detection at A and at B remains the same, and the data sequences are always random. So, viewed individually, the probability of detection at A is always independent of the probability of detection at B.

Bell's theorem assumes statistical independence between A and B. Viewed individually this is correct. Viewed globally it's incorrect, because a detection at A affects the sample space at B.
It doesn't just assume statistical independence. It assumes causal independence as well.

It's in the _derivation_ of Bell's theorem that this is obvious (not the application).

Here is a very crude derivation.

Let A, B, and C represent three spatially separated events (and, by a slight abuse of notation, their probabilities). Let a, b, and c denote the corresponding event not happening.

If local realism is true, then the outcome of A, B, and C are causally independent. They may indeed share a common cause, but strictly speaking, they do not depend on one another.

Mathematically, we write this as:

A = AB + Ab

In other words, the odds of A happening are the odds of A and B happening, plus the odds of A and not-B happening. A does not depend on B. A cannot be a non-trivial function of B. They might of course both be a function of some third event - the root cause - but there should be no non-trivial way to write A in terms of just B.

In other words:
A = AB + Ab
b = 1-B
A = AB + A(1-B)
A = AB + A - AB
A = A

Ok?

So if local reality holds, that general axiom (A=AB + Ab) also holds. This is the key assumption of Bell's theorem. Really it's the only important one.

Now, let's add a third event and we can make some statements:

aB = aBC + aBc
aC = aBC + abC
bC = AbC + abC   (see what we're doing? We're taking any two arbitrary events and using the local realism assumption to make additional statements)

Now we can write these as:
aBC = aB - aBc
abC = bC - AbC

aC = aB - aBc + abC
aC = aB - aBc + bC - AbC

or

aC <= aB + bC


That's the easiest way to derive the inequality. So make A, B, and C the odds of a photon being detected at different polarizer angles. Run through the experiment several times and note independently the number of hits. They'll violate the inequality.

Ok? So in order to understand what Bell's theorem actually says qualitatively, you need to understand what it says mathematically, or rather, what it _assumes_ mathematically. So the assumption that:

A = AB + Ab

Does not hold true for QM. That assumption was that the outcome of B is mathematically independent of the outcome of A. Again, they can have a common root cause, but what happens at A should have no bearing on what happens at B. You should not _need_ to know what happened at A in order to guess the odds of something happening at B.

Clear now? :) Now interpret away.

Incidentally, does anyone else think it's cool how Bell's theorem mirrors the triangle theorem that says no side can be greater than the sum of the other two sides?
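The derivation above can be checked by brute force: the inequality holds pointwise for predetermined outcomes, while the standard quantum probabilities break it. A sketch in Python (the 0/30/60-degree settings and the 1/2*sin^2 mismatch probability are the usual textbook choices, not numbers from this thread):

```python
import itertools
import math

# Local realism: each pair carries predetermined pass (True) / no-pass
# (False) outcomes for all three settings.  For every such assignment
# the event (a and C) implies (a and B) or (b and C), so population
# frequencies must obey  P(aC) <= P(aB) + P(bC).
for A, B, C in itertools.product([False, True], repeat=3):
    aC = (not A) and C
    aB = (not A) and B
    bC = (not B) and C
    assert aC <= aB + bC  # holds for all 8 predetermined assignments

# Quantum prediction for polarization-entangled photons:
# P(no-pass at x, pass at y) = 1/2 * sin^2(x - y).
rad = math.radians
p_aC = 0.5 * math.sin(rad(0) - rad(60)) ** 2   # settings 0 and 60 deg
p_aB = 0.5 * math.sin(rad(0) - rad(30)) ** 2   # settings 0 and 30 deg
p_bC = 0.5 * math.sin(rad(30) - rad(60)) ** 2  # settings 30 and 60 deg
print(p_aC > p_aB + p_bC)  # True -- QM breaks the inequality
```

Here 0.375 > 0.125 + 0.125, so no assignment of predetermined outcomes can reproduce the quantum frequencies at these three settings.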
 
Last edited:
  • #39


peter0302 said:
Mathematically, Bell's theorem assumes that the probability of detection at A is independent of the probability of detection at B. QM violates his theorem, and therefore violates this assumption. Any interpretive framework of QM must therefore account for the fact that the probability of detection at A is, in fact, dependent on detection at B, and vice versa.

I want to add to what Vanesch and ThomasT have said on this. Primarily, and not meaning to be blunt, the statement is wrong. What Bell's Theorem says is that:

IF you have any theory that respects both locality and realism, THEN you cannot end up with predictions identical to QM.

There is nothing WHATSOEVER that Bell's Theorem states about QM itself. Therefore, there is no added burden on an interpretive framework for QM. This is a common error in the understanding of Bell's Theorem.

In addition, there is no experimental evidence whatsoever that the detection at A is in any way dependent on the detection at B. In a sufficiently large sample, the probability of coincidence matching is related to the relative alignment of the polarizing apparatuses. There is not much more you can deduce from experiment, and this exactly matches what QM predicts. It is not possible to determine if detection of A alters the results at B, or vice versa.
 
  • #40


peter0302 said:
You should not _need_ to know what happened at A in order to guess the odds of something happening at B.


You don't. It is constant at 50%. :)
 
  • #41


I'm not sure what you're trying to argue here.

You don't. It is constant at 50%. :)
No it's not constant. All we know is the average is 50%. I'm talking about the probability of a particular photon being detected. There's a difference.

Despite the fact that the joint correlations can be written as a function of the difference in the polarizer angles - itself a sign of mutual dependence - the detection probability for an individual photon can be written as a non-trivial function of whether or not the entangled twin passed its polarizer. And so Bell's main assumption (A = AB + Ab) doesn't hold for QM.
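Both points can be read off the standard joint probabilities for polarization-entangled photons: the marginal at B is flat at 50%, while the conditional given A's outcome is not. A sketch (the 30-degree angle difference is just an arbitrary illustration):

```python
import math

def joint(a_pass, b_pass, delta):
    """Standard QM joint probabilities for polarization-entangled
    photons; delta is the difference of the two polarizer angles."""
    if a_pass == b_pass:
        return 0.5 * math.cos(delta) ** 2
    return 0.5 * math.sin(delta) ** 2

delta = math.radians(30)

# Marginal at B: sum over A's outcomes -> always 1/2, so the raw
# detection sequence at B looks like a fair coin.
p_b = joint(True, True, delta) + joint(False, True, delta)
print(round(p_b, 3))  # 0.5

# Conditional at B given that A passed: depends on delta.
p_b_given_a = joint(True, True, delta) / (joint(True, True, delta)
                                          + joint(True, False, delta))
print(round(p_b_given_a, 3))  # 0.75 = cos^2(30 deg)
```

So "constant at 50%" and "depends on the twin's outcome" are both true: the first is the marginal, the second is the conditional.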

I want to add to what Vanesch and ThomasT have said on this. Primarily, and not meaning to be blunt, the statement is wrong. What Bell's Theorem says is that:

IF you have any theory that respects both locality and realism, THEN you cannot end up with predictions identical to QM.
How is that different from what I said?

If local realism is true then you get certain results.
If you don't get those results then you don't have local realism.

It's 9th grade logic. I'm not sure what your disagreement is.
 
  • #42


peter0302 said:
I'm not sure what you're trying to argue here.


No it's not constant. All we know is the average is 50%. I'm talking about the probability of a particular photon being detected. There's a difference.

Despite the fact that the joint correlations can be written as a function of the difference in the polarizer angles - itself a sign of mutual dependence - the detection probability for an individual photon can be written as a non-trivial function of whether or not the entangled twin passed its polarizer. And so Bell's main assumption (A = AB + Ab) doesn't hold for QM.


How is that different from what I said?

If local realism is true then you get certain results.
If you don't get those results then you don't have local realism.

It's 9th grade logic. I'm not sure what your disagreement is.

Not trying to argue or get into semantics. The point is that Bell's Theorem does not put any burden on QM. So QM has nothing requiring explanation due to Bell.

Actually, the logic of the assumption (A = AB + Ab) does seem to hold for QM, at least on the surface. But it cannot be generalized to include simultaneous C, D, E, etc. However, there is a problem when you detect the AB case and try to infer that each particle is in an identical or symmetric state at that time. Clearly, they are no longer in an entangled or symmetric state (as Alice is only in state A and Bob is only in state B). This definitely calls into question the idea that the measurement of one changes the other. Which is the point I think you were trying to make and I objected to.

The QM mystery comes back to the collapse of the wavefunction. What is that? Is it physical? That is the only thing which I believe can be truly said to have a non-local component.
 
  • #43


Ah, ok we do agree then.

The QM mystery comes back to the collapse of the wavefunction. What is that? Is it physical? That is the only thing which I believe can be truly said to have a non-local component.
Yep.

This definitely calls into question the idea that the measurement of one changes the other. Which is the point I think you were trying to make and I objected to.
Right. I wouldn't say the measurement of one changes the other. What I would say is that the measurement of one changes the odds of detecting the other - and that's exactly what QM says should happen.
 
  • #44


peter0302 said:
It doesn't just assume statistical independence. It assumes causal independence as well.

... assumption was that the outcome of B is mathematically independent of the outcome of A. Again, they can have a common root cause, but what happens at A should have no bearing on what happens at B.
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and DrChinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1982 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1982 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (i.e., FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:
 
  • #45


Here's what I mean by common cause:

Let's say A and B are events, and c is their common cause - i.e. the entire history of the universe.

Let A = f1(c) and B = f2(c).

If they share a common cause, but do not depend on one another, we can write each only in terms of -c- without having to reference the other.

In QM, though, we cannot do that. Instead, we get results like:

A = f3(c,B)
B = f4(c,A)

which, if B and A are spacelike separated, are not consistent with local realism.

Now, the thought was perhaps statements like
A = f3(c,B)
could be simplified back down to
A = f1(c)
and thus would turn out to be trivial if we understood exactly what f1 and c really were (here they're just gigantic oversimplifications).

What Bell showed was that if that were true - if A=f3(c,B) really was a trivial restatement of A = f1(c) - certain patterns would emerge in the correlations. Those predicted patterns are violated by QM. Therefore we conclude that A = f3(c,B) is not trivial, and A really does somehow depend on B.
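In more standard notation, A = f1(c) is the factorizability assumption, and its best-known quantitative consequence is the CHSH bound |S| <= 2, which the quantum correlation E(a,b) = cos 2(a-b) for photon pairs exceeds. A sketch (the 22.5-degree angle choices are the standard ones, not from this thread):

```python
import itertools
import math

# Any local deterministic assignment of outcomes +-1 to the two settings
# on each side keeps the CHSH combination within 2:
bound = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
            for A1, A2, B1, B2 in itertools.product([-1, 1], repeat=4))
print(bound)  # 2

# Quantum correlation for polarization-entangled photon pairs:
def E(a, b):
    return math.cos(2 * (a - b))

d = math.radians
a1, a2, b1, b2 = d(0), d(45), d(22.5), d(67.5)
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(S, 3))  # 2.828 = 2*sqrt(2), outside the local bound
```

The enumeration plays the role of "A = f1(c) for every possible c": no assignment of predetermined outcomes reaches the quantum value, which is the quantitative content of Bell's argument here.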
 
  • #46


peter0302 said:
Ah, ok we do agree then.

Yes, I think we did all along.
 
  • #47


ThomasT said:
So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:

Yes, it is confusing. The first thing to do is to go back to traditional QM. Don't try to escape it by positing that a classical explanation will be discovered that saves us. According to Bell's Theorem, that won't happen.

That leaves us with such "paradoxes" as: the Heisenberg Uncertainty Principle (which denies reality to non-commuting operators); wavefunction collapse (which appears to be non-local); virtual particles (where do they come from, and where do they go); and conservation laws (which apply to "real" particles, even space-like separated entangled ones).

Clearly, trying to get a common sense picture of these is essentially impossible as we are no closer after 80 years of trying. So we must be content, for now, with the mathematical apparatus. And that remains a solid victory for physical science.
 
  • #48


ThomasT said:
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and DrChinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1982 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1982 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (ie. FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:


<< I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (ie. FTL transmissions) can't be what is producing the correlations. >>

What? Pagels is certainly wrong about that. There already exists a formulation of QM that utilizes FTL transmissions to get the nonlocal correlations (pilot wave theory) and without wavefunction collapse. Bell even shows this with generic nonlocal HV models in his original papers.


<< So, they can't be produced by common cause at emission...That doesn't seem to leave much to consider. >>

Well recall that the assumptions in Bell's theorem are that

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (no causal influences can propagate faster than c between two events).
3) causality is valid (future measurement settings are "free" or random variables).

One could only reject locality as is often done, and get a nonlocal HV theory such as the pilot wave theory of de Broglie and Bohm. One could also reject only causality, and get a causally symmetric HV model that does the trick (Huw Price and Rod Sutherland are among the researchers who have successfully done this), or, even more implausibly, posit a common past before the emission and detection events. One could also get more esoteric and reject or add axioms to Kolmogorov's classical probability theory, and therefore construct a fully local account of EPRB, as Itamar Pitowsky has done.

Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.
 
  • #49


ThomasT said:
Thanks for trying to help me understand Bell's theorem (also thanks to vanesch and DrChinese et al for their efforts) -- but I must say that I still don't understand its meaning.

You say that A and B can have a common root cause. (Does this include the idea that the attributes assigned at A and B for a given coincidence interval are associated with optical disturbances that were emitted by the same atom at the same time -- as in the 1982 Aspect et al. experiment -- so that during that interval what's incident on the polarizer at A is the same as what's incident on the polarizer at B?)

vanesch says that violations of Bell inequalities mean that the incident disturbances associated with paired detection attributes cannot have a common origin. This would seem to mean that being emitted from the same atom at the same time does not impart to the opposite-moving disturbances identical properties.

And yet, in the hallmark 1982 Aspect experiment using time-varying analyzers, experimenters were very careful to ensure that they were pairing detection attributes associated with photons emitted simultaneously by the same atom.

I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (ie. FTL transmissions) can't be what is producing the correlations.

So, they can't be produced by common cause at emission, and they can't be produced nonlocally via FTL transmissions. That doesn't seem to leave much to consider.

Yes, I'm confused. :smile:



The easiest thing you can do is read Bell's original papers, namely, "On the Einstein Podolsky Rosen Paradox", "La Nouvelle Cuisine", and "Free Variables and Local Causality".
 
  • #50


Maaneli said:
<< I was remembering last night something written by the late Heinz Pagels about Bell's theorem where he concludes that nonlocality (ie. FTL transmissions) can't be what is producing the correlations. >>

What? Pagels is certainly wrong about that. There already exists a formulation of QM that utilizes FTL transmissions to get the nonlocal correlations (pilot wave theory) and without wavefunction collapse. Bell even shows this with generic nonlocal HV models in his original papers.


<< So, they can't be produced by common cause at emission...That doesn't seem to leave much to consider. >>

Well recall that the assumptions in Bell's theorem are that

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (no causal influences can propagate faster than c between two events).
3) causality is valid (future measurement settings are "free" or random variables).

One could reject only locality, as is often done, and get a nonlocal HV theory such as the pilot wave theory of de Broglie and Bohm. One could instead reject only causality, and get a causally symmetric HV model that does the trick (Huw Price and Rod Sutherland are among the researchers who have successfully done this), or, even more implausibly, posit a common past before the emission and detection events. One could also get more esoteric and reject or add axioms to Kolmogorov's classical probability theory, and thereby construct a fully local account of EPRB, as Itamar Pitowsky has done.
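To make these assumptions concrete, here is a minimal toy sketch (my own illustration, not anything from Bell's papers): a local hidden-variable model satisfying 1)-3) — a shared random angle fixed at emission, with each wing's outcome depending only on its own local setting — sits at or below the CHSH bound of 2, while the quantum singlet correlation reaches 2√2 at the same settings.

```python
import math
import random

def chsh(E):
    # CHSH combination S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')|
    # at the canonical settings that maximize the quantum value.
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def E_qm(a, b):
    # Quantum correlation for a spin-1/2 singlet pair.
    return -math.cos(a - b)

def E_lhv(a, b, n=100_000):
    # Toy local hidden-variable model obeying assumptions 1)-3):
    # each pair carries a shared random angle lam fixed at emission,
    # and each wing's outcome depends only on its OWN setting and lam.
    rng = random.Random(0)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1  # anticorrelated when b = a
        total += A * B
    return total / n

print(chsh(E_qm))   # 2*sqrt(2) ~ 2.828: violates the bound |S| <= 2
print(chsh(E_lhv))  # ~2.0: the local model sits at the Bell bound
```

No such toy model, however contrived, can exceed 2 while keeping all three assumptions — that is the content of the theorem; the particular sign-of-cosine model here happens to saturate the bound exactly.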

Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.

Thanks for the input. I think I paraphrased Pagels incorrectly. Here's what he actually concluded:

We conclude that even if we accept the objectivity [realism, etc.] of the microworld then Bell's experiment does not imply actual nonlocal influences. It does imply that one can instantaneously change the cross-correlation of two random sequences of events on other sides of the galaxy. But the cross-correlation of two sets of widely separated events is not a local object and the information it may contain cannot be used to violate the principle of local causality.

So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests possible in terms of a common cause produced at emission? Also, what do you think of the analogy with the simplest optical Bell tests with a polariscope? Of course, if the deep physical origin of Malus' Law is a mystery, then quantum entanglement is still a mystery, but at least we'd have a classical analog.
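The formal side of that polariscope analogy is easy to show numerically (my own sketch, assuming the standard Φ+ entangled-photon state): the classical Malus transmission through two polarizers and the QM coincidence rate for a polarization-entangled pair both depend only on the relative angle, with the same cos² shape.

```python
import math

def malus(theta):
    # Classical Malus' law: fraction of intensity passed by a second
    # polarizer at angle theta relative to the first (a polariscope).
    return math.cos(theta) ** 2

def coincidence(a, b):
    # QM probability that BOTH photons of a polarization-entangled
    # pair (phi+ state) pass polarizers set at angles a and b.
    return 0.5 * math.cos(a - b) ** 2

# Both curves depend only on the relative angle, with the same cos^2 shape:
for deg in (0, 22.5, 45, 67.5, 90):
    t = math.radians(deg)
    print(deg, round(malus(t), 4), round(2 * coincidence(t, 0.0), 4))
```

Of course the analogy is only formal — the classical curve describes one beam passing two filters in sequence, while the quantum curve describes coincidence counts at two spacelike-separated detectors, which is exactly where the interpretive puzzle lives.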
 
  • #51


DrChinese said:
Yes, it is confusing. The first thing to do is to go back to traditional QM. Don't try to escape it by positing that a classical explanation will be discovered that saves us. According to Bell's Theorem, that won't happen.

That leaves us with such "paradoxes" as: the Heisenberg Uncertainty Principle (which denies reality to non-commuting operators); wavefunction collapse (which appears to be non-local); virtual particles (where do they come from, and where do they go); and conservation laws (which apply to "real" particles, even space-like separated entangled ones).

Clearly, trying to get a common sense picture of these is essentially impossible as we are no closer after 80 years of trying. So we must be content, for now, with the mathematical apparatus. And that remains a solid victory for physical science.
Thanks DrChinese -- I don't view the uncertainty relations, or wavefunction collapse, or virtual particles, or the application of the law of conservation of angular momentum in certain Bell tests as paradoxical.

I think I should reread what's been written in these forums, Bell's papers, lots of other papers I've been putting off, your page, etc. and then get my thoughts in order. By the way, I'm still hoping for some sort of classically analogous way of understanding quantum entanglement and the EPR-Bell correlations. :smile:
 
  • #52


ThomasT said:
Thanks DrChinese -- I don't view the uncertainty relations, or wavefunction collapse, or virtual particles, or the application of the law of conservation of angular momentum in certain Bell tests as paradoxical.

I think I should reread what's been written in these forums, Bell's papers, lots of other papers I've been putting off, your page, etc. and then get my thoughts in order. By the way, I'm still hoping for some sort of classically analogous way of understanding quantum entanglement and the EPR-Bell correlations. :smile:

Thomas,

There are plenty of misleading accounts of Bell's theorem and the current state of affairs out there. I have spent several years going through all of them and finding the diamonds in the rough. So, from my experience, I also strongly recommend, in addition to those specific Bell papers, these two books on QM nonlocality - they are by far the best around:

"Quantum Nonlocality and Relativity"
Tim Maudlin
https://www.amazon.com/dp/0631232214/?tag=pfamazon01-20

"Time's Arrow and Archimedes' Point"
Huw Price
https://www.amazon.com/dp/0195117980/?tag=pfamazon01-20

~M
 
  • #53


ThomasT said:
Thanks for the input. I think I paraphrased Pagels incorrectly. Here's what he actually concluded:

We conclude that even if we accept the objectivity [realism, etc.] of the microworld then Bell's experiment does not imply actual nonlocal influences. It does imply that one can instantaneously change the cross-correlation of two random sequences of events on other sides of the galaxy. But the cross-correlation of two sets of widely separated events is not a local object and the information it may contain cannot be used to violate the principle of local causality.

So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests possible in terms of a common cause produced at emission? Also, what do you think of the analogy with the simplest optical Bell tests with a polariscope? Of course, if the deep physical origin of Malus' Law is a mystery, then quantum entanglement is still a mystery, but at least we'd have a classical analog.


I have a hard time understanding how Pagels could possibly have reached that conclusion. Indeed, it even contradicts Bell's own conclusions. It looks confused. But can you give me the reference?

<< So, is an understanding of the entangled data (correlations) produced in EPR-Bell tests possible in terms of a common cause produced at emission? >>

No. But, as I said earlier, there is the common past hypothesis (that the emission and detection events share a common past) that is logically possible, although extremely implausible. Bell talks about this in his paper "Free Variables and Local Causality". More plausible and successful have been the nonlocal explanations, as well as the causally symmetric explanations.

If you would like a classical analogue of Bell's inequality and theorem, read the first chapter of Tim Maudlin's book. He gives a perfectly clear and accurate classical analogue.
 
  • #54


Maaneli said:
Notice that "realism" is not at all the issue in Bell's theorem, despite the common claim that it is.

I claim it is. When Bell says that there is a simultaneous A, B and C (circa his [14] in the original), he is invoking realism. He says "It follows that c is another unit vector...". His meaning is that if there are an a, b and c simultaneously, then there must be internal consistency: an outcome table that yields non-negative probabilities for all permutations of outcomes at a, b and c.

Bell's statement is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw that this would not work: there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. Of course, that would violate the HUP anyway, but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that, at most, the HUP was a limitation on our observational powers, not representative of reality. He said that the moon was there even when it was not being observed...
 
  • #55


DrChinese said:
I claim it is. When Bell says that there is a simultaneous A, B and C (circa his [14] in the original), he is invoking realism. He says "It follows that c is another unit vector...". His meaning is that if there are an a, b and c simultaneously, then there must be internal consistency: an outcome table that yields non-negative probabilities for all permutations of outcomes at a, b and c.

Bell's statement is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw that this would not work: there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. Of course, that would violate the HUP anyway, but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that, at most, the HUP was a limitation on our observational powers, not representative of reality. He said that the moon was there even when it was not being observed...


I know you claim it is, but it contradicts Bell's understanding of his own theorem (which should give you pause). Let me challenge you to try to come up with a logically coherent prediction in terms of an inequality, without the realism assumption. My claim is that the whole theorem falls apart into an incoherent mess if you remove realism. Whereas if you remove locality or causality, or modify the Kolmogorov probability axioms, you can still construct a well-defined inequality that can be empirically tested. Let me also recommend having a look at Bell's paper "La Nouvelle Cuisine" in Speakable and Unspeakable in QM and ttn's paper "Against Realism":

Against `Realism'
Authors: Travis Norsen
Foundations of Physics, Vol. 37 No. 3, 311-340 (March 2007)
http://arxiv.org/abs/quant-ph/0607057
 
Last edited:
  • #56


<< Bell's is a reference to Einstein's realism condition, which Einstein claimed was a reasonable assumption. Bell saw this would not work and that there could not be internal consistency if there were pre-determined outcomes at all possible measurement settings. >>

No that's completely incorrect (if I correctly understand what you're trying to say). Realism is just fine even if you give up locality or causality or Kolmogorov axioms of probability. Seriously, have a look at La Nouvelle Cuisine and Travis' paper.

<< Of course, that would violate the HUP anyway but Einstein believed the HUP was not a description of reality. He said so in EPR. He assumed that at the most, the HUP was a limitation on our observational powers but not representative of reality. >>

Dude that's the point. Einstein's generic notion of realism was tested against Heisenberg's (quite frankly incoherent) positivist interpretation of the UP in QM, and was shown to be perfectly OK so long as you gave up either locality or causality. By the way, the UP was actually discovered first by Fourier in relation to classical waves, so I would prefer to call it the FUP (Fourier Uncertainty Principle).
 
Last edited:
  • #57


Maaneli said:
I know you claim it is, but it contradicts Bell's understanding of his own theorem (which should give you pause). Let me challenge you to try to come up with a logically coherent prediction in terms of an inequality, without the realism assumption. My claim is that the whole theorem falls apart into an incoherent mess if you remove realism. Whereas if you remove locality or causality, or modify the Kolmogorov probability axioms, you can still construct a well-defined inequality that can be empirically tested. Let me also recommend having a look at Bell's paper "La Nouvelle Cuisine" in Speakable and Unspeakable in QM and ttn's paper "Against Realism":

Against `Realism'
Authors: Travis Norsen
Foundations of Physics, Vol. 37 No. 3, 311-340 (March 2007)
http://arxiv.org/abs/quant-ph/0607057

Well, Travis and I have had a long-standing disagreement on this subject in these forums - and I am well aware of his paper (and the others like it). Norsen bends the history of EPR and Bell to suit his objective, which is obviously to push non-locality as the only viable possibility. He also bends semantics, as far as I am concerned.

You do not need Bell's additional editorial comment either (he said a lot of things afterwards), when his original paper stands fine as is. So no, it does not give me pause. Einstein was not always right, either, and if he were alive today I think he would acknowledge Bell's insight for what it was.

The situation is quite simple really:

a) If particles have no simultaneous A, B and C polarizations independent of the act of observation (as is implied, but not required, by the HUP), then there is no Bell's Theorem (per Bell's [14]). This is the realism requirement as I mentioned, and this is NECESSARY to construct the inequality. Without it, there is nothing - so your challenge is impossible as far as I am concerned.

b) Separately from Bell, the GHZ Theorem comes to an anti-realistic conclusion which does not require the locality condition. As I see it, this is fully consistent with Bell while non-local explanations are not. However, many reject GHZ and other anti-realism proofs (I'm sure you know the ones) for philosophical reasons.

c) Bell's paper was a brilliant answer to EPR's "conclusion" (completely unjustified) that realism was reasonable as an assumption. Bell showed that either Einstein's realism or his beloved locality (or both) would need to be rejected. Bell was obviously aware of Bohmian Mechanics at the time (since he mentions it), but I would hardly call that part of Bell's paper's conclusion itself.

I happen to believe that there is a causality condition implied in the Bell proof. In other words: if the future can influence the past, then that should allow a mechanism for Bell test results to be explained without resorting to a non-local or a non-realistic solution. If time is symmetric (as theory seems to suggest), then this should be possible. On the other hand, a lot of people would probably equate such a possibility to either a non-local or non-realistic solution anyway.

At any rate, failure to explicitly acknowledge the anti-realism viewpoint does a great disservice to the readers of this board. My viewpoint is mainstream opinion and Norsen's is not. As best I recall, most of the influential researchers in the area - Zeilinger, Aspect, etc. - all adopt this position: namely, that realism and locality assumptions are embedded in the Bell paper, and (given experimental results) at least one must be rejected.
 
  • #58


Maaneli said:
By the way, the UP was actually discovered first by Fourier in relation to classical waves, so I would prefer to call it the FUP (Fourier Uncertainty Principle).

That is a fairly strange way of thinking, and certainly puts you in a very small group. Even Fourier would have been surprised to find that he was the true discoverer of the HUP a hundred years before Heisenberg (and long before the existence of atoms was consensus). Do you just not like Heisenberg for some reason?
 
  • #59


Agree with everything Dr. Chinese said. I'd also add that the interpretive difficulties always arise when we stray from the math and delve into philosophy using cushy terms like realism, determinism, superdeterminism, and the like. Of course I'm guilty of it too. :)

We have to see Bell's inequality for what it is: the consequence of an assumption which Aspect and others have proven wrong. While we all agree on what that assumption is mathematically, we can't agree on what it means physically. But at the very least, we should be focusing on the assumption, and not any author's (including Bell's own) editorial comments or beliefs regarding it.
 
  • #60
DrChinese said:
Well, Travis and I have had a long-standing disagreement on this subject in these forums - and I am well aware of his paper (and the others like it). Norsen bends the history of EPR and Bell to suit his objective, which is obviously to push non-locality as the only viable possibility. He also bends semantics, as far as I am concerned.


Well, I disagree with your assessment of his work. Travis is quite accurate in his characterization of Bell's theorem, even though I have some disagreements with him about what conclusions we can draw from it today. Also, he doesn't bend semantics - he's just very meticulous and insists on philosophical and logical rigor, which is something everyone should strive for in discussing QM foundations.




DrChinese said:
You do not need Bell's additional editorial comment either (he said a lot of things afterwards), when his original paper stands fine as is. So no, it does not give me pause.


Yes, you do need Bell's additional commentaries from his other papers. There are lots of subtle and implicit assumptions in his original paper that he made much more explicit and tried to justify in other papers, like "La Nouvelle Cuisine", where he clarifies his definition of local causality, and "Free Variables and Local Causality", where he justifies his assumption of causality but also emphasizes the additional possibilities involved in giving up the causality assumption.




DrChinese said:
Einstein was not always right, either, and if he were alive today I think he would acknowledge Bell's insight for what it was.


I agree Einstein was not always right and that he would probably acknowledge Bell's theorem; but I suspect we have different opinions about what exactly Bell's insight is.



DrChinese said:
The situation is quite simple really:

a) If particles have no simultaneous A, B and C polarizations independent of the act of observation (as is implied, but not required, by the HUP), then there is no Bell's Theorem (per Bell's [14]). This is the realism requirement as I mentioned, and this is NECESSARY to construct the inequality. Without it, there is nothing - so your challenge is impossible as far as I am concerned.


Yes, this was exactly my point. I think you misunderstood me before. Indeed the form of realism you generally suggest is an absolutely necessary pin in the logic of the theorem (or any physics theorem for that matter; in fact, that realism assumption is no different than the realism assumptions in, say, the fluctuation-dissipation theorem or Earnshaw's theorem, both of which are theorems in classical physics). But it is completely false to say that realism is necessarily falsified by a violation of the Bell inequalities. There are other assumptions in Bell's theorem, if you recall, which can be varied without making the general mathematical logic of the inequality derivation inconsistent. They are, once again,

1) Kolmogorov classical probability axioms are valid.
2) locality is valid (the propagation speed for causal influences between two events is bounded by the speed of light, c).
3) causality is valid ("future" or final measurement settings are "free" or random variables).

One can drop any one of these assumptions and it wouldn't falsify realism. Well, if you drop 3) and replace it with a common-past hypothesis or a form of backwards causation, as Huw Price and others have suggested, then you just have to modify your notion of realism in a particular way (there is a literature on this, you know). That's not the same, however, as saying that realism gets falsified.




DrChinese said:
b) Separately from Bell, the GHZ Theorem comes to an anti-realistic conclusion which does not require the locality condition. As I see it, this is fully consistent with Bell while non-local explanations are not. However, many reject GHZ and other anti-realism proofs (I'm sure you know the ones) for philosophical reasons.


What are you talking about? Of course the GHZ theorem assumes a locality condition, just as Bell does. And no it doesn't come to any anti-realistic conclusion whatsoever. That's a very serious error. If you don't understand any of that, then you have to return to some basics. In particular, have a read of this recent article by Zeilinger and Aspelmeyer.

http://physicsworld.com/cws/article/print/34774;jsessionid=B55E9395A8ED10334930389C70494F9B

So far, all tests of both Bell’s inequalities and on three entangled particles (known as GHZ experiments) (see “GHZ experiments”) confirm the predictions of quantum theory, and hence are in conflict with the joint assumption of locality and realism as underlying working hypotheses for any physical theory that wants to explain the features of entangled particles.

Yes, they do talk about GHZ as if it puts constraints on "local realism"; but, again, I have shown that realism is a complete red herring in the context of Bell or GHZ. And of course I am not the only person with this view. It is quite well understood by the top philosophers of physics and physicists in QM foundations like David Albert, Tim Maudlin, Huw Price, Sheldon Goldstein, Guido Bacciagaluppi, Jeff Bub, David Wallace, Harvey Brown, Simon Saunders, etc. Zeilinger and Aspelmeyer are quite in the minority in that understanding among QM foundations specialists, and that should give you pause on that particular issue. But to make this even clearer to you, the deBB theory (a nonlocal realist contextual HV theory) perfectly explains the results of GHZ, which Zeilinger himself acknowledges (because he understands deBB involves a joint assumption of realism and nonlocality). So there is no refutation of realism on its own at all in GHZ.

Also, it just occurred to me that you might be confusing the Leggett inequality (which that article also discusses) with the GHZ inequality. I highly recommend getting clear on those differences.



DrChinese said:
c) Bell's paper was a brilliant answer to EPR's "conclusion" (completely unjustified) that realism was reasonable as an assumption. Bell showed that either Einstein's realism or his beloved locality (or both) would need to be rejected. Bell was obviously aware of Bohmian Mechanics at the time (since he mentions it), but I would hardly call that part of Bell's paper's conclusion itself.


That's a total mischaracterization of the EPRB conclusion and of Bell's theorem. Bell showed that either locality or causality would need to be rejected. By the way, even though deBB was not a part of Bell's original paper, in his other papers he mentions it as a counterexample to the flawed misunderstanding physicists had (and still have) that his theorem refutes the possibility of Einsteinian realism in QM.




DrChinese said:
I happen to believe that there is a causality condition implied in the Bell proof. In other words: if the future can influence the past, then that should allow a mechanism for Bell test results to be explained without resorting to a non-local or a non-realistic solution. If time is symmetric (as theory seems to suggest), then this should be possible. On the other hand, a lot of people would probably equate such a possibility to either a non-local or non-realistic solution anyway.


Yes of course the causality condition is in Bell's theorem. That's not controversial or new. He discusses it in more detail in "La Nouvelle Cuisine" and "Free Variables and Local Causality" (see why it's a good idea to read his other papers?) and leaves open the possibility of some form of "superdeterminism", even though he himself regards it as very implausible. Later people like O. Costa de Beauregard, Huw Price, and others since have advanced the idea of using backwards causation to save locality and show how Bell and GHZ inequalities could be violated. Price discusses this at length in his book

"Time's Arrow and Archimedes' Point"
http://www.usyd.edu.au/time/price/TAAP.html

and his papers:

Backward causation, hidden variables, and the meaning of completeness. PRAMANA - Journal of Physics (Indian Academy of Sciences), 56(2001) 199—209.
http://www.usyd.edu.au/time/price/preprints/QT7.pdf

Time symmetry in microphysics. Philosophy of Science 64(1997) S235-244.
http://www.usyd.edu.au/time/price/preprints/PSA96.html

Toy models for retrocausality. Forthcoming in Studies in History and Philosophy of Modern Physics, 39(2008).
http://arxiv.org/abs/0802.3230

You may also be interested to know that there exists a deBB model developed by Sutherland that implements backwards causation, is completely local, and reproduces the empirical predictions of standard QM:

Causally Symmetric Bohm Model
Authors: Rod Sutherland
http://arxiv.org/abs/quant-ph/0601095
http://www.usyd.edu.au/time/conferences/qm2005.htm#sutherland
http://www.usyd.edu.au/time/people/sutherland.htm

and his older work:

Sutherland R.I., 'A Corollary to Bell's Theorem', Il Nuovo Cimento B 88, 114-18 (1985).

Sutherland R.I., 'Bell's Theorem and Backwards-in-Time Causality', International Journal of Theoretical Physics 22, 377-84 (1983).

And just to emphasize, all these backwards causation models involve some form of realism.



DrChinese said:
At any rate, failure to explicitly acknowledge the anti-realism viewpoint does a great disservice to the readers of this board. My viewpoint is mainstream opinion and Norsen's is not. As best I recall, most of the influential researchers in the area - Zeilinger, Aspect, etc. - all adopt this position: namely, that realism and locality assumptions are embedded in the Bell paper, and (given experimental results) at least one must be rejected.


Whether your viewpoint is "mainstream" (and you still have to define what "mainstream" means to make it meaningful) or not is completely irrelevant. All that is relevant is the logical validity and factual accuracy of your understanding of these issues. But I can tell you that among QM foundations specialists, such as the people who participate in the annual APS conference on foundations of physics (which I have attended for the past 3 consecutive years):

New Directions in the Foundations of Physics
American Center for Physics, College Park, April 25 - 27, 2008
http://carnap.umd.edu/philphysics/conference.html

your opinion is quite in the minority. Furthermore, I didn't imply that locality or realism isn't embedded in Bell's theorem. I just said that the crucial conclusion of Bell's theorem (and Bell's own explicitly stated conclusion) is that QM is not a locally causal theory, not that it is not a "locally real" theory, whatever that would mean.

Let me also emphasize that, unlike what you seem to be doing in characterizing Bell's theorem as a refutation of realism, Zeilinger acknowledges that nonlocal hidden-variable theories like deBB are compatible with experiments, even if he himself is an 'anti-realist'. By the way, anti-realists such as yourself or Zeilinger still face the challenge of coming up with a solution to the measurement problem and deriving the quantum-classical limit. Please don't try to invoke decoherence, since the major developers and proponents of decoherence theory, like Zurek, Zeh, Joos, etc., are actually realists themselves - and even they admit that decoherence theory has not and probably never will, on its own, solve the measurement problem or account for the quantum-classical limit. On the other hand, it is well acknowledged that nonlocal realist theories like deBB plus decoherence do already solve the problem of measurement and already accurately (even if not yet perfectly) describe the quantum-classical limit. So by my assessment, it is the anti-realist crowd that is in the minority and has much to prove.
 
Last edited by a moderator:
