Bell's theorem claimed to be refuted - paper published by EPL

In summary, the paper "On a contextual model refuting Bell's theorem" presents a contextual realistic model that, according to its author, correctly predicts measurement results with entangled particles, attributing the correlations to the indistinguishability of these particles. The author claims that Bell's theorem is refuted because Bell ignored contextual models in his reasoning, and that the same applies to other theorems claiming that no local realistic model for quantum effects is possible. On this view, the assumption of superluminal non-local interactions is unfounded, since the correlations can be explained locally. However, the thread participants dispute whether Bell truly ignored contextual models in his work.
  • #1
emuc
TL;DR Summary
The Paper “On a contextual model refuting Bell’s theorem” has now been published by the journal EPL (Europhysics Letters)
The Paper “On a contextual model refuting Bell’s theorem” has now been published by the journal EPL (Europhysics Letters) and is available at

https://iopscience.iop.org/article/10.1209/0295-5075/134/10004

In this paper a contextual realistic model is presented which correctly predicts measurement results for entangled photons or spin-½ particles. Contextual models can have properties which are correlated with the settings of the measurement instruments. The reason for this is the indistinguishability of entangled particles.

Bell's theorem is refuted because Bell ignored contextual models in his reasoning. This also applies to any other theorem claiming that no local realistic model for quantum effects is possible, if it fails to rule out contextual models. These include, for example, the theorems of CHSH, GHZ and Hardy.

For over 55 years John Bell has misled the physics community into believing that nature shows superluminal non-local interactions. This was taken to be proven experimentally, since the correlations of quantum physics violate Bell's inequality. But so far nobody has found the slightest hint of how those non-local interactions work. Now we know that the assumption of spooky action at a distance, as Einstein called it, is unfounded. The correlations can be explained locally.
 
  • #2
I don't think it's true that Bell ignored contextual models. First, he was inspired by Bohmian mechanics, which is a contextual theory. Second, he proved that QM is contextual before Kochen and Specker. Third, contextuality is in fact at the heart of his theorem: he proves that the values of spins could not have been preexisting, so they must change by the act of measurement (which is contextuality by definition); but since they are correlated and yet spatially separated, nonlocality follows from contextuality.
 
  • #3
emuc said:
But so far nobody has found the slightest hint of how those non-local interaction work.
I think Bohm did.
 
  • #4
Measurement outcomes in Bell's 1964 paper are of the form A(a, lambda) and B(b, lambda), which is clearly non-contextual, as A doesn't depend on b and B doesn't depend on a.
 
  • #5
I find titles like this one very strange. If a theorem is proven, then that's that, and it cannot be refuted. If you prove that under different assumptions the conclusion doesn't hold, you are not refuting the theorem, you are proving a different theorem. So, why the provocative title?
emuc said:
For over 55 years John Bell has misled the physicists community and made us believe that nature does show superluminal non-local interactions. This could have been proven experimentally, since the correlations of quantum physics violate Bell's inequality. But so far nobody has found the slightest hint of how those non-local interaction work. Now we know that the assumption of spooky action at a distance as Einstein called it, is unfounded. The correlations can be explained locally.
I don't think that is true. I think that most physicists didn't believe that Bell showed that there are superluminal interactions. Can you quote someone who claims that?
 
  • #6
Just for clarification:
“In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.”
This is what Bell claimed in his 1964 paper. It is also known as Bell's theorem, but it is not supported by his argumentation.
 
  • #7
Demystifier said:
I think Bohm did.
I think standard quantum theory did too, including local (sic) relativistic QFT, which is consistent with both (micro)causality and inseparability (which is usually called "non-locality" leading to much unnecessary confusion).
 
  • #8
emuc said:
Just for clarification:
“In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.”
This is what Bell claimed in his 1964 paper. It is also known as Bell's theorem, but it is not supported by his argumentation.
The point is that Bell derived his inequality, which is violated by standard Q(F)T. The experimental observation of this violation, and the confirmation of the Q(F)T predictions, shows that there are no hidden variables (parameters) which determine the results of individual measurements. There's just Q(F)T with its probabilistic meaning of "states".
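As a concrete illustration of the violated inequality (my own sketch, not from any post in this thread): the quantum singlet-state correlation E(a, b) = -cos(a - b), evaluated at the standard CHSH angles, gives a value of 2√2, beyond the bound of 2 that any local hidden-variable model must satisfy.

```python
import math

# Sketch (illustrative, not from the paper under discussion): the
# singlet-state correlation E(a, b) = -cos(a - b) predicted by QM
# violates the CHSH bound |S| <= 2 obeyed by local hidden variables.

def E(a, b):
    """Quantum expectation of the product of the two +/-1 outcomes."""
    return -math.cos(a - b)

# Standard angle choices that maximize the violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2
```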
 
  • #9
emuc said:
Just for clarification:
“In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.”
This is what Bell claimed in his 1964 paper. It is also known as Bell's theorem, but it is not supported by his argumentation.
That is different. First you said that he made us believe that nature shows superluminal interactions. The quote says that if you add parameters to QM to complete it, in the sense of EPR, then something will propagate faster than light.
 
  • #10
My first inclination is to not believe this result. It's not just that Bell might have made a mistake in his paper from years ago; since then, there have been dozens, if not hundreds of articles proving Bell-like results and investigating his inequalities from all sorts of angles. I have a hard time believing that if an approach along the lines of this article could prove Bell wrong, this wouldn't have been discovered long ago.

Somebody (maybe Scott Aaronson?) created a challenge for those who would prove Bell wrong. So either the approach described in the paper can win the challenge, or else there is some reason that it can't.

The challenge is basically this: We have three computers (laptops, or whatever) that are running three different programs, called "A", "B" and "C". They only interact through message-passing (implemented however you like, maybe just email, although that's too slow). They play a game that consists of many rounds (enough for good statistics---maybe 100 is enough?) Each round, the following actions take place:
  1. Machine C sends two different messages, one to A and one to B.
  2. On machine A, the operator picks a setting for that round. In the general case, the setting might be a direction in 3-D space, but I think that it's enough to have just three choices: (1) The x-axis. (2) The axis that is 120 degrees counterclockwise from the x-axis, in the x-y plane. (3) The axis that is 240 degrees counterclockwise from the x-axis in the x-y plane.
  3. Similarly, on machine B, the operator makes a choice.
  4. Then, the program on machine A uses the setting chosen by the operator, together with the message from machine C, to produce an output, either +1 or -1.
  5. The program on machine B similarly uses its setting and message to produce an output.
The challenge is to design the programs for machines A, B, and C so that, no matter what choices are made by the operators of A and B (so long as there are enough rounds for each setting choice to get good statistics), the statistics satisfy:

On those rounds where the setting of A is ##\vec{\alpha}## and the setting of B is ##\vec{\beta}##, the fraction in which A and B produce the same output is given by ##\sin^2(\theta/2)##, where ##\theta## is the angle between ##\vec{\alpha}## and ##\vec{\beta}##.

So can the contextual model that is claimed to refute Bell win this challenge?
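For concreteness, the challenge above can be sketched as a small harness (a hypothetical illustration; the function names and the naive "instruction list" strategy are my own, not from any proposed model). It scores candidate programs A, B, and C against the ##\sin^2(\theta/2)## target and shows why a simple local strategy falls short:

```python
import math
import random

# Hypothetical harness for the three-machine challenge (names and the
# naive "instruction list" strategy below are my own illustration, not
# a model from the paper). C sends one message to each of A and B; the
# operators' setting choices are simulated by random picks.

ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # the three allowed axes

def run_challenge(strategy_c, strategy_a, strategy_b, rounds=20000, seed=0):
    rng = random.Random(seed)
    counts = {}  # (i, j) -> (rounds with equal outputs, total rounds)
    for _ in range(rounds):
        msg_a, msg_b = strategy_c(rng)             # step 1: C sends messages
        i, j = rng.randrange(3), rng.randrange(3)  # steps 2-3: free choices
        out_a = strategy_a(ANGLES[i], msg_a)       # step 4: A's output
        out_b = strategy_b(ANGLES[j], msg_b)       # step 5: B's output
        same, total = counts.get((i, j), (0, 0))
        counts[(i, j)] = (same + (out_a == out_b), total + 1)
    return counts

def target(i, j):
    # QM target: fraction of equal outputs is sin^2(theta/2)
    return math.sin((ANGLES[i] - ANGLES[j]) / 2) ** 2

# Naive local strategy: C sends both sides the same random instruction
# list (one predetermined answer per axis); B flips the sign so equal
# settings are perfectly anti-correlated.
def naive_c(rng):
    plan = tuple(rng.choice((-1, 1)) for _ in range(3))
    return plan, plan

def naive_a(setting, plan):
    return plan[ANGLES.index(setting)]

def naive_b(setting, plan):
    return -plan[ANGLES.index(setting)]

counts = run_challenge(naive_c, naive_a, naive_b)
# Equal settings match QM exactly (0% equal outputs), but for unequal
# settings this strategy yields ~50% instead of the required
# sin^2(60 deg) = 75% -- the gap that Bell's inequality quantifies.
```

Any candidate model would be plugged in as its own `strategy_c`/`strategy_a`/`strategy_b`; winning the challenge means matching `target(i, j)` for every settings pair.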
 
  • #11
emuc said:
Measurement outcomes in Bell's 1964 paper are of the form A(a, lambda) and B(b, lambda), which is clearly non-contextual, as A doesn't depend on b and B doesn't depend on a.
That's not what "non-contextual" means. What you are describing here is Bell's locality assumption.
 
  • #12
emuc said:
Measurement outcomes in Bell's 1964 paper are of the form A(a, lambda) and B(b, lambda), which is clearly non-contextual, as A doesn't depend on b and B doesn't depend on a.
It's true that Bell's 1964 paper basically just disproved non-contextual hidden-variable theories. However, he later gave derivations of his inequality in which no non-contextuality assumption is made (e.g. The Theory of Local Beables, 1975). But contextuality alone won't save you. A contextual hidden-variable theory will still suffer from either non-locality, superdeterminism, or other undesirable features.
 
  • #13
@stevendaryl: To me this seems to be an implementation of a non-contextual model. According to the Kochen-Specker theorem, those models cannot reproduce the quantum correlations.
 
  • #15
Nullstein said:
It's true that Bell's 1964 paper basically just disproved non-contextual hidden-variable theories. However, he later gave derivations of his inequality in which no non-contextuality assumption is made (e.g. The Theory of Local Beables, 1975). But contextuality alone won't save you. A contextual hidden-variable theory will still suffer from either non-locality, superdeterminism, or other undesirable features.
One should read the paper in order to judge the model.
 
  • #16
emuc said:
One should read the paper in order to judge the model.
Well, a theorem is a theorem. We can conclude without reading the paper that if your model reproduces the QM predictions and does so by introducing hidden variables, then it must take one of the known outs, such as non-locality or superdeterminism.
 
  • #17
emuc said:
@ stevendaryl: To me this seems to be an implementation of a non-contextual model.
@emuc, if you are talking about post #10, @stevendaryl is not describing a model in that post. He is describing a test procedure that any model that claims to violate the Bell inequalities must pass. Can the model in the paper pass that test?
 
  • #18
emuc said:
Measurement outcomes in Bell's 1964 paper are of the form A(a, lambda) and B(b, lambda), which is clearly non-contextual, as A doesn't depend on b and B doesn't depend on a.
My reply is similar to that of @PeterDonis. You seem to be missing the big picture of Bell's proof. His proof is really a reductio ad absurdum, with the following structure. Let us assume locality. Then locality implies non-contextuality, in the form you wrote above. Hence non-contextuality is derived (not assumed, and certainly not ignored) from locality. But then from this non-contextuality he derives a contradiction with quantum mechanics, implying that the initial assumption, that is locality, was wrong. This proves non-locality. With the locality assumption being wrong, non-contextuality no longer follows. Hence contextuality is perfectly consistent with Bell's final conclusion, but to understand this you must understand his proof as a whole. You took one step in the proof without understanding its role in the whole proof, without understanding its context (pun intended).
 
  • #19
stevendaryl said:
It's not that maybe Bell made a mistake in his paper from years ago, but since then, there have been dozens, if not hundreds of articles proving Bell-like results and investigating his inequalities from all sorts of angles.
Exactly! The author looks like a crackpot who attempts to disprove the theory of relativity by finding a small error in Einstein's 1905 paper.
 
  • #20
Demystifier said:
My reply is similar to that of @PeterDonis. You seem to be missing the big picture of Bell's proof. His proof is really a reductio ad absurdum, with the following structure. Let us assume locality. Then locality implies non-contextuality, in the form you wrote above. Hence non-contextuality is derived (not assumed, and certainly not ignored) from locality. But then from this non-contextuality he derives a contradiction with quantum mechanics, implying that the initial assumption, that is locality, was wrong. This proves non-locality. With the locality assumption being wrong, non-contextuality no longer follows. Hence contextuality is perfectly consistent with Bell's final conclusion, but to understand this you must understand his proof as a whole. You took one step in the proof without understanding its role in the whole proof, without understanding its context (pun intended).
Non-contextuality doesn't follow from locality, because one can have a contextual, local and superdeterministic model. It is fair to say that Bell's original paper didn't consider this possibility, but his later papers did.
 
  • #21
Nullstein said:
one can have a contextual, local and superdeterministic model
Can you give a specific example?
 
  • #22
PeterDonis said:
Can you give a specific example?
Well, you just determine the measurement settings in advance in such a way that the statistics come out right. It's well known that Bell's inequality can be violated if you don't exclude e.g. superdeterminism (see the Wood and Spekkens paper). Sabine Hossenfelder has recently proposed a more elaborate model of this kind (https://arxiv.org/abs/2010.01327), but I haven't bothered to study it. (You can check her paper for more references to superdeterministic models. But I haven't studied them either.)
 
  • #23
Nullstein said:
Well, you just determine the measurement settings in advance in such a way that the statistics come out right. It's well known that Bell's inequality can be violated if you don't exclude e.g. superdeterminism (see the Wood and Spekkens paper). Sabine Hossenfelder has recently proposed a more elaborate model of this kind (https://arxiv.org/abs/2010.01327), but I haven't bothered to study it. (You can check her paper for more references to superdeterministic models. But I haven't studied them either.)
They don't mention superdeterminism explicitly, but there was an interesting experiment in which quasars were used to decide the setting for an EPR type measurement.
https://astronomy.com/news/2018/08/distant-quasars-confirm-quantum-entanglement
 
  • #24
stevendaryl said:
They don't mention superdeterminism explicitly, but there was an interesting experiment in which quasars were used to decide the setting for an EPR type measurement.
https://astronomy.com/news/2018/08/distant-quasars-confirm-quantum-entanglement
Yes, I know this experiment. Its very existence already suggests that the exclusion of superdeterminism is a necessary assumption in a proof of Bell's theorem. Otherwise, we wouldn't have to design experiments to test it. There are many more possible local scenarios. All of them must be excluded by a "no fine-tuning" condition. Their exclusion certainly doesn't follow from locality alone. (See Wood and Spekkens for details.)
 
  • #25
Nullstein said:
the exclusion of superdeterminism is a necessary assumption in a proof of Bell's theorem.
Which equation in Bell's paper corresponds to this assumption?
 
  • #26
PeterDonis said:
Which equation in Bell's paper corresponds to this assumption?
There are two theorems by Bell (https://arxiv.org/abs/1402.0351). The first one, in his 1964 paper, assumes non-contextuality (basically by assuming that the angles are external parameters that can be freely chosen). (He claims that he derives non-contextuality from locality, but he is wrong about that, as shown in his later work and by the counterexamples provided by the numerous authors I have cited above.) His paper from 1976 (The Theory of Local Beables) contains the derivation that applies to contextual theories. However, it implicitly assumes the "no fine-tuning" assumption in eq. (6). In his later paper "La nouvelle cuisine" (contained in the book "Speakable and Unspeakable in Quantum Mechanics"), he repeats his proof from 1976, makes the assumption explicit in eq. (12), and discusses its importance.

Let me quote him here:
"An essential element in the reasoning here is that a and b are free variables. One can envisage then theories in which there just are no free variables for the polarizer angles to be coupled to. In such ‘superdeterministic’ theories the apparent free will of experimenters, and any other apparent randomness, would be illusory. Perhaps such a theory could be both locally causal and in agreement with quantum mechanical predictions. However I do not expect to see a serious theory of this kind. I would expect a serious theory to permit ‘deterministic chaos’ or ‘pseudorandomness’, for complicated subsystems (e.g. computers) which would provide variables sufficiently free for the purpose at hand. But I do not have a theorem about that."
 
  • #27
Nullstein said:
The first one in his 1964 paper assumes non-contextuality (basically by assuming that the angles are external parameters that can be freely chosen).
This looks like assuming "no fine tuning", i.e., "no superdeterminism" (which is indeed the assumption I asked for), not non-contextuality. But it's a different assumption than the one you point out from the later papers; see further comments below.

Nullstein said:
He claims that he derives non-contextuality from locality
Bell himself claimed no such thing. He derived the factorizability assumption (that A's result doesn't depend on B's settings, and vice versa) from locality, but he never said that assumption was "non-contextuality".

I can see how, for an entangled quantum system whose parts are spatially separated, non-contextuality implies the factorizability assumption (since the two observables in question, the measurements on A and B, commute); but that's not the argument Bell himself made.

Nullstein said:
His paper from 1976 (The theory of local beables) contains the derivation that applies to contextual theories. However, it implicitly assumes the "no fine-tuning" assumption in eq. (6).
Eq. (6) in that paper is the factorizability assumption, i.e., the "locality" assumption--it's the same basic idea as equation (2) in Bell's original (1964) paper, but in the form that makes sense for the formulation in this paper. (Note that Bell derives Eq. (6) in the "local beables" paper using Eq. (2) in the same paper, which he explicitly says is for a "locally causal" theory.)

But, as I pointed out above, this is a different assumption from "the angles are external parameters that can be freely chosen", which is the "no fine tuning" or "no superdeterminism" assumption. So Eq. (6) in the "local beables" paper is not the latter assumption.
 
  • #28
PeterDonis said:
This looks like assuming "no fine tuning", i.e., "no superdeterminism" (which is indeed the assumption I asked for), not non-contextuality. But it's a different assumption than the one you point out from the later papers; see further comments below.
The first paper is only about non-contextual theories in the first place, i.e. all observables are pre-determined, even those that aren't measured. This is assumed by leaving the angles to be external parameters that aren't determined by the theory. The angles are not functions of ##\lambda## in his original paper. In order to formulate a "no superdeterminism" condition, you first need determinism.

Bell's original theorem is really a different theorem, with different assumptions, than his later theorem (as Wiseman points out). Non-contextuality is independent, as an assumption, from locality + "no superdeterminism." The theorems are complementary in this sense.

PeterDonis said:
Bell himself claimed no such thing. He derived the factorizability assumption (that A's result doesn't depend on B's settings, and vice versa) from locality, but he never said that assumption was "non-contextuality".
He claimed it in the first paragraph of section II, where he references the EPR argument: "Since we can predict in advance the result of measuring any chosen component of ##\sigma_2## , by previously measuring the same component of ##\sigma_1## , it follows that the result of any such measurement must actually be predetermined."

He didn't _call_ it non-contextuality, because that's the modern terminology that wasn't used back then. He speaks about predetermined values, which is today referred to as non-contextuality.

PeterDonis said:
Eq. (6) in that paper is the factorizability assumption, i.e., the "locality" assumption--it's the same basic idea as equation (2) in Bell's original (1964) paper, but in the form that makes sense for the formulation in this paper. (Note that Bell derives Eq. (6) in the "local beables" paper using Eq. (2) in the same paper, which he explicitly says is for a "locally causal" theory.)
The factorizability assumption in that paper is the joint assumption of locality and "no superdeterminism." You asked me in which equation he assumes it, so I told you the equation. I didn't say that this is _the_ "no superdeterminism" assumption, only that this is where it is implicitly assumed. If you read "La nouvelle cuisine", he breaks the factorizability assumption down into two steps and openly admits that the critique of his colleagues (citation 10) about this tacit assumption is valid.

PeterDonis said:
But, as I pointed out above, this is a different assumption from "the angles are external parameters that can be freely chosen", which is the "no fine tuning" or "no superdeterminism" assumption. So Eq. (6) in the "local beables" paper is not the latter assumption.
The "no superdeterminism" assumption is eq. (12) in "La nouvelle cuisine," which says exactly that the angles can be freely chosen. This is implicitly assumed in eq. (6) in "The theory of local beables" and explicitly assumed in eq. (12) in "La nouvelle cuisine." Since it's implicitly assumed in "The theory of local beables," it is impossible to point to an equation in that paper that assumes _just_ "no superdeterminism." As I said, in that paper, only a joint assumption of two independent assumptions is written down. That's why I'm referring to his very last paper "La nouvelle cuisine," where he makes everything explicit. You can lay "The theory of local beables" and "La nouvelle cuisine" right next to each other and compare, because the only difference in these two derivations is that the "no superdeterminism" assumption is made explicit.

In "La nouvelle cuisine," Bell himself (not one of his critics) cites the paper by his colleagues who pointed out the hidden "no superdeterminism" assumption to him and openly agrees with them that they were right. I don't think it can get any more credible than that.
 
  • #29
Nullstein said:
The first paper is only about non-contextual theories in the first place, i.e. all observables are pre-determined, even those that aren't measured. This is assumed by leaving the angles to be external parameters that aren't determined by the theory.
I thought it was assumed in the hidden variables, ##\lambda##. Those are supposed to contain whatever variables, other than the angles at which the two measurements are made, affect the measurement results.

Leaving the angles to be external parameters not determined by the theory is the "no superdeterminism" assumption.

Nullstein said:
Bell's original theorem is really a different theorem with different assumptions than his later theorem (as Wiseman points out). Non-contextuality is independent, as an assumption, from locality + "no superdeterminism."
I'll take a look at the Wiseman paper; evidently you are working from a fairly detailed context that I'm not familiar with.
 
  • #30
Nullstein said:
Sabine Hossenfelder has recently proposed a more elaborate model of this kind (https://arxiv.org/abs/2010.01327), but I haven't bothered to study it.
I bothered to study it in some detail a few months ago, and even to discuss it with her. It turned out that her model is not local.
 
  • #31
PeterDonis said:
@emuc, if you are talking about post #10, @stevendaryl is not describing a model in that post. He is describing a test procedure that any model that claims to violate the Bell inequalities must pass. Can the model in the paper pass that test?
There is no computer program without a model behind it.
 
  • #32
emuc said:
There is no computer program without a model behind

Obviously. The model is reflected in the choices of
  1. Who is allowed to send messages to whom.
  2. What messages are sent
  3. What algorithms are used to compute the messages
  4. What algorithms are used to compute the output
The idea in the setup is that the outputs from A and B are to be produced without communicating with each other or with C. You can think of them as being "spacelike separated".

If you allow FTL communication, then that could be taken into account in the setup by allowing A and B to exchange messages before producing their outputs.

If you allow superdeterminism, then that could be taken into account by letting the choices made at A and B be predetermined. For example, you could introduce yet another machine, D, which produces messages sent to A and B telling them what choice to make, and sending copies to C.

Obviously, making those changes (FTL or superdeterminism) would easily allow for the predictions of QM to be simulated. Bell's theorem really amounts to the claim that without those additions, it is impossible to simulate the predictions of QM.
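The superdeterministic variant just described can be sketched as code (again my own illustration; the hypothetical machine D and the sampling trick are not from any post). If D predetermines the "free" choices and leaks them to C, C can script outputs that match any target statistics, including ##\sin^2(\theta/2)##:

```python
import math
import random

ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # the three allowed axes

def run_superdet(rounds=20000, seed=1):
    # Machine D predetermines the operators' "choices" and shares them
    # with C, which then scripts both outputs to hit the QM statistics.
    rng = random.Random(seed)
    counts = {}  # (i, j) -> (rounds with equal outputs, total rounds)
    for _ in range(rounds):
        i, j = rng.randrange(3), rng.randrange(3)  # D's predetermined picks
        theta = ANGLES[i] - ANGLES[j]
        p_same = math.sin(theta / 2) ** 2          # QM target fraction
        out_a = rng.choice((-1, 1))
        out_b = out_a if rng.random() < p_same else -out_a
        same, total = counts.get((i, j), (0, 0))
        counts[(i, j)] = (same + (out_a == out_b), total + 1)
    return counts

counts = run_superdet()
# Every settings pair now matches sin^2(theta/2) to within sampling noise,
# with no FTL signalling -- at the price of abandoning free choices.
```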
 
  • #33
stevendaryl said:
Obviously. The model is reflected in the choices of
  1. Who is allowed to send messages to whom.
  2. What messages are sent
  3. What algorithms are used to compute the messages
  4. What algorithms are used to compute the output
The idea in the setup is that the outputs from A and B are to be produced without communicating with each other or with C. You can think of them as being "spacelike separated".

If you allow FTL communication, then that could be taken into account in the setup by allowing A and B to exchange messages before producing their outputs.

If you allow superdeterminism, then that could be taken into account by letting the choices made at A and B be predetermined. For example, you could introduce yet another machine, D, which produces messages sent to A and B telling them what choice to make, and sending copies to C.

Obviously, making those changes (FTL or superdeterminism) would easily allow for the predictions of QM to be simulated. Bell's theorem really amounts to the claim that without those additions, it is impossible to simulate the predictions of QM.
In order to implement the contextual model from the paper in a computer program, you have to implement the contextual condition that all photons selected by a polarizer set to beta at side B have the polarization beta before selection, plus, from the initial context, the condition that photons which pass a polarizer set to alpha at side A have peers which would definitely pass a polarizer set to alpha + pi/2 at side B.

If you do that, the program will reproduce the QM prediction. But for this you don’t need a program; you can calculate the outcome manually.
 
  • #34
stevendaryl said:
Somebody (maybe Scott Aaronson?) created a challenge for those who would prove Bell wrong.
Maybe you mean the computer challenge, called the “quantum Randi challenge”, which was proposed in 2011 by Sascha Vongehr (Science2.0: QRC).
 
  • #35
emuc said:
you have to implement the contextual condition that all photons selected by a polarizer set to beta at side B have the polarization beta before selection
That’s not a contextual condition; that’s introducing unfair sampling, so it isn’t a counterexample to Bell’s theorem.

It’s difficult to close the fair-sampling loophole in experiments with photons (although there is no plausible concrete suggestion for how fair sampling might be violated), but it has been closed in experiments using electron spin.
 
