Bell's theorem claimed to be refuted - paper published in EPL

Summary
A recent paper published in EPL claims to refute Bell's theorem by presenting a contextual model that accurately predicts measurement results for entangled particles. The authors argue that Bell's original reasoning overlooked contextual models, which could explain quantum correlations without invoking superluminal non-local interactions. Critics contend that Bell's theorem remains valid, as it has been supported by numerous experiments and further developments in quantum theory. The discussion highlights the ongoing debate about the implications of contextuality and locality in quantum mechanics. Overall, the paper's assertions challenge established interpretations of quantum phenomena.
  • #31
PeterDonis said:
@emuc, if you are talking about post #10, @stevendaryl is not describing a model in that post. He is describing a test procedure that any model that claims to violate the Bell inequalities must pass. Can the model in the paper pass that test?
There is no computer program without a model behind it.
 
  • #32
emuc said:
There is no computer program without a model behind it.

Obviously. The model is reflected in the choices of
  1. Who is allowed to send messages to whom.
  2. What messages are sent
  3. What algorithms are used to compute the messages
  4. What algorithms are used to compute the output
The idea in the setup is that the outputs from A and B are to be produced without communicating with each other or with C. You can think of them as being "spacelike separated".

If you allow FTL communication, then that could be taken into account in the setup by allowing A and B to exchange messages before producing their outputs.

If you allow superdeterminism, then that could be taken into account by letting the choices made at A and B be predetermined. For example, you could introduce yet another machine, D, which produces messages sent to A and B telling them what choice to make, and sending copies to C.

Obviously, making those changes (FTL or superdeterminism) would easily allow for the predictions of QM to be simulated. Bell's theorem really amounts to the claim that without those additions, it is impossible to simulate the predictions of QM.
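As a concreteness check, the four choices above can be sketched as a minimal harness. Everything here (the message format, the uniform hidden variable, the threshold rule in `station`) is an illustrative placeholder, not taken from any particular model; the only structural constraint is the one described above: A and B compute their outputs from their own setting and their message from C, with no other communication.

```python
import random

def source_C():
    """Machine C: simulate one 'pair' and emit a message for each wing.

    The message here is a single shared hidden variable (an angle in
    degrees); a real model could send any data structure it likes.
    """
    lam = random.uniform(0, 360)
    return lam, lam

def station(setting, message):
    """Machines A and B: compute a +/-1 output from the local setting and
    the message from C only -- no access to the other wing's setting."""
    # Placeholder local rule; any function of (setting, message) is allowed.
    return 1 if (setting - message) % 360 < 180 else -1

def run_rounds(n, settings_a, settings_b):
    """Play n rounds and collect the records for later statistics."""
    records = []
    for _ in range(n):
        msg_a, msg_b = source_C()
        a_set = random.choice(settings_a)  # chosen without consulting C
        b_set = random.choice(settings_b)
        records.append((a_set, b_set, station(a_set, msg_a), station(b_set, msg_b)))
    return records
```

Bell's theorem, in this framing, is the claim that no choice of `source_C` and `station` makes the recorded statistics violate the Bell inequalities. Allowing FTL would mean letting `station` see both settings; superdeterminism would mean letting `source_C` see (or dictate) the setting choices.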
 
  • #33
stevendaryl said:
Obviously. The model is reflected in the choices of
  1. Who is allowed to send messages to whom.
  2. What messages are sent
  3. What algorithms are used to compute the messages
  4. What algorithms are used to compute the output
The idea in the setup is that the outputs from A and B are to be produced without communicating with each other or with C. You can think of them as being "spacelike separated".

If you allow FTL communication, then that could be taken into account in the setup by allowing A and B to exchange messages before producing their outputs.

If you allow superdeterminism, then that could be taken into account by letting the choices made at A and B be predetermined. For example, you could introduce yet another machine, D, which produces messages sent to A and B telling them what choice to make, and sending copies to C.

Obviously, making those changes (FTL or superdeterminism) would easily allow for the predictions of QM to be simulated. Bell's theorem really amounts to the claim that without those additions, it is impossible to simulate the predictions of QM.
In order to implement the contextual model from the paper in a computer program, you have to implement the contextual condition that all photons selected by a polarizer set to beta at side B have the polarization beta before selection, plus the initial-context condition that photons which pass a polarizer set to alpha at side A have peers which would definitely pass a polarizer set to alpha + pi/2 at side B.

If you do that, the program will reproduce the QM prediction. But for this you don’t need a program; you can calculate the outcome manually.
 
  • #34
stevendaryl said:
Somebody (maybe Scott Aaronson?) created a challenge for those who would prove Bell wrong.
Maybe you mean the computer challenge called the “quantum Randi challenge”, which was proposed in 2011 by Sascha Vongehr (Science2.0: QRC).
 
  • #35
emuc said:
you have to implement the contextual condition that all photons selected by a polarizer set to beta at side B have the polarization beta before selection
That’s not a contextual condition; that’s introducing unfair sampling, so it isn’t a counterexample to Bell’s theorem.

It’s difficult to close the fair-sampling loophole in experiments with photons (although there is no plausible concrete suggestion for how fair sampling might be violated), but it has been closed in experiments using electron spin.
 
  • #36
The paper says:
Model assumption MA3: Selected photons from each wing of the singlet state which would take a polarizer exit α have polarization α. With a selection other than the initial context, all information about the origin from the initial context is lost.

A selection comprises all photons which take the same polarizer exit. Photons with polarization α and α + π/2 come in equal shares, due to symmetry. MA3 accounts for the fact that the polarization of photons from the singlet state is undefined (due to indistinguishability) but changed and redefined by entanglement. Thus, the photons of a selection cannot be distinguished by their polarization. For a selection of the initial states 0° or 90°, the polarization is not changed, as it is already equal to the selected state.
MA3 is a contextual assumption, as the polarization of a selection coincides with the setting of a polarizer. It does not imply any restriction on the free choice of the experimenter, nor any dependence of the hidden variable λ on the settings of the measurement instruments. However, it is a local realistic assumption, as it assigns a real value to the physical quantity polarization.

What is unfair with this?
 
  • #37
I don’t completely understand how this selection is supposed to work. But that’s why I think that the computer simulation clarifies things.

Computer C will simulate the creation of an entangled pair of particles.

It sends a message to A giving all the relevant information about the first particle. It sends a message to B giving all the relevant information about the second particle.

The operators of A and B choose detector settings without consulting each other or the messages from C.

Finally, the program on A computes a result (spin up/ spin down or pass/no-pass or whatever) based on its message and setting, and the program on B analogously computes its result.

So what is the twist that allows the results to match the quantum predictions?

Your specification was
you have to implement the contextual condition that all photons selected by a polarizer set to beta at side B have the polarization beta before selection

I don’t understand what that means. The polarization, along with any other properties of the particle, is specified by computer C. But the choice of detector setting is made by the operator of B. How can the program on B ensure that the particle had polarization beta before the selection?
 
  • #38
Read model assumption MA3 carefully. It is an assumption about how nature is supposed to work, and it is physically justified by indistinguishability. With this assumption the QM prediction can be reproduced. Entangled photons are not marbles with fixed properties. We have to adapt our assumptions about nature in such a way that they can explain the measurement results.
This was also the case with the Bose-Einstein condensate.
 
  • #39
stevendaryl said:
I don’t understand what that means. The polarization, along with any other properties of the particle, is specified by computer C. But the choice of detector setting is made by the operator of B. How can the program on B ensure that the particle had polarization beta before the selection?
Now, I am not sure whether “fair sampling” means what I think it means, but for EPR there is a loophole in tests of Bell’s inequality having to do with failed detections. I’m not sure how it would work with photons, so let me discuss the electron/positron pair version of EPR.

You have a source of entangled pairs. For each pair, you measure the spin of each particle. In practice, some fraction of the measurements will fail, because one detector or the other will fail to detect any particle at all. In the simplest way of handling these failures, we just ignore the results from rounds where only one particle is detected.

However, it might be that the failures are not completely random, but that for certain combinations of detector setting plus hidden variable, failures are more or less likely. We could reproduce the predictions of quantum mechanics by fiddling with the failure probability.
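Here is a toy numerical sketch of this point. It is not the model from the paper, and the particular rules are invented for illustration: give each pair a hidden angle λ, let each wing output the sign of cos(setting − λ), and let a detector fire with probability |cos(setting − λ)|, so that failures depend jointly on the local setting and the hidden variable. The correlation among the doubly-detected pairs then differs sharply from the full-ensemble correlation:

```python
import math
import random

def outcome(setting, lam):
    """Local +/-1 outcome from the local setting and the hidden angle only."""
    return 1 if math.cos(setting - lam) >= 0 else -1

def detected(setting, lam):
    """Detection succeeds with a probability that depends on BOTH the
    local setting and the hidden variable -- the 'unfair' ingredient."""
    return random.random() < abs(math.cos(setting - lam))

def correlations(n, a, b):
    """Return (full-ensemble correlation, post-selected correlation)."""
    full = kept = pairs = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        # the partner particle carries the anti-aligned hidden angle lam + pi
        prod = outcome(a, lam) * outcome(b, lam + math.pi)
        full += prod
        if detected(a, lam) and detected(b, lam + math.pi):
            kept += prod
            pairs += 1
    return full / n, kept / max(pairs, 1)
```

For a = 0 and b = π/4, the full-ensemble correlation comes out near −0.5 while the post-selected one is near −0.88: discarding the single-detection rounds has strengthened the apparent correlation. This toy model does not reproduce the QM predictions exactly; it only exhibits the mechanism by which setting-dependent failures bias post-selected statistics.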
 
  • #40
I'm not dealing with loopholes.
The polarization is not specified by computer C.
You asked: "How can the program on B ensure that the particle had polarization beta before the selection?"
This is given by the polarizer setting beta. See MA3 above.
Matching events occur for all photons 2 with polarization beta which would hit a polarizer on side B set to alpha + pi/2. Those photons 2 have a peer which would hit polarizer A set to alpha. That can easily be implemented in a computer system.
 
  • #41
emuc said:
I'm not dealing with loopholes.
The polarization is not specified by computer C.
You asked: "How can the program on B ensure that the particle had polarization beta before the selection?"
This is given by the polarizer setting beta. See MA3 above.
Matching events occur for all photons 2 with polarization beta which would hit a polarizer on side B set to alpha + pi/2. Those photons 2 have a peer which would hit polarizer A set to alpha. That can easily be implemented in a computer system.
Sorry, I can’t make any sense of what you are saying. Maybe you could just go through a few example rounds of EPR, and for each round, say what the hidden variable is for that round, what the detector settings are, and what the results are?
 
  • #42
stevendaryl said:
I can’t make any sense of what you are saying.
You're not the only one.

@emuc, you do not appear to be responding at all to the actual question @stevendaryl is asking. He is asking whether a set of computer programs constructed as described in his post #10 can reproduce the predictions of your model. That is a simple yes or no question which should have a simple yes or no answer.

If your model can produce predictions which violate the Bell inequalities, the answer to the above question should be "no". But if the answer is "no", then your model does not satisfy the assumptions of Bell's Theorem, so the existence of your model does not "refute" Bell's Theorem as you claim it does.

If, OTOH, you claim that your model does satisfy the assumptions of Bell's Theorem (which it would have to for your model's predictions to "refute" Bell's Theorem by violating the Bell inequalities), then the answer to the above question should be "yes"--and you should be able to describe to us how a set of computer programs constructed as described in post #10 can reproduce the predictions of your model, which you claim violate the Bell inequalities.

So which is it? Yes or no?
 
  • #43
PeterDonis said:
I thought it was assumed in the hidden variables, ##\lambda##. Those are supposed to contain whatever variables, other than the angles at which the two measurements are made, affect the measurement results.

Leaving the angles to be external parameters not determined by the theory is the "no superdeterminism" assumption.
No, there's a subtle difference. If you leave the angles as external parameters, the model has pre-determined values for all observables, including those that aren't measured. This is enough to enforce Bell's inequality, independently of whether the angles are actually determined or chosen freely.

However, you can relax this condition and require only those observables to be pre-determined that are actually measured. Only in this case is an additional "no superdeterminism" assumption necessary to enforce the inequality. I.e., contextual theories can in principle escape Bell's inequality if no additional assumptions are made ("no superdeterminism").

Demystifier said:
I bothered to study it in some detail a few months ago and even to discuss it with her. It turned out that her model is not local.
Did she agree with you? It seems like she still advocates her paper.

But anyway, there can't be any doubt that the "no superdeterminism" assumption is necessary. Even Bell had no problem openly admitting it, and people like Spekkens or Zeilinger agree. One can literally point to the equation in his paper where he assumes it (eq. 12 in "La nouvelle cuisine") and to his explanation that it is necessary, agreeing that his colleagues were right. If that can't convince you, I don't know what could.
 
  • #44
Nullstein said:
you can relax this condition and only require those observables to be pre-determined that are actually measured. It is only in this case, where an additional "no superdeterminism" assumption is necessary to enforce the inequality.
I'm not sure I understand: only requiring predetermined values for observables that are actually measured is superdeterminism, isn't it? You're basically fine-tuning the model so that it's impossible for any measurements to occur other than the ones that actually occur--i.e., you're predetermining which measurements occur.
 
  • #45
PeterDonis said:
You're not the only one.

@emuc, you do not appear to be responding at all to the actual question @stevendaryl is asking. He is asking whether a set of computer programs constructed as described in his post #10 can reproduce the predictions of your model. That is a simple yes or no question which should have a simple yes or no answer.

If your model can produce predictions which violate the Bell inequalities, the answer to the above question should be "no". But if the answer is "no", then your model does not satisfy the assumptions of Bell's Theorem, so the existence of your model does not "refute" Bell's Theorem as you claim it does.

If, OTOH, you claim that your model does satisfy the assumptions of Bell's Theorem (which it would have to for your model's predictions to "refute" Bell's Theorem by violating the Bell inequalities), then the answer to the above question should be "yes"--and you should be able to describe to us how a set of computer programs constructed as described in post #10 can reproduce the predictions of your model, which you claim violate the Bell inequalities.

So which is it? Yes or no?
The answer is no. Bell's theorem is refuted because his assumptions about which models are possible are incomplete. He considered only non-contextual models and did not take contextual models into account. With these incomplete assumptions he concluded that local models were impossible.
 
  • #46
stevendaryl said:
Sorry, I can’t make any sense of what you are saying. Maybe you could just go through a few example rounds of EPR, and for each round, say what the hidden variable is for that round, what the detector settings are, and what the results are?

I think that's asking a little too much now. You can calculate all of this yourself if you use the publication as a guide.
 
  • #47
I suspect that if classical physics predicted that there are finitely many prime numbers and QM predicted an infinite set of primes, then we'd be arguing about it. And the elementary proof that there are infinitely many primes would be "refuted" by some local realist or other.
 
  • #48
Nullstein said:
But anyway, there can't be any doubt that the "no superdeterminism" assumption is necessary.
We can easily construct a local superdeterministic hidden-variable theory that reproduces the QM predictions for EPR. In the spin-1/2 version:

Let ##\alpha_j## and ##\beta_j## be sequences of detector orientations. Let ##\theta_j## be the angle between ##\alpha_j## and ##\beta_j##. Let ##A_j## be a random sequence of ##\pm 1##. Let ##B_j## be a sequence of values chosen so that with probability ##\sin^2(\theta_j/2)## it is equal to ##A_j##, and with the complementary probability it is the negative of that.

Then we just let the hidden variable for the first particle be ##A_j## and the hidden variable for the second particle be ##B_j##. The result of each measurement is just the value of the hidden variable.

Then if the choices for the two detector settings just happen to be equal to the sequences ##\alpha## and ##\beta##, then the statistics will work out as predicted by QM. The hard part is knowing those sequences ahead of time.
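The construction above can be sketched in a few lines (with the simplification that every round uses one fixed pair of settings, so the "advance knowledge" of the setting sequences is just the fixed angle difference):

```python
import math
import random

def make_hidden_variables(alphas, betas):
    """Build the sequences A_j, B_j from advance knowledge of the
    detector-setting sequences alpha_j, beta_j, as described above."""
    a_vals, b_vals = [], []
    for alpha, beta in zip(alphas, betas):
        theta = alpha - beta
        a = random.choice((1, -1))          # A_j: random +/-1
        # B_j equals A_j with probability sin^2(theta/2), else -A_j
        b = a if random.random() < math.sin(theta / 2) ** 2 else -a
        a_vals.append(a)
        b_vals.append(b)
    return a_vals, b_vals
```

The correlation then comes out as ##\sin^2(\theta/2) - \cos^2(\theta/2) = -\cos\theta##, the QM singlet value; e.g. for ##\theta = 2\pi/3## the simulated correlation is close to +0.5. The conspiracy is entirely in the fact that the hidden variables were built from the setting sequences.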
 
  • #49
PeterDonis said:
I'm not sure I understand: only requiring predetermined values for observables that are actually measured is superdeterminism, isn't it? You're basically fine-tuning the model so that it's impossible for any measurements to occur other than the ones that actually occur--i.e., you're predetermining which measurements occur.
No, by that definition, Bohmian mechanics would also be superdeterministic. Just think of a Stern-Gerlach apparatus, for example. You can only align it along one axis and measure the spin along that axis. When the apparatus is aligned along that axis, it is impossible to measure the spin along any other axis. Now the question is: is the spin along the other axis nevertheless determined by the model? If yes, then the model is said to be non-contextual. If no, this just means that the result of the measurement is a composite property of the particle and the measurement apparatus, which is perfectly sensible.

But still, in an ordinary world, we would not expect violations of Bell's inequality by such a model. However, it is in principle possible to design the theory in such a way that the composite property (particle information + detector alignment) violates Bell's inequality, by making the detector alignment and the particle information depend on each other in a fine-tuned way. This is what has to be excluded, and this exclusion is only relevant if we are talking about a contextual model in the first place.
 
  • #50
emuc said:
The answer is no.
Ok. But then:

emuc said:
Bell's theorem is refuted because his assumptions about which models are possible are incomplete.
This is not what "refuted" means. "Refuted" means the theorem's conclusions do not follow from its assumptions. You are not saying that. So you are not refuting his theorem.
 
  • #51
The claim made in the title of this thread has been admitted by the OP to be false. Thread closed.
 
