A classical challenge to Bell's Theorem?

Summary
The discussion centers on the implications of Bell's Theorem and the nature of randomness in quantum mechanics (QM) versus classical systems. Participants explore a scenario where classical correlations replace quantum entanglement in a Bell-test setup, questioning whether classical sources can yield results consistent with Bell's inequalities. The maximum value achievable for the CHSH inequality is debated, with assertions that it remains +2 under classical conditions, while emphasizing the necessity of specific functions for accurate calculations. The conversation also touches on the fundamental nature of quantum events, suggesting that they may lack upstream causes, which complicates the understanding of measurement outcomes. Ultimately, the discussion highlights the complexities of reconciling classical and quantum interpretations in the context of Bell's Theorem.
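The classical CHSH bound of +2 mentioned in the summary can be checked mechanically. The sketch below is an editorial illustration (not from the thread): it enumerates every deterministic local assignment of ±1 outcomes to the two settings on each side and confirms the CHSH combination never exceeds 2.

```python
# Brute-force check of the classical CHSH bound: with deterministic local
# outcomes A(a), A(a') and B(b), B(b') each in {+1, -1}, the combination
# S = <ab> + <ab'> + <a'b> - <a'b'> can never exceed +2.
from itertools import product

best = max(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
           for A0, A1, B0, B1 in product([+1, -1], repeat=4))
print(best)  # 2
```

The algebraic reason is visible in the expression: S = A0(B0 + B1) + A1(B0 - B1), and one of the two brackets is always zero while the other is ±2.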
  • #151
DrChinese said:
By definition, a non-realistic dataset does NOT have 3 simultaneous values.
Does a non-realistic dataset have any values at all? :smile:
 
  • #152
billschnieder said:
Does a non-realistic dataset have any values at all? :smile:

How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other two map to actual observations. Now, where is yours, big talker? Howsa 'bout just the dataset.
 
  • #153
DrChinese said:
How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other 2 map to actual observations.

So I guess according to you this is also a non-realistic dataset:

a b c d
+ - + -
- * + -
* + + +
+ - * +

and this too:
a b c
+ - *
- + *
+ + *
+ - *

Is this one a realistic dataset?:
a b
+ -
- +
+ +
+ -

What about this one?:
a
+
-
+
+

Clearly, in your mind, you believe it is impossible for an experiment to produce a realistic dataset. So according to you, by definition, all experiments produce non-realistic datasets. I wonder then why you need Bell at all?

Now let us not waste the time of these fine folks reading this thread with such rubbish. We went through this exercise already right here: https://www.physicsforums.com/showthread.php?t=499002&page=4 and I presented several datasets on page 6. Anybody who is interested to see how nonsensical your challenge is can check this thread and see the datasets I presented and your bobbing and weaving.
 
  • #154
DrChinese said:
How is this:

a b c
+ - *
- * +
* + +
+ - *

Where * is undefined, and the other two map to actual observations. Now, where is yours, big talker? Howsa 'bout just the dataset.

This data set could have been generated by tossing two coins at a time, right?
The numbers of mismatches are:
nab = 2
nbc = 0
nac = 1

nbc + nac ≥ nab is violated. How do you explain the violation? Is coin tossing non-local?
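rlduncan's tallies can be verified directly. This small script (an editorial illustration, not from the thread) counts mismatches only over rows where both columns of a pair are defined; the '*' entries contribute nothing:

```python
# Re-count the mismatch tallies on DrChinese's dataset. '*' marks an
# undefined value; a pair contributes only when both of its entries are
# defined in the same row.
rows = [
    ('+', '-', '*'),
    ('-', '*', '+'),
    ('*', '+', '+'),
    ('+', '-', '*'),
]

def mismatches(i, j):
    """Rows where columns i and j are both defined and disagree."""
    return sum(1 for r in rows
               if '*' not in (r[i], r[j]) and r[i] != r[j])

nab, nbc, nac = mismatches(0, 1), mismatches(1, 2), mismatches(0, 2)
print(nab, nbc, nac)     # 2 0 1
print(nbc + nac >= nab)  # False: the three-column inequality fails
```

Note that each pair is counted over a *different* subset of rows, which is exactly why an inequality derived for rows carrying all three values simultaneously can fail here.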
 
  • #155
rlduncan said:
This data set could have been generated by tossing two coins at a time, right?
The numbers of mismatches are:
nab = 2
nbc = 0
nac = 1

nbc + nac ≥ nab is violated. How do you explain the violation? Is coin tossing non-local?

There are a lot of Bell inequalities. The one you used is not applicable in this case. I think you have discovered one of the points I am making. Bill often switches from one example to the other, throwing things around.

In my challenge, the Bell lower limit is 1/3 (matches). The quantum mechanical value is .25 - which is the cos^2(120 degrees). My example yields .25, which is fine because it is not realistic and so Bell does not apply.

What I am saying is that no realistic dataset will produce results below 1/3 once we have a suitably large sample. Bill or you can provide the dataset, I will select which 2 angles to choose from for each set of 3 values for a/b/c. That's the challenge.
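DrChinese's 1/3 lower limit can be checked exhaustively. The sketch below (an editorial illustration, not from the thread) enumerates every pre-assigned triple of ±1 values and shows that the match rate, averaged over the three possible pair choices, never falls below 1/3:

```python
# Exhaustive check of the "1/3 challenge": any realistic triple of +/-1
# values, averaged over the three pair choices (ab, bc, ca), matches at
# least 1/3 of the time -- whereas QM predicts cos^2(120 deg) = 0.25.
from itertools import product

def avg_match_rate(a, b, c):
    pairs = [(a, b), (b, c), (c, a)]
    return sum(x == y for x, y in pairs) / 3

rates = [avg_match_rate(*t) for t in product([+1, -1], repeat=3)]
print(min(rates))  # 0.333...: no triple can do better (i.e., lower)
```

The pigeonhole argument behind it: among three binary values at least one pair must agree, so at least one of the three pair choices is a match.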
 
  • #156
DrChinese said:
There are a lot of Bell inequalities. The one you used is not applicable in this case. I think you have discovered one of the points I am making. Bill often switches from one example to the other, throwing things around.

In my challenge, the Bell lower limit is 1/3 (matches). The quantum mechanical value is .25 - which is the cos^2(120 degrees). My example yields .25, which is fine because it is not realistic and so Bell does not apply.

What I am saying is that no realistic dataset will produce results below 1/3 once we have a suitably large sample. Bill or you can provide the dataset, I will select which 2 angles to choose from for each set of 3 values for a/b/c. That's the challenge.

I assume then that you hand-picked your data set. And you may be correct that the inequality I chose to violate is not applicable in one sense. However, when all three data pieces (a, b, c) are used, then I believe that no matter how you pick the data pairs or write the inequality, there can be no violation; it is a mathematical truth. It is equivalent to the triangle inequality, where the sum of any two sides is greater than the third. Am I wrong on this point?
 
  • #157
billschnieder said:
EXPERIMENTAL:
We have 3 coins labelled "a", "b", "c", one of which is inside a special box. Only two of them can be outside the box at any given time, because you need to insert a coin in order to release another. So we decide to perform the experiment by tossing pairs of coins, each pair a very large number of times. In the first run we toss "a" and "b" a large number of times, in the second we toss "a" and "c" a large number of times, and in the third we toss "b" and "c". Even though the data appears random, we then calculate <ab>, <ac> and <bc>, substitute into our equation, and find that the inequality is violated! We are baffled: does this mean there is non-local causality involved? For example we find that <ab> = -1, <ac> = -1 and <bc> = -1. Therefore |-1 - 1| + 1 <= 1, or 3 <= 1, which violates the inequality. How can this be possible? Does this mean there is spooky action at a distance happening?

Bill, I've been trying to understand your coins example. Is this analogous to measuring a pair of entangled photons at one of three previously agreed angles? So tossing coins "a" and "b" means Alice measures her photon at angle "a", while Bob measures his photon at angle "b"?
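The point of the boxed-coin story can be made concrete. When <ab>, <ac> and <bc> come from three *separate* runs, no single row ever carries all three values, and the three-correlation inequality |<ab> - <ac>| <= 1 + <bc> loses its force. A toy sketch (editorial, with correlation values chosen for illustration rather than billschnieder's own numbers):

```python
# Three separate pairwise runs, each with its own deterministic rule.
# No row carries a, b and c simultaneously, so Bell's derivation of
# |<ab> - <ac>| <= 1 + <bc> does not apply -- and indeed it fails here.
def E(pairs):
    return sum(x * y for x, y in pairs) / len(pairs)

N = 1000
run_ab = [(+1, -1)] * N   # a and b always opposite: <ab> = -1
run_ac = [(+1, +1)] * N   # a and c always equal:    <ac> = +1
run_bc = [(+1, -1)] * N   # b and c always opposite: <bc> = -1

E_ab, E_ac, E_bc = E(run_ab), E(run_ac), E(run_bc)
print(abs(E_ab - E_ac), 1 + E_bc)  # 2.0 0.0 -> the inequality fails
```

Nothing non-local is involved; the "violation" arises only because the three correlations were harvested from three disjoint sets of rows.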
 
  • #158
gill1109 said:
Quick response to Gordon. You said you could get half-way to the desired correlations, easily. I said "exactly", because half-way does not violate CHSH. Sorry, I have not found out exactly what you mean by Y, W, and I don't know what you mean by the classical OP experiment. My discussion was aimed at Aspect, done more recently, better still, by Weihs.

...

gill1109, here's the point that I was making; and why I see your "exactly" as missing the point:

1. We can run an experiment (i.e., W; the classical experiment defined in the OP; replacing Aspect's high-tech source with a low-cost classical one) AND obtain exactly half of the Aspect correlation over every (a, b) setting.

2. So why (then) should it be surprising that Aspect's high-tech source (in his experiment; identified here as Y) delivers a higher correlation? Is it not to be expected?

3. Is it not to be expected: That an expensive high-tech source of highly correlated particles (in the singlet-state) should out-perform a low-cost classical source whose particles are hardly correlated at all!?

4. If you want to say that "the surprise" relates to the breaching of the CHSH inequality, that (I suggest) we should happily discuss under another thread.

....

PS: The designations W, X, Y, Z are short-cut specifications of experimental conditions:

W (the classical OP experiment) is Y [= Aspect (2004)] with the source replaced by a classical one (the particles pair-wise correlated via identical linear-polarisations).

X (a classical experiment with spin-half particles) is Z [= EPRB/Bell (1964)] with the source replaced by a classical one (the particles pair-wise correlated via antiparallel spins).

Y = Aspect (2004).

Z = EPRB/Bell (1964).

Hope that helps.

...

NB: Do you see some good reason to replace Aspect (2004) here with Weihs? The questions here relate to some straight-forward classical analyses, with Aspect (2004) nicely explanatory of the quantum situation and readily available on-line at arxiv.org.

With best regards,

GW
 
  • #159
Usually we discuss hypothetical experiments where timing is fixed. Like: every second we send off two photons. They may or may not get measured at the measurement stations. Detection efficiency is then usually defined in terms of the proportion of photons lost in either wing of the experiment.

In real experiments, the times of the departure of the photons and times they are measured are not fixed externally. Photons leave spontaneously and get measured at times which are not controlled by us. No, the measurement process itself generates times of events in both wings of the experiment. We use a "coincidence window" to decide which events are to be thought of as belonging together.

This opens a new loophole, a bit different from and in fact potentially more harmful than the detector efficiency loophole. If a "photon" arrives at a detector with a plan in its mind of what setting it wants to see and what outcome it will generate, cleverly correlated with the plan of its partner in the other wing of the experiment, then this photon can arrange to arrive a bit earlier (i.e. the measurement process is faster) if it doesn't like the setting it sees. At the same time, its partner in the other wing of the experiment arranges to arrive a bit later (i.e. its measurement process is slower) if it doesn't like the setting it sees. If they both see "wrong" settings, the time interval between their arrivals is extended so much that they no longer count as a pair in the statistics.

All the photons get measured, detector efficiency is 100%, but many events are unpaired.

I wrote about this with Jan-Ake Larsson some years ago:

arXiv:quant-ph/0312035

Bell's inequality and the coincidence-time loophole
Jan-Ake Larsson, Richard Gill

This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Europhysics Letters, vol 67, pp. 707-713 (2004)
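The timing mechanism described above can be caricatured in a few lines. This is an editorial toy illustration of the selection effect only, not the Larsson-Gill model: each particle carries a hidden preferred setting, is delayed when it meets the other setting, and pairs whose delays differ fall outside the coincidence window.

```python
# Toy illustration of the coincidence-time loophole: detection is delayed
# when the local setting conflicts with the particle's hidden "plan", so
# some pairs drift outside the coincidence window and drop out of the
# paired statistics even though every photon is detected.
import random

WINDOW = 1.0   # coincidence window (arbitrary time units)
DELAY = 2.0    # extra delay for a "disliked" setting

def arrival_time(setting, liked_setting):
    return 0.0 if setting == liked_setting else DELAY

rng = random.Random(0)
kept = total = 0
for _ in range(10_000):
    liked = rng.choice('AB')                       # shared hidden preference
    sA, sB = rng.choice('AB'), rng.choice('AB')    # independent settings
    tA = arrival_time(sA, liked)
    tB = arrival_time(sB, liked)
    total += 1
    if abs(tA - tB) <= WINDOW:
        kept += 1

print(kept / total)  # ~0.5: detection efficiency is 100%, yet half the events are unpaired
```

The kept subsample is selected by the settings themselves, which is why the usual Bell bound has to be tightened for coincidence-window data.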
 
  • #160
gill1109 said:
Usually we discuss hypothetical experiments where timing is fixed. Like: every second we send off two photons. They may or may not get measured at the measurement stations. Detection efficiency is then usually defined in terms of the proportion of photons lost in either wing of the experiment.

In real experiments, the times of the departure of the photons and times they are measured are not fixed externally. Photons leave spontaneously and get measured at times which are not controlled by us. No, the measurement process itself generates times of events in both wings of the experiment. We use a "coincidence window" to decide which events are to be thought of as belonging together.

This opens a new loophole, a bit different from and in fact potentially more harmful than the detector efficiency loophole. If a "photon" arrives at a detector with a plan in its mind of what setting it wants to see and what outcome it will generate, cleverly correlated with the plan of its partner in the other wing of the experiment, then this photon can arrange to arrive a bit earlier (i.e. the measurement process is faster) if it doesn't like the setting it sees. At the same time, its partner in the other wing of the experiment arranges to arrive a bit later (i.e. its measurement process is slower) if it doesn't like the setting it sees. If they both see "wrong" settings, the time interval between their arrivals is extended so much that they no longer count as a pair in the statistics.

All the photons get measured, detector efficiency is 100%, but many events are unpaired.

I wrote about this with Jan-Ake Larsson some years ago:

arXiv:quant-ph/0312035

Bell's inequality and the coincidence-time loophole
Jan-Ake Larsson, Richard Gill

This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Europhysics Letters, vol 67, pp. 707-713 (2004)


gill1109, thanks for this. However, I see nothing here that relates to anything that I've said or implied. Recall that we are discussing idealised experiments, like Bell (1964). So questions of detector efficiencies, unpaired events, loss of pairs, coincidence timing, coincidence counting, etc., do not arise: for there is neither wish nor need here to exploit any loophole.

GW
 
  • #161
Thanks GW

If indeed the experiment is a perfect idealized experiment ... as in Bell's "Bertlmann's socks" paper then there is no way to beat CHSH in a local realistic way. Bell's 1964 paper is not about experiments, whether idealized and/or perfect or not. There are very good reasons why Bell moved from his initial inequality to CHSH and why he rather carefully spelt out the details of an idealized CHSH-type experiment in his later work.
 
  • #162
gill1109 said:
Thanks GW

If indeed the experiment is a perfect idealized experiment ... as in Bell's "Bertlmann's socks" paper then there is no way to beat CHSH in a local realistic way. Bell's 1964 paper is not about experiments, whether idealized and/or perfect or not. There are very good reasons why Bell moved from his initial inequality to CHSH and why he rather carefully spelt out the details of an idealized CHSH-type experiment in his later work.

I took Bell (1964) to be about (idealised) EPR-Bohm (Bohm 1951), as cited in Bohm-Aharonov (1957). The result that Bell aims for [(his (3)] is the EPR-Bohm result E(A, B) -- Bell's P(a, b) -- = -a.b.

As suggested above, discussion of CHSH warrants another thread, imho.
 
  • #163
GW: the point of CHSH is that it gives us an easy way to see why local realist models can't generate E(A,B)=-a.b without recourse to trickery.
 
  • #164
gill1109 said:
GW: the point of CHSH is that it gives us an easy way to see why local realist models can't generate E(A,B)=-a.b without recourse to trickery.

But if E(A,B) is calculated in a local realistic manner and gives -a.b, the way Gordon has done, and Joy Christian has done, and De Raedt has done, and Kracklauer, etc., then there has to be something wrong with your claim that it can't. It is up to you, then, to point out the trickery. CHSH is therefore a red herring for this particular discussion.
 
  • #165
Gordon has not supplied anything yet.
 
  • #166
DrChinese said:
Gordon has not supplied anything yet.

See post #102.
 
  • #167
billschnieder said:
See post #102.

Anyone can write a result. It is meaningless. His model does not produce this result. I thought we settled that.
 
  • #168
billschnieder said:
But if E(A,B) is calculated in a local realistic manner and gives -a.b, the way Gordon has done, and Joy Christian has done, and De Raedt has done, and Kracklauer, etc., then there has to be something wrong with your claim that it can't. It is up to you, then, to point out the trickery. CHSH is therefore a red herring for this particular discussion.

Christian claims to have done this in the reference provided below, but at this point I cannot confirm his claim. (I am discussing the matter with him.) De Raedt et al created a computer simulation which violates a Bell inequality (winning the DrC challenge in the process) but still fails to violate Bell's Theorem (since it no longer matches the predictions of QM).

http://arxiv.org/abs/0806.3078
 
  • #169
DrChinese said:
Gordon has not supplied anything yet.
To add to that: also Joy Christian has not really done so. It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess. As for the solutions of the remaining ones, those are not of the kind that Gordon is after (post #122).
 
  • #170
DrChinese said:
Christian claims to have done this in the reference provided below, but at this point I cannot confirm his claim. (I am discussing the matter with him.)
And your inability to confirm his claim is relevant in what way?
De Raedt et al created a computer simulation which violates a Bell Inequality (winning the DrC challenge in the process)
Puhleese :smile:! De Raedt et al will laugh at your so called "DrC Challenge".

but still failing to violate Bell's Theorem (since it no longer matches the predictions of QM).

Huh? It matched QM before but no longer does so? What has changed since December 2011?
http://arxiv.org/pdf/1112.2629v1
Einstein-Podolsky-Rosen-Bohm laboratory experiments: Data analysis and simulation
H. De Raedt, K. Michielsen, F. Jin
(Submitted on 12 Dec 2011)

Data produced by laboratory Einstein-Podolsky-Rosen-Bohm (EPRB) experiments is tested against the hypothesis that the statistics of this data is given by quantum theory of this thought experiment. Statistical evidence is presented that the experimental data, while violating Bell inequalities, does not support this hypothesis. It is shown that an event-based simulation model, providing a cause-and-effect description of real EPRB experiments at a level of detail which is not covered by quantum theory, reproduces the results of quantum theory of this thought experiment, indicating that there is no fundamental obstacle for a real EPRB experiment to produce data that can be described by quantum theory.

http://arxiv.org/pdf/0712.3781v2
Event-by-event simulation of quantum phenomena: Application to Einstein-Podolsky-Rosen-Bohm experiments
H. De Raedt, K. De Raedt, K. Michielsen, K. Keimpema, S. Miyashita
(Submitted on 21 Dec 2007 (v1), last revised 25 Dec 2007 (this version, v2))

We review the data gathering and analysis procedure used in real Einstein-Podolsky-Rosen-Bohm experiments with photons and we illustrate the procedure by analyzing experimental data. Based on this analysis, we construct event-based computer simulation models in which every essential element in the experiment has a counterpart. The data is analyzed by counting single-particle events and two-particle coincidences, using the same procedure as in experiments. The simulation models strictly satisfy Einstein's criteria of local causality, do not rely on any concept of quantum theory or probability theory, and reproduce all results of quantum theory for a quantum system of two $S=1/2$ particles. We present a rigorous analytical treatment of these models and show that they may yield results that are in exact agreement with quantum theory. The apparent conflict with the folklore on Bell's theorem, stating that such models are not supposed to exist, is resolved. Finally, starting from the principles of probable inference, we derive the probability distributions of quantum theory of the Einstein-Podolsky-Rosen-Bohm experiment without invoking concepts of quantum theory.
 
  • #171
harrylin said:
To add to that: also Joy Christian has not really done so.
You do not know that so why do you state it as though you do?

It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess. As for the solutions of the remaining ones, those are not of the kind that Gordon is after (post #122).
It is true that many people do not believe Joy Christian, but that is not a reason to state their opinion as fact, nor does it mean he is wrong. I recommend you follow the discussion on FQXi, where he explains his program in more detail, and read his article in which he responds to gill1109's criticisms.

There is also a faq at FQXi (http://fqxi.org/data/forum-attachments/JoyChristian_FAQ.pdf)
 
  • #172
harrylin said:
To add to that: also Joy Christian has not really done so. It's now concluded by almost everyone that he simply messed up and tried in vain to undo the mess.

I am trying to sort through Joy's thinking at this point. His above-referenced paper asserts that CHSH is flat wrong and proposes a macroscopic (classical) test to prove it. I really don't get where he is headed with it (to be honest), but I will keep at it until I resolve it one way or another in my own mind.
 
  • #173
billschnieder said:
De Raedt et al will laugh at your so called "DrC Challenge".

That's an interesting speculation on your part*.

However, in fact I worked closely with Kristel (and Hans) on theirs for about a month, and they were kind enough to devote substantial time and effort to the process. In the end we did not disagree on the operation of their simulation. It is fully local and realistic. And if you look at the spreadsheet, you will see for yourself what happens in their model. And it does not match QM for the full universe, thereby respecting Bell.

As to Joy Christian: I am trying to put together a similar challenge with him; not sure if it will be possible, because he does not seem open to a computer simulation. But I am hopeful I can either change his mind on that point or alternatively conclude exactly why his model is not realistic.

*And typically wrong-headed. :smile:
 
  • #174
DrChinese said:
And it does not match QM for the full universe, thereby respecting Bell.
But this is a fundamental misunderstanding on your part. There is no such thing as the full universe in QM. QM gives you correlations for the experimental outcomes, the same thing they calculated. If you want to claim that the outcome is the full universe, then you cannot use a different standard for their simulation; you must also look only at the outcome.

However, in fact I worked closely with Kristel (and Hans) on theirs for about a month, and they were kind enough to devote substantial time and effort to the process. In the end we did not disagree on the operation of their simulation. It is fully local and realistic.
BTW I do not doubt that Kristel and Hans might have spent a lot of their valuable time with you. I do doubt, though, that that time was spent on, let alone winning, the "DrC challenge."
 
  • #175
billschnieder said:
But this is a fundamental misunderstanding on your part. There is no such thing as the full universe in QM. QM gives you correlations for the experimental outcomes, the same thing they calculated. If you want to claim that the outcome is the full universe, then you cannot use a different standard for their simulation; you must also look only at the outcome.

If you look at the simulation, you can vary the size of the window. This shows only the outcomes that are "visible". Since the model is realistic, we can also display the full universe (which of course never matches the QM expectation, respecting Bell).

For the visible outcomes: As you increase window size, the result clearly deviates from the QM predictions. So it is up to you to decide where to peg it. If you take a small window where pairs are clearly acting entangled*, then you see results that (more or less) match the QM expectation. But if you widen the window so there is more ambiguity in what should be called entangled*, you clearly approach the straight line boundary. And the model no longer matches the QM expectation or experiment. So your conclusion is somewhat dependent on your choice of cutoff.

I will try to take a couple of screenshots in a few days so you can see the effect. That might help everyone see what happens as k (window size) is varied. Clearly, there is nothing stopping you from looking at 100% of the pairs (the full universe), and that definitely does not match QM or experiment. So it looks fairly good as long as you pick settings that are favorable. But as you vary those settings, it does not seem to reproduce the dynamics of an actual experiment.

Again, some of this is in the eye of the beholder.

*This being a function of the % of perfect correlations. Anything which does not perfectly correlate when expected should be ignored as not qualifying. I did not eliminate those in my model nor did the De Raedt team. Again, there is no exact point of acceptance or rejection.
 
  • #176
billschnieder said:
BTW I do not doubt that Kristel and Hans might have spent a lot of their valuable time with you. I do doubt, though, that that time was spent on, let alone winning, the "DrC challenge."

No, I did not write to them and ask them to take the DrChinese challenge. :smile: And there is no real winning or losing of the challenge anyway. The point of the exercise is to force everyone to a point where we strip away the words and focus on key elements that we can agree on.

For example, any model that cannot make counterfactual predictions should not, in my opinion, be called realistic. But if YOU define it as realistic, and we agree there are no counterfactual predictions, then we have accomplished something. Then it is up to each person to label it as they see fit.

I was interested for the purpose of understanding if or how any algorithm could even begin to accomplish violation of a Bell inequality.
 
  • #177
Gordon Watson said:
GW Statement 1: I derive the results for both W (the classical OP experiment) and Y (the well-known Aspect (2004) experiment) in a classical way.
Delta Kilo said:
Delta Kilo Response 1: No, you don't. You did not provide a classical derivation for Y. Instead you just "borrowed" the result from the Aspect paper. Aspect makes it very clear that eq. (6) was derived using QM rather than a classical model.

..
NB: I have edited the quotations from "past-posts" here for clarity in this post. No meanings have been changed; no new data added. GW
..

Dear Delta Kilo, there was no "borrowing" from Aspect (2004). I simply "took it" as an example. It was the ideal example because it's on-line AND because of the very point that you make: "Aspect makes it very clear eq(6) was derived using QM." Moreover, my classical analysis pre-dates the original Aspect (2000).

Further: There is no implication anywhere in my writings that Aspect or Bell used a classical model to derive the QM result.

Further: Contrary to your bald claim, "No, you don't," I DO derive the results for both W (the classical OP experiment) and Y (the well-known Aspect (2004) experiment) in a classical way!

This was explained in my reference to Malus, which I believe is central to addressing some of your concerns: https://www.physicsforums.com/showpost.php?p=3879566&postcount=112

Also see posts leading to, and including: https://www.physicsforums.com/showpost.php?p=3874480&postcount=102

With more to follow on the above matters, and as required,

GW

Gordon Watson said:
GW Statement 2: Analytically, via my way: Going the whole-way (100%, say, with Y) is as easy as going half-way (50%, with W).

Delta Kilo said:
Delta Kilo Response 2: No, it isn't. There is a big difference: one satisfies Bell's inequality, another violates it.

Dear Delta Kilo, you again make a bald statement that is contrary to facts: "No, it isn't."

Since I did the ANALYSES, I have some right to assess which ANALYSIS was easier for me. I assess Y to be ANALYTICALLY easier than W: because W requires some integration, whereas the Y result falls out without it (i.e., from observation)!

For example (using Bill's generalised short-cut: https://www.physicsforums.com/showpost.php?p=3878616&postcount=110; recalling that V is the general conditional for the style of experiments that we are analysing):

E(AB)_{Aspect(2004)} = E(AB)_Y = \int d\lambda \rho(\lambda) (AB)_Y = \int d\lambda \rho(\lambda) A(\textbf{a}, \lambda) B(\textbf{b}, \lambda)_Y

= \int d\lambda \rho(\lambda) [2 \cdot P(B^+|Y,\,A^+) - 1] = \int d\lambda \rho(\lambda) [2\cos^2(\textbf{a}, \textbf{b}) - 1]

= \cos[2(\textbf{a}, \textbf{b})]. QED!


This result may be compared to the classical OP experiment W, Y's classical equivalent (as previously defined) with some integration not included to facilitate comparisons:

E(AB)_{OP} = E(AB)_W = \int d\phi \rho(\phi) (AB)_W = \int d\phi \rho(\phi) A(\textbf{a}, \phi) B(\textbf{b}, \phi)_W

= \int d\phi \rho(\phi) [2 \cdot P(B^+|W,\,A^+) - 1] = \int d\phi \rho(\phi) [(1/2)\cos[2(\textbf{a}, \textbf{b})] + 1 - 1]

= (1/2)\cos[2(\textbf{a}, \textbf{b})]. QED!
...


E(AB)_{Bell (1964)} = E(AB)_Z = \int d\lambda \rho(\lambda) (AB)_Z = \int d\lambda \rho(\lambda) A(\textbf{a}, \lambda) B(\textbf{b}, \lambda')_Z = - \int d\lambda \rho(\lambda) A(\textbf{a}, \lambda) B(\textbf{b}, \lambda)_Z

= - \int d\lambda \rho(\lambda) [2 \cdot P(B^+|Z,\,A^+) - 1] = - \int d\lambda \rho(\lambda) [2\cos^2[(\textbf{a}, \textbf{b})/2] - 1]

= - \cos(\textbf{a}, \textbf{b}) = - \textbf{a.b}. QED!

PS: In Z, λ' = -λ (from λ + λ' = 0, the pair-wise conservation of angular momentum). That is: the minus-sign is physically significant in the spin-half EPRB case (s = 1/2) because λ and λ' are pair-wise detectably anti-parallel. It is not significant in the spin-one Aspect case (s = 1) because λ and λ' are pair-wise undetectably anti-parallel.

These differences are made clear when the classical challenge in the OP is addressed. For it is then seen that, for ALL the subject experiments W, X, Y and Z, Bell's A and B contain cos[2s(a, x)] or cos[2s(b, x)], respectively: where x is the relevant hidden-variable \phi or λ, and s is the intrinsic spin.

This EPRB-Bell (1964) result may be compared to classical experiment X, Z's classical equivalent (as previously defined) with some integration not included to facilitate comparisons:

E(AB)_X = \int d\phi \rho(\phi) (AB)_X = \int d\phi \rho(\phi) A(\textbf{a}, \phi) B(\textbf{b}, \phi')_X = - \int d\phi \rho(\phi) A(\textbf{a}, \phi) B(\textbf{b}, \phi)_X

= - \int d\phi \rho(\phi) [2 \cdot P(B^+|X,\,A^+) - 1] = - \int d\phi \rho(\phi) [(1/2)\cos(\textbf{a}, \textbf{b}) + 1 - 1]

= - (1/2)\cos(\textbf{a}, \textbf{b}) = - \textbf{a.b}/2. QED!
...

Note in passing:

1. Einstein-locality is maintained through every step of the analysis.

2. The simple classical sources in W and X, delivering particles with pair-wise minimal correlations (with 1 part common orientation over 2-space) deliver one-half the correlation of the quantum sources in Y and Z. Yet these latter sources deliver particles which are pair-wise MUCH MORE highly correlated (over an infinity of common orientations in 3-space).
...

With more to follow, and as required,

GW
..
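GW's classical W result can be checked numerically. The sketch below is an editorial Monte Carlo (not GW's analysis): pairs share one random polarisation φ, and each station registers +1 with the Malus-law probability cos²(setting − φ); the resulting correlation comes out at half the quantum value, as the derivation above claims.

```python
# Monte Carlo check of the classical-source result E(AB)_W = (1/2)cos[2(a-b)]:
# both photons share one random linear polarisation phi, and each analyser
# fires +1 independently with Malus-law probability cos^2(setting - phi).
import math
import random

def classical_E(a, b, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        phi = rng.uniform(0.0, math.pi)       # shared hidden polarisation
        A = +1 if rng.random() < math.cos(a - phi) ** 2 else -1
        B = +1 if rng.random() < math.cos(b - phi) ** 2 else -1
        total += A * B
    return total / trials

a, b = 0.0, math.pi / 8                       # settings 22.5 degrees apart
estimate = classical_E(a, b)
target = 0.5 * math.cos(2 * (a - b))          # the half-strength prediction
print(estimate, target)                       # the estimate lands near target
```

Analytically: E(AB) = E[cos 2(a−φ) cos 2(b−φ)] = (1/2)cos 2(a−b), since the cross term cos(2a+2b−4φ) averages to zero over uniform φ. Every step is Einstein-local: each outcome depends only on the local setting and the shared φ.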
 
  • #178
DrChinese said:
De Raedt et al created a computer simulation which violates a Bell inequality (winning the DrC challenge in the process) but still fails to violate Bell's Theorem (since it no longer matches the predictions of QM).
DrChinese, this seems to be interpreting Bell's theorem too broadly. Bell's theorem says nothing about whether local hidden variable theories can reproduce, say, the energy spectrum of the hydrogen atom; it only discusses whether theories can reproduce the specific correlations QM predicts for entangled particles. If a theory were to break the Bell inequality fair and square, Bell's theorem would put no further barriers to such a theory matching any other predictions of QM. So in judging a "challenge" to Bell's theorem, it seems to me that we should only focus on whether and how the model violates the Bell inequality. And in the case of de Raedt, all you need to say is that it exploits one of the experimental loopholes of currently practical Bell tests, and is thus not a valid counterexample to Bell's theorem, which after all is a rigorously proven theoretical result.
 
  • #179
DrChinese said:
If you look at the simulation, you can vary the size of the window. This shows only the outcomes that are "visible". Since the model is realistic, we can also display the full universe (which of course never matches the QM expectation, respecting Bell).
This makes no sense. There is no such thing as full universe. Only the outcomes matter for Bell or QM.
 
  • #180
billschnieder said:
This makes no sense. There is no such thing as full universe. Only the outcomes matter for Bell or QM.
The "full universe" issue you're talking about concerns the existence of counterfactual outcomes. But the "full universe" issue that DrChinese is discussing in regard to de Raedt's model is that it exploits the fair sampling loophole: the model only reproduces the predictions of QM if we take a small coincidental detection window, but if we had better experiments that would detect ALL entangled pairs emitted by the source, then de Raedt's model would be in stark disagreement with the predictions of QM.
 
