A classical challenge to Bell's Theorem?

The discussion centers on the implications of Bell's Theorem and the nature of randomness in quantum mechanics (QM) versus classical systems. Participants explore a scenario where classical correlations replace quantum entanglement in a Bell-test setup, questioning whether classical sources can yield results consistent with Bell's inequalities. The maximum value achievable for the CHSH inequality is debated, with assertions that it remains +2 under classical conditions, while emphasizing the necessity of specific functions for accurate calculations. The conversation also touches on the fundamental nature of quantum events, suggesting that they may lack upstream causes, which complicates the understanding of measurement outcomes. Ultimately, the discussion highlights the complexities of reconciling classical and quantum interpretations in the context of Bell's Theorem.
  • #121
Gordon Watson said:
..
If our posts have crossed, please reconsider my requests. Thanks.

I can respond on this point:

Bell says, in essence: No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.

So my position is simply that either there are no hidden variables, or locality is violated, or both. I think that most physicists would agree with both of these statements, which I also agree with:

a) We live in an observer dependent world, i.e. there is no observer-independent reality.
b) We live in a quantum non-local world, i.e. there are physical dependencies between some pairs of events which defy the normal limits of c. (Accepting this does not make you a Bohmian though.)
 
  • #122
DrChinese said:
Gordon, it is not appropriate to take a classical example, run a few formulas and say "Voila! Bell is wrong." I have already provided the math to show you your example is classical and does not violate Bell. I believe zonde and Mark M both showed the same thing. Sadly, billschnieder is using you in some strange way and is egging you on. I do not know his reason, but again I implore you to go back to ground zero in learning about the math of Bell.

The idea of this thread - see title - is absurd. You have never done anything so far to show otherwise despite the time I have taken to assist you. Which is why I don't think further discussion here is warranted.

And I respect de Raedt and Michielsen too much to recommend you send your classical example to them so they can analyze your ground-breaking work. So perhaps you might send it to Joy Christian instead. :biggrin: [Emphasis added by GW.]

I'm happy to defend the thread title. It seems that I should have been clearer about the challenge, for it was meant to be exactly what you now ask for: a return to basics!

To that end now: Some (sources not recalled) say that Bell presented no theorem. I take a different view: Bell's Theorem is one of the most succinct in history. Beginning with a neat protocol (my term), he concludes with an impossibility (de Broglie's dictum notwithstanding).

So the challenge (?) was there for me to learn how you (and others) applied Bell's protocol to a clearly classical (and therefore Einstein-local) experiment: for I had already done the sums (contrary to earlier counter-claims that I had not). As a supporter of Einstein-locality (EL), it follows for me that Bell's theorem reflects on Bell's realism assumption: NOT on EL.

The challenge (aimed at Herbert's Proof initially) also questions "classicist-types" like de Raedt, etc., whose cases thus far (it might surprise you) I do not support. So the chosen title was equally a challenge (?) to all who study Bell.

PS: I doubt BillS will ever egg me on or stir me up as much as you have in the past. Could it be unrequited love or jealousy on your part? :!) I love your being back here: AND I love Bill's entering into the maths here, leaving me personally with some formatting challenges to address! So please don't fret: as well as a good sharer I'm also a great lover. :smile:
 
  • #123
DrChinese said:
I can respond on this point:

Bell says, in essence: No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.

So my position is simply that either there are no hidden variables, or locality is violated, or both. I think that most physicists would agree with both of these statements, which I also agree with:

a) We live in an observer dependent world, i.e. there is no observer-independent reality.
b) We live in a quantum non-local world, i.e. there are physical dependencies between some pairs of events which defy the normal limits of c. (Accepting this does not make you a Bohmian though.) [Emphasis added by GW.]
..

DrC, Many thanks. Defending Einstein-locality, I'll address the emphasised piece here. More anon.

Bell (1964) states (in essence): "In a theory in which parameters are added to QM to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another such device, however remote. ..."

Do I misrepresent Bell (above), or the challenge itself (this thread, as it has developed), if I write:

1. In the challenge, as it emerges here: Parameters are added to QM to determine the results of individual measurements; NO changes are made to the statistical predictions.

2. In the experiment W (wholly classical): The added parameters are the pair-wise identical linear-polarisations ∅. ∅ = ∅'.

3. In experiment Y (Aspect 2004): The added parameters are the pair-wise conserved total angular momenta λ. λ + λ' = 0.

4. In experiment Z (EPRB, Bell 1964): If it were given here, the added parameters would be the pair-wise conserved total angular momenta λ. λ + λ' = 0.

5. And so on, through GHZ, GHSZ, CRB, etc.

In every case, then, parameters ARE ADDED TO QM to determine the results of individual measurements; NO changes are made to the statistical predictions.

6. The deeper point of the challenge is this: Accepting the validity of Einstein-locality, and accepting that Bell-style experiments yield definite dichotomic outcomes (Bell's A and B, represented by +1 and -1), we classically proceed to physically significant conclusions. NO changes are made to the statistical predictions.

7. To put it another way, defending Einstein-locality: Where is the mystery in Aspect (2004) when I can deliver exactly half its correlation [over every (a, b) combination] with a simple classical W-source: made in an hour, for a few dollars?

8. Is it not clear (to be expected, and without mystery) that Aspect's expensive singlet-source should deliver a higher correlation than my few dollars?

Leaving no doubts about the validity of Einstein-locality?


GW
..
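One way to see where the "exactly half" in GW's point 7 above can come from, under an assumption of my own (possibly not GW's intended model): suppose the W-source emits photon pairs with identical, uniformly distributed linear polarisation φ = φ', and each polariser registers +1 with Malus-law probability cos²(φ − setting) and −1 otherwise. Then

$$
E_W(a,b)=\int_0^{\pi}\frac{d\varphi}{\pi}\,\bigl[2\cos^2(\varphi-a)-1\bigr]\bigl[2\cos^2(\varphi-b)-1\bigr]
=\int_0^{\pi}\frac{d\varphi}{\pi}\,\cos 2(\varphi-a)\,\cos 2(\varphi-b)
=\tfrac12\cos 2(a-b),
$$

exactly half of the quantum prediction cos 2(a − b) for the photon pairs of Aspect (2004). A correlation of this form caps the CHSH sum at √2, comfortably inside the classical bound of 2, which is the point gill1109 makes below when he notes that "half-way" satisfies CHSH.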
 
  • #124
bohm2 said:
Given the recent PBR theorem (see 2 of many links below), isn't non-locality implied for any "realistic" model by the PBR theorem itself, irrespective of Bell's? For instance Leifer writes:

(A). The quantum state cannot be interpreted statistically
http://lanl.arxiv.org/abs/1111.3328

(B). Quantum Times Article on the PBR Theorem
http://mattleifer.info/2012/02/26/quantum-times-article-on-the-pbr-theorem/

bohm2, thanks for these. I trust Bill will address them in typical detail, my focus being elsewhere at the moment. But I wonder if your "Given" follows correctly? (Me finding no evidence to advance non-locality over Einstein-locality.)

From (A) abstract: "If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality."

I expect quantum predictions to be confirmed. So it seems to me that their conclusion must follow. Which (to me) looks like a move toward the "Classical Quantum Mechanics (CQM)" that I'm interested in.

Question: Am I missing something, in such a view?

Re (B): Note by its author, via your link: "Due to the appearance of this paper [link not given here, GW], I would weaken some of the statements in this article if I were writing it again. The results of the paper imply that the factorization assumption is essential to obtain the PBR result, so this is an additional assumption that needs to be made if you want to prove things like Bell’s theorem directly from psi-ontology rather than using the traditional approach. When I wrote the article, I was optimistic that a proof of the PBR theorem that does not require factorization could be found, in which case teaching PBR first and then deriving other results like Bell as a consequence would have been an attractive pedagogical option. However, due to the necessity for stronger assumptions, I no longer think this."

Comment: As I see it, CQM operates with classical factoring.

Thus, overall (and at the moment), I'm not sure PBR impacts here in favour of non-locality?

With thanks again,

GW
 
  • #125
Gordon Watson said:
Re (B): Note by its author, via your link: "Due to the appearance of this paper [link not given here, GW], I would weaken some of the statements in this article if I were writing it again...Thus, overall (and at the moment), I'm not sure PBR impacts here in favour of non-locality?

Good point. I'm still a bit confused if PBR implies non-locality directly with respect to any "realistic" model but I've asked Matt and if he responds, I will post his response on here.
 
  • #126
bohm2 said:
Good point. I'm still a bit confused if PBR implies non-locality directly with respect to any "realistic" model but I've asked Matt and if he responds, I will post his response on here.

If you accept the basic premise of the PBR paper, they demonstrate there are NO realistic solutions (because the wave function itself is as "real" as it gets). Therefore, non-locality is NOT a deduction of PBR.

Keep in mind that a realistic model posits that there ARE definite values for observables at all times. If the wave function itself is real, by definition there are NOT definite values for observables until an observation context appears (at which point there is collapse to an eigenstate).
 
  • #127
I'm still confused about the implications of PBR. But here's Matt's answer in his blog:
Do you still believe that PBR directly implies non-locality, without Bell's, as I think you argued in a section of the Quantum Times article?

“It (PBR) provides a simple proof of many other known theorems, and it supercharges the EPR argument, converting it into a rigorous proof of nonlocality that has the same status as Bell’s theorem. ”
Matt's response:
mleifer | 25 April, 2012 at 5:20 am |
Yes, but this requires the factorization assumption used by PBR. At the time of writing, I was hopeful that we could prove the PBR theorem without factorization, but now I know that this is not possible. Therefore, the standard Bell-inequality arguments are still preferable as they involve one less assumption. BTW, this is not something I “believe”, but rather something that Spekkens and Harrigan have proved.
 
  • #128
Gordon Watson said:
..

7. To put it another way, defending Einstein-locality: Where is the mystery in Aspect (2004) when I can deliver exactly half its correlation [over every (a, b) combination] with a simple classical W-source: made in an hour, for a few dollars?

8. Is it not clear (to be expected, and without mystery) that Aspect's expensive singlet-source should deliver a higher correlation than my few dollars?

Leaving no doubts about the validity of Einstein-locality?


GW
..

Exactly. It is easy to get half way. Fix two directions a, a' on one side and two directions b, b' on the other. Suppose that the choices to measure in direction a or a', and in direction b or b', are taken independently and completely randomly on both sides of the experiment, many times in succession.

Your "half-way" satisfies CHSH. Any set of four correlations rho(a,b), rho(a,b'), rho(a',b), rho(a'b') satisfying all the CHSH inequalities is easy to generate in a completely classical way.

But any set such that one of the CHSH inequalities is violated cannot be generated in a classical way.

Proof. Suppose we accept Einstein reality. In each of N runs, there exist alongside one another "out there in reality", potential outcomes A, A', B and B', each with value +/-1. In each run, the experimenter has essentially tossed two coins. Depending on the first coin he gets to see A or A'. Depending on the second coin he gets to see B or B'. And of course he knows which one he is seeing.

Arrange the 4N numbers +/-1 in an Nx4 table. Note that per row, AB+AB'+A'B-A'B' = A(B+B') + A'(B-B'), and since B and B' are either equal or different, either B-B' = 0 and B+B' = +/-2, or B-B' = +/-2 and B+B' = 0. Since A and A' equal +/-1, the row-wise value of AB+AB'+A'B-A'B' is therefore +/-2.

Therefore the average over all the rows of AB+AB'+A'B-A'B' lies between -2 and +2 (inclusive). But the average of a sum is the sum of the averages. So

Ave(AB)+Ave(AB')+Ave(A'B)-Ave(A'B') lies between -2 and +2.

Now the experimenter does not get to see these averages, since per row of the table he only gets to see A or A', and B or B'. His experimental correlations are computed from four random sub-samples of this table. With probability 1/4, on the n'th run, he measures in directions a and b, and only then gets to observe A and B. With probability 3/4 he gets to observe A and B', or A' and B, or A' and B'. The same holds for all the other rows, independently of one another. Unless he is very unlucky, the average of the values of A times B over the approximately N/4 measurements in which the directions chosen are a and b will be close to the average of A times B over all N rows, and likewise for the other three setting pairs; so the observed CHSH combination of the four measured correlations also stays (very nearly) within [-2, +2].
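The counting argument above is easy to check numerically. Here is a minimal sketch (an illustration of my own, not code from the thread): it builds an N x 4 table of pre-existing +/-1 outcomes, confirms that each row's value of AB+AB'+A'B-A'B' is +/-2, and then lets a simulated experimenter see only one product per run, chosen by two fair coins; the sampled CHSH sum stays close to the full-table value and hence within [-2, +2].

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# A hypothetical "Einstein-real" table: in every run all four potential
# outcomes A, A', B, B' exist, each +1 or -1 (any joint distribution will do).
A, A2, B, B2 = rng.choice([-1, 1], size=(4, N))

# Per row, AB + AB' + A'B - A'B' = A(B+B') + A'(B-B') = +/-2 ...
row_val = A * B + A * B2 + A2 * B - A2 * B2
assert set(np.unique(row_val)) <= {-2, 2}

# ... so the full-table average lies in [-2, +2].
S_full = (A * B).mean() + (A * B2).mean() + (A2 * B).mean() - (A2 * B2).mean()

# The experimenter sees only one of the four products per run, the choice
# being made by two independent fair coins (the setting choices).
coin_a = rng.integers(0, 2, N)   # 0 -> measure A, 1 -> measure A'
coin_b = rng.integers(0, 2, N)   # 0 -> measure B, 1 -> measure B'

def corr(mask, X, Y):
    return (X[mask] * Y[mask]).mean()

S_sampled = (corr((coin_a == 0) & (coin_b == 0), A, B)
             + corr((coin_a == 0) & (coin_b == 1), A, B2)
             + corr((coin_a == 1) & (coin_b == 0), A2, B)
             - corr((coin_a == 1) & (coin_b == 1), A2, B2))

print(f"full-table S = {S_full:+.3f}, sampled S = {S_sampled:+.3f}")  # both within [-2, +2]
```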

What about computer simulations like those of de Raedt and Michielsen? They exploit an easy trick called the detection loophole. It has been known since a well-known paper by P. Pearle (1970). Bell later explained in more detail how to set up the experiment in such a way that this loophole cannot be invoked to explain what has happened (see especially the "Bertlmann's socks" paper).

Let me explain the detection loophole through an extreme example. Imagine two photons about to leave a source and fly to two detectors, where one will be measured in direction a or a' (but it doesn't know which it will be), and the other will be measured in direction b or b' (and similarly, doesn't know which it will be). Suppose these two photons want to contribute to generating correlations rho(a,b)=rho(a,b')=rho(a',b)=+1, rho(a',b')=-1. Going to be difficult, right?

But if they also have the option of "not being detected" they can do it easily.

Imagine the two photons start at the source by tossing three fair coins. One of them is their own preference to be measured in direction a or a', the second encodes their own preference for b or b', and the third encodes the outcome which they would generate, if they are both measured as they both want to be measured. Equal to one another and equally likely +1 or -1 for three pairs of settings, opposite to one another and each equally likely to be +1 or -1 for the fourth pair.

Now they fly to their respective measurement stations and see if they are about to be measured, on this particular run, in the way they want. Each one separately of course. If Alice's photon wants to be measured in direction a', but Alice's detector has been set to direction a, it chooses to vanish. Similarly on Bob's side. They only *both* get measured, when they are *both* measured how they *both* want to be measured. And in that case they have arranged using their third shared coin toss, whether to be both +1, or both -1, in the three cases ab, a'b, ab'; but whether to deliver the outcomes +1,-1 or -1,+1 in the fourth case a'b'.

Half the photons on each side of the experiment will fail to be detected. Only a quarter of the photon pairs will survive with both getting detected and measured. These ones will exhibit perfect correlation for three of the four pairs of measurement settings, and perfect anti-correlation for the fourth.
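A small simulation makes the example concrete. This is a sketch of the mechanism just described (the three source coins, the "vanish on mismatch" rule, and the anti-correlation reserved for the a'b' pair follow the description above; everything else, including the numbers, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
settings_A, settings_B = ["a", "a'"], ["b", "b'"]
counts = {}   # (Alice setting, Bob setting) -> (sum of products, number of detected pairs)

for _ in range(N):
    # Three coins tossed at the source (local hidden variables):
    pref_A = rng.choice(settings_A)   # the setting Alice's photon "wants"
    pref_B = rng.choice(settings_B)   # the setting Bob's photon "wants"
    s = rng.choice([-1, 1])           # the shared outcome coin

    # Settings actually chosen, independently, at the two stations:
    set_A, set_B = rng.choice(settings_A), rng.choice(settings_B)

    # Each photon "chooses to vanish" unless it is measured the way it wants,
    # so the pair is only counted when both preferences are met:
    if set_A != pref_A or set_B != pref_B:
        continue

    A = s
    B = -s if (set_A, set_B) == ("a'", "b'") else s   # anti-correlate only for a', b'
    tot, n = counts.get((set_A, set_B), (0, 0))
    counts[(set_A, set_B)] = (tot + A * B, n + 1)

rho = {pair: tot / n for pair, (tot, n) in counts.items()}
S = rho[("a", "b")] + rho[("a", "b'")] + rho[("a'", "b")] - rho[("a'", "b'")]
print(rho)                                             # +1, +1, +1 and -1, exactly
print("pairs with both photons detected:", sum(n for _, n in counts.values()), "of", N)
print("CHSH S on the detected sample:", S)             # 4, far above 2 and above 2 sqrt 2
```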

There are mathematical theorems showing that in a CHSH experiment one can achieve QM's "2 sqrt 2", quite some way above the CHSH local realism bound of 2, by such trickery, as long as at least 5% (or something like that) of the photons on each side of the experiment can go undetected. The Weihs et al. experiment actually only detected about 5% of the photons on each side of the experiment, i.e. only 1 in 400 photon pairs got both measured. One has to assume that those 1 in 400 are representative of the whole collection in order for the Weihs experiment to prove something conclusive. If just a small proportion of the other pairs were undetected for reasons correlated with the hidden variables generating the A, A', B and B' values, they could easily reproduce 2 sqrt 2 in a completely locally realistic way.

Well, no one believes that nature is so devious, so most people are happy to take the Weihs experiment as proof that Einstein realism is not valid.
 
  • #129
gill1109 said:
Exactly. It is easy to get half way. [...] Your "half-way" satisfies CHSH. Any set of four correlations rho(a,b), rho(a,b'), rho(a',b), rho(a',b') satisfying all the CHSH inequalities is easy to generate in a completely classical way.

But any set such that one of the CHSH inequalities is violated cannot be generated in a classical way. [...]
..

gill1109,

Many thanks for your response. But it seems to me that your first word -- "Exactly" -- does not relate to the balance of your writing. Exactly what, please?

You write above: "But any set such that one of the CHSH inequalities is violated cannot be generated in a classical way."

So, please, to be clear, you do realize THAT:

1. I derive the results for both W (the classical OP experiment) and Y (the well-known Aspect (2004) experiment) in a classical way?

2. Analytically, via my way: Going the whole-way (100%, say, with Y) is as easy as going half-way (50%, with W)?

3. My analysis is based on idealised experiments, just like Bell (1964), so that NO "detection loop-hole", nor any other loop-hole, is invoked here?

Thanks,

GW
..
 
  • #130
gill1109 said:
[..] What about computer simulations like those of de Raedt and Michielsen? They exploit an easy trick called the detection loophole. [..]
At first sight your explanation is very good until that sentence (and I'll post a comment to Gordon about the foregoing).
If someone does a trick, it's reasonable to say that the one who designs the trick and performs it is exploiting it to create an illusion. I find it a bit strange to say that an onlooker who has seen the illusion, and figures out how the illusionist has perhaps done the trick, is "exploiting" the trick; in any case, the performance isn't done by the onlooker.
[..] If Alice's photon wants to be measured in direction a', but Alice's detector has been set to direction a, it chooses to vanish. [...]
I may be mistaken of course but I'm nearly certain that their explanation is quite different, more related to where the experimenter chooses to look.
no one believes that nature is so devious [..] Einstein realism is not valid.
Experiments aren't designed by nature but by experimenters - and experimenters are not devious either, but by necessity they observe those things that they want to see, according to their expectations, and usually they suggest an interpretation that matches their prior thinking.
 
  • #131
Gordon Watson said:
..[..]
2. Analytically, via my way: Going the whole-way (100%, say, with Y) is as easy as going half-way (50%, with W) [..]
I'm sorry that I just couldn't spend the time to really follow it, but apparently you did not show that your way does produce the required result, and it isn't taken for granted either.

Also, if I see it correctly, the experimental conditions of Gill's reply closely match those of the presentation of Herbert. As you had in mind to challenge Herbert's proof, for me (and perhaps others) it would be very useful if you would explicitly present your mathematical argument in that context, which is simpler than Bell's and thus easier to follow (but in case you already did: please give the post number!).
 
  • #132
Quick response to Gordon. You said you could get half-way to the desired correlations, easily. I said "exactly", because half-way does not violate CHSH. Sorry, I have not found out exactly what you mean by Y and W, and I don't know what you mean by the classical OP experiment. My discussion was aimed at the Aspect experiment, done more recently, and better still, by Weihs.

Quick response to harrylin: I know that de Raedt et al does not think they are playing a trick on us. I call it a trick. I could easily write programs which do the same as theirs. Their simulation succeeds in reproducing the statistics of these famous experiments, but the point is that that is not difficult at all, because of the detection loophole. In effect they are using the detection loophole. If they would change the parameters of their simulation such that hardly any photons got lost any more, they would no longer be able to violate CHSH.
 
  • #133
Gordon Watson said:
1. I derive the results for both W (the classical OP experiment) and Y (the well-known Aspect (2004) experiment) in a classical way?
No, you don't. You did not provide a classical derivation for Y. Instead you just "borrowed" the result from the Aspect paper. Aspect makes it very clear that eq. (6) was derived using QM rather than a classical model.

Gordon Watson said:
2. Analytically, via my way: Going the whole-way (100%, say, with Y) is as easy as going half-way (50%, with W)?
No, it isn't. There is a big difference: one satisfies Bell's inequality, the other violates it.
 
  • #134
gill1109 said:
[..]I know that de Raedt et al does not think they are playing a trick on us. I call it a trick. I could easily write programs which do the same as theirs. Their simulation succeeds in reproducing the statistics of these famous experiments, but the point is that that is not difficult at all, because of the detection loophole. In effect they are using the detection loophole. If they would change the parameters of their simulation such that hardly any photons got lost any more, they would no longer be able to violate CHSH.
Hi gill, as I said, I doubt that that is correct as they do not explain the trick as a conspiracy of disappearing photons*. And sorry if I wasn't clear: I similarly don't think that Weihs tried to play a trick on us. However, the one who designed, performed and presented that experiment was Weihs and not De Raedt. To me it's a distortion to describe an observer who thinks that it's a trick and who explains how exactly the trick may have been done, as a person who "exploits a trick". As I see it, instead the one giving the performance with long sleeves (even if he didn't notice it himself) is the one exploiting the trick - not the one who sees those long sleeves and sketches how the trick may be done. Next the performer will say "No problem, I can do the trick without such long sleeves"; the question is if the performance can be done without any tricks. Let's hope so!

*ADDENDUM. I now checked and found that you indeed completely misunderstood their explanation, as they specify: "we consider ideal experiments only, meaning that we assume that detectors operate with 100% efficiency, clocks remain synchronized forever, the “fair sampling” assumption is satisfied and so on." Their explanation is discussed in the following thread: https://www.physicsforums.com/showthread.php?t=597171
 
  • #135
gill1109 said:
I know that de Raedt et al does not think they are playing a trick on us. I call it a trick. I could easily write programs which do the same as theirs. Their simulation succeeds in reproducing the statistics of these famous experiments, but the point is that that is not difficult at all, because of the detection loophole. In effect they are using the detection loophole. If they would change the parameters of their simulation such that hardly any photons got lost any more, they would no longer be able to violate CHSH.

I actually used their Fortran code to program (in Visual Basic) my own Excel simulation to mimic theirs. Sure enough, it worked exactly as they said. (I will be happy to share that if anyone wants it.) I have not yet dissected how their code pulls off this feat. I personally consider it a pretty clever little algorithm (since I couldn't figure it out quickly - honestly, I didn't spend much time on it, though I would have liked to). But of course ultimately you are correct; you always come back to what I call the "Unfair Sampling Assumption".

The Unfair Sampling Assumption is that there would need to exist a suppression mechanism (let's call this SupMech) whereby, as more and more pairs are sampled as a percentage (let's call this detection efficiency, or DE), progressively more pairs are suppressed unfairly. Because as DE has risen in actual experiments, the deviation from the local realistic bound (2) has increased! Therefore, to have the true full universe respect Bell, and therefore contradict QM, SupMech must work harder as DE rises. What a strange concept, that there is a mechanism tied to detection efficiency! You would absolutely expect - if you took the de Raedt et al argument seriously - that as DE rises, the results would approach 2, instead of moving the other way as has actually occurred. So the only way around it is the Unfair Sampling Assumption!

Of course, it is well known that once DE exceeds a certain point (perhaps 71% or something, there are a variety of papers on this), there is no possible SupMech that could deliver the hypothesized local realistic outcome anyway. Further, no one has the slightest clue how SupMech could work, considering that it is coupled to DE - this coupling doesn't exist in the de Raedt et al simulation, which clearly approaches product-state statistics as more and more of the universe is presented, and that is contradicted by experiment. More importantly, how could QM be so completely wrong? And finally, it turns out that there are experiments in which DE=100% (although the locality loophole is open), and those don't support the local realistic hypothesis either.
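For readers unfamiliar with how a coincidence-time-window selection can bias a detected sample at all, here is a generic time-tag sketch in the spirit of Pearle-type local models. The Malus-law outcome rule and the delay law below are illustrative assumptions of mine, not de Raedt et al's actual algorithm, and the numbers it prints are not meant to reproduce any figures quoted in this thread. The point is only the structure: each side produces a local outcome and a local time tag, and only pairs whose tags differ by less than a window W are kept, so the kept sample can differ from the full universe.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
T0, d, W = 1.0, 4, 0.05    # max delay, delay exponent, coincidence window (all illustrative)

def measure(setting, lam):
    """One side of the experiment: a local +/-1 outcome plus a local time tag,
    both depending only on the local setting and the shared hidden angle lam."""
    delta = lam - setting
    outcome = np.where(rng.random(lam.size) < np.cos(delta) ** 2, 1, -1)  # Malus-law rule
    # Illustrative delay law (an assumption, not de Raedt et al's formula):
    tag = T0 * rng.random(lam.size) * np.abs(np.sin(2 * delta)) ** d
    return outcome, tag

def correlations(a, b, lam):
    A, tA = measure(a, lam)
    B, tB = measure(b, lam)
    keep = np.abs(tA - tB) < W          # coincidence-window post-selection
    return (A * B).mean(), (A[keep] * B[keep]).mean(), keep.mean()

lam = rng.uniform(0, np.pi, N)          # shared hidden polarisation angle, one per pair
for a, b in [(0.0, np.pi / 8), (0.0, 3 * np.pi / 8)]:
    full, kept, frac = correlations(a, b, lam)
    print(f"a-b = {np.degrees(b - a):5.1f} deg: full-universe E = {full:+.3f}, "
          f"windowed E = {kept:+.3f} (kept {frac:.0%} of pairs)")
```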
 
  • #136
harrylin said:
...they specify: "we consider ideal experiments only, meaning that we assume that detectors operate with 100% efficiency, ...

Well yes and no. You can call it anything you want. They disappear (because they are not matched), and it is a bit of a misnomer to say that detection efficiency is not the issue. The entire point is that the de Raedt detected sample looks different from the full universe, and how that happens is irrelevant for a simulation.

Here is a specific example. At 30 degrees, 100,000 iterations (pairs), window size k=30, I get the following results in one typical run using Type II PDC (opposite polarizations):

Entangled state rule (which is the QM expectation) = .2500
Local realistic boundary = .3333
Product state rule = .3750

de Raedt detected sample = .3261 (violates Bell inequality but does not match QM)
de Raedt full universe = .3751 (closely matches product state)

So their simulation respects Bell, even though the full universe does not. Their sample does not match experiment however. Additionally, since it makes predictions different than QM, it is subject to verification/rejection on numerous other levels.
 
  • #137
DrChinese said:
I actually used their Fortran code to program (in Visual Basic) my own Excel simulation to mimic theirs. Sure enough, it worked exactly as they said. (I will be happy to share that if anyone wants it.) [..]
Yes please! It will be helpful for the discussion of different models. :smile:
DrChinese said:
Well yes and no. You can call it anything you want. They disappear, and it is a bit of a misnomer to say that detection efficiency is not the issue. Where are they if detector efficiency is 100%? The entire point is that the de Raedt detected sample looks different than the full universe. [..]
If the detector efficiency in your version is not 100%, obviously there must be an error somewhere - either in your version or already in theirs. That's certainly a point to discuss in the thread on that topic (I will now stop hijacking Gordon's thread).
 
  • #138
harrylin said:
Yes please! It will be helpful for the discussion of different models. :smile:

If the detector efficiency in your version is not 100%, obviously there must be an error somewhere...

See:

http://drchinese.com/David/DeRaedtComputerSimulation.EPRBwithPhotons.C.xls

Again it is a completely artificial mechanism, so what you call it is completely irrelevant. When talking about a suppression mechanism, I may mention Detector Efficiency while they call it Coincidence Time Window. But nothing changes; it is no more one effect than the other. As you look at more of the universe, you get farther and farther away from the QM predictions, and that never really happens in actual experiments. So the Suppression Mechanism must grow if you DO want it to match experiment! And THAT is the Unfair Sampling Assumption.
 
  • #140
The important measure of efficiency is determined empirically. And it is not just about what goes on at the detectors. It is: what proportion of the observed events on Alice's side of the experiment are linked to an observed event on Bob's side. And the same thing, the other way round. Both of these proportions have to be at least something like 95% before a violation of CHSH at 2 sqrt 2 actually proves anything. In the Weihs experiment they are both about 5%.

Particles don't just get lost at the detectors. They also get lost "in transmission". Some even get reabsorbed in the same crystal where they were "born" by being excited with a laser. I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency, and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.
 
  • #141
gill1109 said:
I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.
Let us use your coins analogy. For this purpose we say heads = +1, tails = -1.

THEORETICAL:
Suppose we have 3 coins labelled "a", "b", "c" and we toss all three a very large number of times. The inequality |ab + ac| - bc <= 1 holds for every individual toss, and therefore the inequality for averages, |<ab> + <ac>| - <bc> <= 1, can never be violated either.

Proof:
a,b,c = (+1,+1,+1): |(+1) + (+1)| - (+1) <= 1, obeyed=True
a,b,c = (+1,+1,-1): |(+1) + (-1)| - (-1) <= 1, obeyed=True
a,b,c = (+1,-1,+1): |(-1) + (+1)| - (-1) <= 1, obeyed=True
a,b,c = (+1,-1,-1): |(-1) + (-1)| - (+1) <= 1, obeyed=True
a,b,c = (-1,+1,+1): |(-1) + (-1)| - (+1) <= 1, obeyed=True
a,b,c = (-1,+1,-1): |(-1) + (+1)| - (-1) <= 1, obeyed=True
a,b,c = (-1,-1,+1): |(+1) + (-1)| - (-1) <= 1, obeyed=True
a,b,c = (-1,-1,-1): |(+1) + (+1)| - (+1) <= 1, obeyed=True


EXPERIMENTAL:
We have 3 coins labelled "a", "b", "c", one of which is inside a special box. Only two of them can be outside the box at any given time, because you need to insert a coin in order to release another. So we decide to perform the experiment by tossing pairs of coins at a time, each pair a very large number of times. In the first run, we toss "a" and "b" a large number of times; in the second we toss "a" and "c" a large number of times; and in the third we toss "b" and "c". Even though the data appears random, we then calculate <ab>, <ac> and <bc>, substitute into our inequality, and find that it is violated! We are baffled: does this mean there is non-local causality involved? For example, we find that <ab> = -1, <ac> = -1 and <bc> = -1. Therefore |(-1) + (-1)| - (-1) = 3, which is not <= 1, so the inequality is violated. How can this be possible? Does this mean there is spooky action at a distance happening?

No. Consider the following: Each coin has a hidden mechanism inside [which the experimenters do not know of] which exhibits an oscillatory behaviour in time determined at the moment it leaves the box. Let us presume that the hidden behaviour of each coin is a function of some absolute time, "t" and follows the function sin(t).

The above scenario [<ab> = -1, <ac> = -1 and <bc> = -1 ] can easily be realized if:
- if sin(t) > 0: coin "a" always produces heads (+1), coin "c" always produces tails (-1) while coin "b" produces tails (-1) if it was released from the box using coin "a", but heads (+1) if it was released from the box using coin "c".
- if sin(t) <= 0: all the signs are reversed.


Therefore it can be clearly seen here that violation of the inequality is possible in a situation which is CLEARLY locally causal. We have defined in advance the rules by which the system operates and those rules do not violate any local causality, yet the inequality is violated. No mention of any detector efficiency or loopholes of any kind.
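For anyone who wants to check the arithmetic, here is a minimal sketch of the scenario as described: part 1 verifies that the inequality holds for every pre-assigned triple, and part 2 runs the three pairwise experiments with the sin(t) mechanism and the stated release rules, reproducing <ab> = <ac> = <bc> = -1 and hence a violation of the pairwise-average inequality. (Which coin releases "b" in each run is my reading of the box mechanics, since the post leaves it implicit.)

```python
import numpy as np
from itertools import product

# Part 1: for any pre-assigned triple of +/-1 values the bound |ab + ac| - bc <= 1 holds.
for a, b, c in product([-1, 1], repeat=3):
    assert abs(a * b + a * c) - b * c <= 1

# Part 2: three separate pairwise runs with the hidden sin(t) mechanism.
rng = np.random.default_rng(2)
N = 10_000

def hidden_sign(t):
    # The shared oscillatory behaviour fixed at the moment the coins leave the box.
    return 1 if np.sin(t) > 0 else -1

ab = ac = bc = 0.0
for t in rng.uniform(0, 1000, N):   # run 1: toss "a" and "b"
    s = hidden_sign(t)
    a, b = s, -s                    # "b" was released from the box using coin "a"
    ab += a * b
for t in rng.uniform(0, 1000, N):   # run 2: toss "a" and "c"
    s = hidden_sign(t)
    a, c = s, -s
    ac += a * c
for t in rng.uniform(0, 1000, N):   # run 3: toss "b" and "c"
    s = hidden_sign(t)
    b, c = s, -s                    # "b" was released from the box using coin "c"
    bc += b * c
ab, ac, bc = ab / N, ac / N, bc / N

print(ab, ac, bc)                   # -1.0, -1.0, -1.0: every measured pair is anti-correlated
print(abs(ab + ac) - bc)            # 3.0 > 1: the pairwise averages violate the bound
```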
 
  • #142
gill1109 said:
The important measure of efficiency is determined empirically. And it is not just about what goes on at the detectors. It is: what proportion of the observed events in Alice's side of the experiment are linked to an observed event on Bob's side. And the same thing, the other way round. Both of these two proportions have to be at least something like 95%, before a violation CHSH at 2 sqrt 2 actually proves anything. In Weihs experiment they are both about 5%.

Particles don't just get lost at the detectors. They also get lost "in transmission". Some even get reabsorbed in the same crystal where they were "born" by being excited with a lazer.


I'm afraid de Raedt and his colleagues are rather confused and don't understand these issues. So many things they say are rather misleading. The experiment as a whole has an efficiency and it is measured by the proportion of unpaired events on both sides of the experiment. Big proportion unpaired, low efficiency.

Good points. For those who are wondering, please keep the following in mind:

First: Using the EPR definition of an element of reality, you must start with a stream of photon pairs that yield perfect correlations. I.e. You cannot test a source stream which is NOT entangled! The experimenter searches for this, and provides one with as much fidelity as possible. If it is 5% due to any number of factors or not, you must start there for executing your Bell test.

Next: you ask if there is something about your definition of entangled pairs that is somehow systemically biased. That is always possible with any experiment, and open to critique. And in almost any experiment, you ultimately conclude with the assumption that your sample is representative of the universe as a whole.

In a computer simulation such as that of de Raedt et al, you don't really have a physical model. It is just an ad hoc formula constructed with a specific purpose. In this case it is wrapped up as if there is a time coincidence window. But I could remap their formula to be day of week or changes in the stock market or whatever and it would work as well.

BTW, their model does not yield perfect correlations as the window size increases. This violates one of our initial requirements, which is to start with a source which meets the EPR requirement of providing an element of reality. Only when the stream is unambiguously entangled do we see perfect correlations. So we lose the validity of considering the unmatched events (of this stream of imperfect pairs) as being part of a full universe which does not violate a Bell inequality. You cannot mix in unentangled pairs!

Lastly: If you DID take the simulation seriously, you would then need to map it to a physical model that would then be subject to physical tests. No one really takes this that seriously because of the other issues present. Analysis of the Weihs coincidence time data does not match the de Raedt et al model in any way. The only connection is that the term "coincidence time window" is used.
 
  • #143
DrChinese said:
First: Using the EPR definition of an element of reality, you must start with a stream of photon pairs that yield perfect correlations. I.e. You cannot test a source stream which is NOT entangled!
Obviously it can be verified that for the coin counter-example in post #141 above, which violates the inequality, there is perfect correlation between the pairs. There is no coincidence requirement, no time window, and no detector efficiency involved. Yet violation occurs.
 
  • #144
billschnieder said:
The above scenario [<ab> = -1, <ac> = -1 and <bc> = -1 ] can easily be realized if:
- if sin(t) > 0: coin "a" always produces heads (+1), coin "c" always produces tails (-1) while coin "b" produces tails (-1) if it was released from the box using coin "a", but heads (+1) if it was released from the box using coin "c".
- if sin(t) <= 0: all the signs are reversed.

Sad Bill. Really sad. This is not realistic, it is contextual.
 
  • #145
...And obviously does NOT yield perfect correlations since the coin b outcome is dependent on what coin the observer uses to get it.
 
  • #146
DrChinese said:
...And obviously does NOT yield perfect correlations since the coin b outcome is dependent on what coin the observer uses to get it.

We are talking about perfect correlations of measured pairs! QM does not say one photon of one pair must be perfectly correlated with a different photon of a different pair, does it? :rolleyes:

<ab> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<ac> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<bc> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
 
  • #147
DrChinese said:
Sad Bill. Really sad. This is not realistic, it is contextual.
Duh, realistic does NOT conflict with contextual. I've explained this to you 1 million times and you still make silly mistakes like this.

You are now claiming that our coins-and-box mechanism is not realistic. Anyone else reading this should be able to see how absurd such a claim is. The coins each have definite properties, and behave according to definite rules all defined well ahead of time. What is not realistic about that? Yet we still get a violation.
 
  • #148
billschnieder said:
We are talking about perfect correlations of measured pairs! QM does not say one photon of one pair must be perfectly correlated with a different photon of a differnt pair does it? :rolleyes:

<ab> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<ac> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair
<bc> = -1 means perfect correlation, the two values are ALWAYS opposite each other for any pair

billschnieder said:
Duh, realistic does NOT conflict with contextual. I've explained this to you 1 million times and you still make silly mistakes like this.

You are now claiming that our coins and box mechanism is not realistic. Anyone else reading this shoud be able to see how absurd such a claim is. The coins each have definite properties, and behave according to definite rules all defined well ahead of time. What is not realistic about that! Yet we still get a violation.

Take the DrC challenge then and show me the realistic data set. Show me +/- values of a, b, c for each of 8 to 20 pairs. You use your own algorithm to generate it, so that there is no chance of me misinterpreting things.

The only model that has ever passed is the de Raedt et al computer simulation, and it has its own set of issues (i.e. does not match experiment).
 
  • #149
DrChinese said:
Take the DrC challenge then and show me the realistic data set. Show me +/- values of a, b, c for each of 8 to 20 pairs. You use your own algorithm to generate so that way, there is no chance of me misinterpreting things.

The only model that has ever passed is the de Raedt et al computer simulation, and it has its own set of issues (i.e. does not match experiment).
We already went through this and you gave up (https://www.physicsforums.com/showthread.php?t=499002&page=4). It is a nonsensical challenge. I turn it back on you: give me a non-realistic dataset, or a non-local dataset, which violates the inequality.

BTW: If you want to repeat this again, start a new thread for it as this is getting off-topic.
 
  • #150
billschnieder said:
We already went though this and you gave up (https://www.physicsforums.com/showthread.php?t=499002&page=4). it is a nonsensical challenge. I turn it back on you to give me a non-realistic dataset or non-local dataset which violates the inequality.

BTW: If you want to repeat this again, start a new thread for it as this is getting off-topic.

All I can say is take the challenge or be quiet. :-p

By definition, a non-realistic dataset does NOT have 3 simultaneous values.

Here is as good a place to discuss as any; please see (er, I mean read and understand) the title.
 
