Boole vs. Bell - the latest paper of De Raedt et al

  • Thread starter: harrylin
  • Tags: Bell, Paper
Summary:
De Raedt et al.'s latest paper expands on Bell's Theorem by introducing extended Boole-Bell inequalities that apply to both classical and quantum models. The authors argue that apparent contradictions in quantum theory stem from incomplete premises in the derivation of Bell's inequalities, suggesting that violations cannot be attributed to influences at a distance. They illustrate their points using examples, including a reinterpretation of Boole's patient-illness scenario, to show that similar inequalities can arise without invoking non-locality. Critics challenge the validity of these claims, emphasizing the need for realistic datasets and questioning the assumptions behind Boole's examples. The discussion highlights ongoing debates about realism and the interpretation of quantum mechanics in light of these new findings.
  • #91
harrylin said:
Anyway, perhaps because you were distracted by that minor issue, you did not notice the main issue which I brought up, an issue that probably relates to what Bill intended to show: how could you use a data set of 10 rows for tests with 3 liquids? I can only use the data in multiples of 3, in the way I showed. One row corresponds to the hidden possible experience of one tablet as well as of its double (just like a pair of Bertlmann's socks). Thus only two experiences (with two liquids) are possible per row of data.

No problem, we can do that too. We will just randomly draw 2 from each triple. The issue there is that you need a sufficient sample size (a small one can potentially give results that do violate the inequality).

a, b, c
-----------
take ab from these
+1, -1, +1
-1, +1, -1
-1, +1, -1
-1, +1, -1
+1, -1, -1
-1, +1, +1
-1, +1, -1
+1, -1, +1
-1, -1, -1
-1, -1, -1
2 matches, 8 mismatches, your value is (2-8)/10 or -.6

take ac from these
-1, -1, +1
-1, +1, +1
+1, -1, -1
+1, -1, -1
-1, -1, +1
-1, -1, -1
+1, -1, +1
-1, +1, -1
-1, +1, -1
+1, +1, +1
5 matches, 5 mismatches, your value is (5-5)/10 or 0

take bc from these
+1, -1, -1
-1, -1, +1
+1, -1, +1
+1, -1, +1
+1, +1, +1
-1, -1, -1
+1, +1, +1
-1, -1, -1
-1, +1, -1
+1, -1, +1
5 matches, 5 mismatches, your value is (5-5)/10 or 0

So by my formula:

Matches(ab) + Mismatches(ac) - Matches(bc) >= 0
2+5-5 >= 0
Respected.

By Bill's formula, using your +/- notation (I don't recall whether that is correct or not):

1 + <bc> >= |<ab> - <ac>|

1+ 0 >= |-.6 - 0|
Respected.
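
Here is a minimal Python sketch of the counting above (the three blocks are the 10-row tables just shown; only the named pair of columns is treated as measured, and the function and variable names are illustrative, not part of the original exchange):

```python
# The three 10-row blocks from the tables above; in each block only the
# named pair of columns is "measured".
block_ab = [(+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (-1,+1,-1), (+1,-1,-1),
            (-1,+1,+1), (-1,+1,-1), (+1,-1,+1), (-1,-1,-1), (-1,-1,-1)]
block_ac = [(-1,-1,+1), (-1,+1,+1), (+1,-1,-1), (+1,-1,-1), (-1,-1,+1),
            (-1,-1,-1), (+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (+1,+1,+1)]
block_bc = [(+1,-1,-1), (-1,-1,+1), (+1,-1,+1), (+1,-1,+1), (+1,+1,+1),
            (-1,-1,-1), (+1,+1,+1), (-1,-1,-1), (-1,+1,-1), (+1,-1,+1)]

def matches(rows, i, j):
    return sum(1 for r in rows if r[i] == r[j])

def corr(rows, i, j):
    # (matches - mismatches) / N, i.e. the average paired product
    return sum(r[i] * r[j] for r in rows) / len(rows)

m_ab = matches(block_ab, 0, 1)   # 2
m_ac = matches(block_ac, 0, 2)   # 5
m_bc = matches(block_bc, 1, 2)   # 5
ab, ac, bc = corr(block_ab, 0, 1), corr(block_ac, 0, 2), corr(block_bc, 1, 2)

print(ab, ac, bc)                        # -0.6, 0.0, 0.0
print(m_ab + (10 - m_ac) - m_bc >= 0)    # Matches(ab) + Mismatches(ac) - Matches(bc) >= 0
print(1 + bc >= abs(ab - ac))            # 1 + <bc> >= |<ab> - <ac>|
```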

This is a pointless exercise, as you will eventually realize, because it gives a result in accordance with realism precisely because it is realistic. Only if a sufficiently small or intentionally biased sample is presented will it be violated. No matter how you try, you won't make it work unless there is a conspiracy between the dataset values and the selection of when you get ab, ac or bc.

But none of this is the basis for my argument anyway. My argument goes in a different direction.
 
Last edited:
  • #92
DrChinese said:
2. I used the entire universe of the following which you provided, it is the first 10 rows.

+1, -1, +1
-1, +1, -1
-1, +1, -1
-1, +1, -1
+1, -1, -1
-1, +1, +1
-1, +1, -1
+1, -1, +1
-1, -1, -1
-1, -1, -1
This is crap. Full universe by what definition? What made you think the dataset I gave you is not the full universe? This is just further obfuscation, bobbing and weaving by you! You have not justified, nor can you justify, why you picked just those 10 values out of the 30 I gave you, other than as an attempt to rig the results. But this is not going to work, and the more you pull such silly stunts the more you end up undoing yourself.

ab=.2 (2/10)
bc=.4 (4/10)
ac=.8 (8/10)
As harrylin has pointed out already, your convention is mixed up. Further obfuscation, bobbing and weaving again, instead of just sticking to the program we already agreed to? Come on, DrC, let us be intellectually honest adults here!

I can see that, conventions aside, you are using every triple to calculate <bc>, every triple to calculate <ac>, and doing the exact same thing for <ab>. It is clear by now to anyone following this thread that this cannot be done in a feasible experiment. Remember what I said?

Depending on how you calculate <ab>, <bc> and <ac>, the inequality can be violated or satisfied. If you calculate them in a way consistent with Bell test experiments, the inequality is violated. If you calculate them the way Bell intended, the inequality is satisfied, but the corresponding experiment is impossible, as it would require measuring tablets more than once.

3. Yes, the result is:

1+.4>=| .2 - .8 |
1.4 >= .6
Using the standard convention which Bell used, and which we agreed to, where P(a,b) stands for the expectation value of the paired product of outcomes at Alice and Bob when liquids (a,b) were tested respectively, I get the following values for the terms:

1) As Bell intended -- i.e. using every tablet in the dataset to calculate every term
ab = -0.266666666667
bc = -0.266666666667
ac = 0.2

Plug them into 1 + <bc> >= |<ab> - <ac>|
and get 1 - 0.267 >= | -0.267 - 0.2 | (Inequality RESPECTED, but notice that the corresponding experiment, measuring two tablets in three liquids, is impossible to perform)

2) Similar to Bell test experiments, with 3 runs at fixed settings. The first run measures 10 pairs of tablets using liquids (a,b) to calculate <a1b1>, the second run uses liquids (b,c) for the next 10 pairs of tablets to calculate <b2c2>, and the third run uses liquids (a,c) for the remaining 10 pairs to calculate <a3c3>.

a1b1 =-0.4
b2c2 =-0.4
a3c3 = 0.4

Substitute in, 1 + <bc> >= |<ab> - <ac>|, to get 1 - 0.4 >= |-0.4-0.4|
0.6 >= 0.8 (Inequality VIOLATED. Note however that, as I have explained previously, the reason we get the violation is that the inequality expects expectation values of the type mentioned in (1). Note also that in the De Raedt paper which started this thread, they have shown that starting with expectation values of type (2), Bell's inequality cannot be derived; instead a different inequality is derived, one which is not violated by any experiments.)

3) Similar to Bell test experiments with random switching. Tablet pairs are randomly chosen from the 30 pairs (WITHOUT REPLACEMENT!). The first 10 random pairs of tablets are tested using liquids (a,b) to calculate <a1b1>, the next 10 using liquids (b,c) to calculate <b2c2>, and the remaining 10 using (a,c) to calculate <a3c3>.

a1b1 =-0.6
b2c2 =-0.2
a3c3 = 0.4

Substitute in, 1 + <bc> >= |<ab> - <ac>|, to get 1 - 0.2 >= |-0.6-0.4|
0.8 >= 1 (Inequality VIOLATED for the same reason as explained in (2))
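
For concreteness, the three treatments can be sketched in Python as follows (an illustrative sketch, not Bill's actual code; the dataset is the 30-row table given in full in post #115 below, treatment (3) depends on the shuffle, and the exact values quoted above are not asserted by the code):

```python
import random

# The 30 (a, b, c) triples (listed in full in post #115 below).
triples = [
    (+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (-1,+1,-1), (+1,-1,-1),
    (-1,+1,+1), (-1,+1,-1), (+1,-1,+1), (-1,-1,-1), (-1,-1,-1),
    (-1,-1,+1), (-1,+1,+1), (+1,-1,-1), (+1,-1,-1), (-1,-1,+1),
    (-1,-1,-1), (+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (+1,+1,+1),
    (+1,-1,-1), (-1,-1,+1), (+1,-1,+1), (+1,-1,+1), (+1,+1,+1),
    (-1,-1,-1), (+1,+1,+1), (-1,-1,-1), (-1,+1,-1), (+1,-1,+1),
]

def corr(rows, i, j):
    """Expectation estimate <xy>: the average paired product."""
    return sum(r[i] * r[j] for r in rows) / len(rows)

def respected(ab, bc, ac):
    return 1 + bc >= abs(ab - ac)

# Treatment (1): every row enters every term (the "impossible experiment":
# each tablet pair would have to be tested in all three liquids).
t1 = corr(triples, 0, 1), corr(triples, 1, 2), corr(triples, 0, 2)

# Treatment (2): three fixed-setting runs over disjoint blocks of 10 rows.
t2 = (corr(triples[0:10], 0, 1),
      corr(triples[10:20], 1, 2),
      corr(triples[20:30], 0, 2))

# Treatment (3): random switching; rows are drawn without replacement.
s = random.sample(triples, len(triples))
t3 = corr(s[0:10], 0, 1), corr(s[10:20], 1, 2), corr(s[20:30], 0, 2)

for label, (ab, bc, ac) in (("(1)", t1), ("(2)", t2), ("(3)", t3)):
    print(label, ab, bc, ac, "respected" if respected(ab, bc, ac) else "VIOLATED")
```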

You can try to manipulate things, but the issue is whether the realistic condition holds - that each of the 8 permutations (+++, ++-, etc.) has a likelihood between 0 and 1. And they will, in any realistic dataset, by definition.
Huh? Who is talking about permutations? More obfuscation, I see. Permutations don't come into it at all - that is, if you understand what is going on. Likelihood does not come into it at all. Each term in Bell's inequality is an expectation value for the paired product of the outcomes!

You take each of Alice's outcomes, multiply it with Bob's, add them all up for EVERY photon pair, and then take the average. That is what <ab> means. Bell defined this clearly at the very beginning of his original paper.
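
In symbols, for N pairs with Alice's and Bob's outcomes A_i, B_i in {+1, -1}, this is just

$$\langle ab \rangle = \frac{1}{N} \sum_{i=1}^{N} A_i B_i$$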
 
  • #93
DrChinese said:
No problem, we can do that too. We will just randomly draw 2 from each triple.
Now tell us clearly, did you randomly pick the two with or without replacement? In other words, did you use any of the triplets more than once? This is the crucial question you still refuse to answer.


The issue there is that you need a sufficient sample size (a small one can potentially give results that do violate the inequality).
Now you are contradicting yourself. Is it not you who said: Obviously, if it works for the single case you hand pick to violate it, it works also for all cases.


Besides, I already asked you to specify any number and I will generate the dataset containing that number of entries.

a, b, c
-----------
take ab from these
+1, -1, +1
-1, +1, -1
-1, +1, -1
-1, +1, -1
+1, -1, -1
-1, +1, +1
-1, +1, -1
+1, -1, +1
-1, -1, -1
-1, -1, -1
2 matches, 8 mismatches, your value is (2-8)/10 or -.6

take ac from these
-1, -1, +1
-1, +1, +1
+1, -1, -1
+1, -1, -1
-1, -1, +1
-1, -1, -1
+1, -1, +1
-1, +1, -1
-1, +1, -1
+1, +1, +1
5 matches, 5 mismatches, your value is (5-5)/10 or 0

take bc from these
+1, -1, -1
-1, -1, +1
+1, -1, +1
+1, -1, +1
+1, +1, +1
-1, -1, -1
+1, +1, +1
-1, -1, -1
-1, +1, -1
+1, -1, +1
5 matches, 5 mismatches, your value is (5-5)/10 or 0

So by my formula:

Matches(ab) + Mismatches(ac) - Matches(bc) >= 0
2+5-5 >= 0
Respected.

By Bill's formula, using your +/- notation (I don't recall whether that is correct or not):

1 + <bc> >= |<ab> - <ac>|

1+ 0 >= |-.6 - 0|
Respected.
Again, did you use any of the triplets in more than one group? If you did, then you haven't done anything different from what I explained in my previous post under Treatment (1) -- i.e. the one corresponding to the impossible experiment. Sure, Bell's inequality will be respected in that Treatment, but so what? It doesn't mean squat, as it is not equivalent to any performable experiment. My Treatments (2) and (3) are consistent with how Bell test experiments are performed, but they violate the inequality.

Which goes to show that violation of Bell's inequality by actual experiments has no ramifications for "realism", or "locality" or "CFD" or anything. It simply points to the fact that the terms from experiment and QM are not the type of terms implicitly required by the inequalities.

What was the point of your challenge again? I hope the next time you think of proclaiming that "realism" or "locality" or "CFD" is untenable, you will think about this exchange.
 
  • #94
I find this example very useful, thanks. :smile:
billschnieder said:
[..]
1) As Bell intended -- ie using every tablet in the dataset to calculate every term
[...]
I don't think so: Bell clearly explained in his "Bertlmann's socks"* talk that it is simply impossible to use the dataset of one pair of socks to calculate all three terms of the inequality. That is now also clearly explained in this thread.

As I mentioned in post #64, the issue here is the averaging of results from different pairs for Bell's inequality, which Bell argued to be valid while de Raedt argues this to be invalid.

Harald

*http://cdsweb.cern.ch/record/142461?ln=en
 
  • #95
billschnieder said:
Now tell us clearly, did you randomly pick the two with or without replacement? In other words, did you use any of the triplets more than once? This is the crucial question you still refuse to answer. ...

Again, did you use any of the triplets in more than one group? If you did, then you haven't done anything different from what I explained in my previous post under Treatment (1) -- i.e. the one corresponding to the impossible experiment. Sure, Bell's inequality will be respected in that Treatment, but so what? It doesn't mean squat, as it is not equivalent to any performable experiment. My Treatments (2) and (3) are consistent with how Bell test experiments are performed, but they violate the inequality.

Which goes to show that violation of Bell's inequality by actual experiments has no ramifications for "realism", or "locality" or "CFD" or anything...

I answered already that I used your 30 items. I answered that I used them without replacement. I answered already that this entire exercise is a waste of time.

You pretended to take my challenge but then stopped. I told you that you don't need the usual Bell inequality if you start with a realistic dataset. Well, we have both agreed now that, since no realistic datasets are possible, the usual definition (not yours, mind you) of realism fails.

Given that you a) are a crabby fellow; b) are given to an excess of drama over what should be some friendly sparring; and worst of all c) cannot even spell your name correctly (and I should know): I have decided not to continue this discussion. You will, I am quite certain, proclaim the brilliance of your "victory" for your fringe viewpoints.

But don't expect me not to reply to your usual wrong statements as always. After all, I remain DrC. :biggrin:
 
  • #96
harrylin said:
As I mentioned in post #64, the issue here is the averaging of results from different pairs for Bell's inequality, which Bell argued to be valid while de Raedt argues this to be invalid.

Harald

*http://cdsweb.cern.ch/record/142461?ln=en

As mentioned: if you are a realist, where is a realistic dataset? De Raedt provided one (actually a simulation formula, but I used it to create a successful dataset), and his team is the only one that has accepted this challenge. Keep in mind that it makes predictions different from QM's, which is the entire point of Bell, if anyone here is still listening.
 
  • #97
billschnieder;3346881 said:
[...]
3) Similar to Bell test experiments with random switching. Tablet pairs are randomly chosen from the 30 pairs (WITHOUT REPLACEMENT!). The first 10 random pairs of tablets are tested using liquids (a,b) to calculate <a1b1>, the next 10 using liquids (b,c) to calculate <b2c2>, and the remaining 10 using (a,c) to calculate <a3c3>.

a1b1 =-0.6
b2c2 =-0.2
a3c3 = 0.4

Substitute in, 1 + <bc> >= |<ab> - <ac>|, to get 1 - 0.2 >= |-0.6-0.4|
0.8 >= 1 (Inequality VIOLATED for the same reason as explained in (2)) [..]

Now this is an important point; and Bill, I must admit that I did not expect such an outcome when you started this simple example!

Even when I did the exercise for the first 9 data points, I thought that if I had not made an error, it might still be just coincidence. But if a large amount of random input data indeed breaks Bell's inequality for this type of test, then you have managed to provide just the kind of illustration that De Raedt's paper lacks, and for which I made a wishful request in post #46.

When I find the time I'll also try your example for more random inputs. :smile:
 
Last edited:
  • #98
harrylin said:
...When I find the time I'll also try your example for more random inputs. :smile:

Bill is WRONG, as usual. If the data is realistic, this will not happen with anything other than a rigged sample. But hey, good luck with that! Ideally, this exercise should serve to help you understand WHY Bill's assertions are incorrect. Just remember to keep in mind what the inequality is. I explain this at my site:

Bell's Theorem and Negative Probabilities

If you follow this derivation of Bell, you cannot go wrong.



----Table from the page showing realism requirement, 8 permutations----

Case  Outcomes    Predicted likelihood of occurrence
[1]   A+ B+ C+    >= 0
[2]   A+ B+ C-    >= 0
[3]   A+ B- C+    >= 0
[4]   A+ B- C-    >= 0
[5]   A- B+ C+    >= 0
[6]   A- B+ C-    >= 0
[7]   A- B- C+    >= 0
[8]   A- B- C-    >= 0
Where: A = 0 degrees, B = 67.5 degrees, C = 45 degrees
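
Writing p_1, ..., p_8 for the eight likelihoods in the table's order, one Bell-type inequality follows immediately (a sketch of the standard argument; the p_i notation is added here for illustration):

$$P(A{+},C{+}) = p_1 + p_3 \le (p_1 + p_2) + (p_3 + p_7) = P(A{+},B{+}) + P(B{-},C{+})$$

since p_2 >= 0 and p_7 >= 0. A dataset that violates such an inequality therefore forces at least one of the eight likelihoods to be negative, hence the "negative probabilities" of the page title.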
 
  • #99
DrChinese said:
[..] where is a realistic dataset? Keep in mind that it makes predictions different than QM, which is the entire point of Bell if anyone here is still listening.

Hmmm... if that was "the entire point", then few people would be bothered by Bell's Theorem. :-p
Perhaps we should remind ourselves of Bell's entire point, which De Raedt challenges:
In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can [instantaneously] influence the reading of another instrument, however remote.
- On the EPR paradox

and even:
So the quantum correlations are locally inexplicable
- Bertlmann's socks

Thus Bell claimed to have proved a very weird fact: one that can never be explained by a "local" theory, so that it needs to be explained by, for example, "spooky action at a distance".

Such a claim (theorem) can be invalidated in two ways:

1. By providing a counter example that the theorem claims to be impossible.
2. By showing that the theorem is based on at least one invalid assumption.

The topic of this thread is about De Raedt's attempt at way no.2, and not about way no.1.
 
  • #100
harrylin said:
Hmmm... if that was "the entire point", then few people would be bothered by Bell's Theorem. :-p

"In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions,..."

Good quote. And it means that IF the Inequality is respected, then the results do not agree with QM's predictions. People forget this critical point. Start with a realistic dataset and you won't end up with predictions that match the QM expectation for some angles. All the stuff about Bell Inequalities - how folks attempt to dissect them - does not change this fact. I have given you all the tools to understand this. But you are biased by the end result you want to achieve.

You can derive Bell yourself from scratch. I did, so it can't be THAT hard. Go through the logic from one of my pages and you will see that there are no weird assumptions needed. You start from ANY reasonable realistic requirement, use the Bell thought process knowing a few good angles to use, and voila.

So you want to supply the added parameters mentioned in the quote above. You will see quickly that the MALUS cos^2 relationship goes out the window. For ONE photon stream, not TWO. (That is precisely what happened to the De Raedt team, by the way, as they quite properly pointed out before I got started with their simulation.)
 
  • #101
DrChinese said:
"In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions,..."

Good quote. And it means that IF the Inequality is respected, then the results do not agree with QM's predictions. People forget this critical point. Start with a realistic dataset [...]
That means regretfully that you did not read or understand the next part of my reply:

"Such a claim (theorem) can be invalidated in two ways:

1. By providing a counter example that the theorem claims to be impossible.
2. By showing that the theorem is based on at least one invalid assumption.

The topic of this thread is about De Raedt's attempt at way no.2, and not about way no.1."
So you want to supply the added parameters mentioned in the quote above. [...]
No, why do you think that I have that ambition? As before you confuse me with someone else. However, De Raedt does have that ambition and as you know that is the topic of another thread, which you started:
https://www.physicsforums.com/showthread.php?t=369286
 
Last edited:
  • #102
harrylin said:
That means regretfully that you did not read or understand the next part of my reply... ... showing that the theorem is based on at least one invalid assumption...

You're funny. :biggrin: It really is humorous when folks believe they have found, in a few hours, the one thing that thousands of professional physicists have missed after years of study of this very subject. I would say that most of these professionals have a pretty strong understanding of the Bell assumptions. That is precisely why Bell is so well regarded. You might try reading a few hundred mainstream papers and see if that widens your perception any.

But hey, you are welcome to believe anything you want. Perhaps you believe in flying dogs too... here is a picture that "proves" it:

FlyingDog.jpg
 
  • #103
Just to be clear: If you are here to learn how/why Bell's Theorem works, you are at the right place. If you are here to tear down Bell, you are at the WRONG place. Bell is generally accepted science.

It is not incumbent on folks here to "prove" that the de Raedt team's latest paper is not applicable to Bell. If someone cares to ask why a certain particular element of their reasoning is invalid, I'm sure there will be discussion around that. But if you can't express their reasoning yourself, I don't plan to do your analysis for you.
 
  • #104
DrChinese said:
You're funny. :biggrin: It really is humorous when folks believe they have found, in a few hours, the one thing that thousands of professional physicists have missed after years of study of this very subject.
Yes indeed - for any onlookers: Dr.C misquoted me, making it appear nearly the contrary of what I wrote. :biggrin:
OK then I'll do the same here:
...flying dogs... here is a picture that proves it:
FlyingDog.jpg
It's really humorous that you also believe in flying dogs. :wink:

PS in case you really misunderstood what I wrote, see:
https://www.physicsforums.com/showthread.php?t=123652
 
Last edited:
  • #105
DrChinese said:
Just to be clear: [...] It is not incumbent on folks here to "prove" that the de Raedt team's latest paper is not applicable to Bell. [..]
It depends on what you mean by "here": this thread is not about how Bell's Theorem works, nor about an attempt to construct a realistic model. In this thread we discuss the arguments of De Raedt's latest paper about Boole-Bell. Do you want me to follow your example and start discussing Boolean logic in the thread on De Raedt's latest computer simulations?
 
  • #106
In my post #29 I wrote:

"PS, there is another intriguing remark, not sure if it is on-topic:
in contrast to the EPR-Bohm state, one can really (as EPR claimed) associate with the original EPR state a single probability measure describing incompatible quantum observables (position and momentum).
Can someone here explain what Khrennikov meant?"

Now, I think that Dr.C's remarks in post #89 are helpful:
"use just 1 datapoint [..] Obviously, if it works for the single case"

For, although that remark is wrong for such observables as polarisation, it is correct for such observables as position and momentum.

Thus Khrennikov may have meant that such a single probability distribution is valid for the data set [position, momentum] of a single entangled electron pair. This also relates to the fact that the measurements of position and momentum are mutually exclusive.

Obviously the averaging issue will not appear for a Boole-Bell like inequality for position and momentum.
 
  • #107
harrylin said:
Now this is an important point; and Bill, I must admit that I did not expect such an outcome when you started this simple example![..]
if a large amount of random input data indeed breaks Bell's inequality for this type of test, then you have managed to provide just the kind of illustration that De Raedt's paper lacks, and for which I made a wishful request in post #46.

When I find the time I'll also try your example for more random inputs. :smile:

Well after all, it took only half an hour to try this on a spreadsheet. [CORRECTION: THIS WAS WRONG. See next!]
My result: with a randomly generated data set [+1 -1 +1] etc. I obtained (10 times out of 10 for 30 data points) that the Bell theorem holds for random input, as expected.

But after writing this it strikes me that I did not exactly reproduce Bill's example. What I tested was perhaps closer to Bell's example, which I already verified in the past. :rolleyes:

Bill, I thought that you gave a nearly random data set, but obviously your dataset is very non-random. I replaced random sampling by random input, but that's not the same thing, and it is completely wrong if the input is not random... Interesting!

Is it easy to randomly sample a fixed data set in Excel or OpenOffice? I mean not by hand, but with an operator?

Thanks,
Harald
 
Last edited:
  • #108
harrylin said:
Well after all, it took only half an hour to try this on a spreadsheet. [CORRECTION: THIS WAS WRONG. See next!]
My result: with a randomly generated data set [+1 -1 +1] etc. I obtained (10 times out of 10 for 30 data points) that the Bell theorem holds for random input, as expected.

But after writing this it strikes me that I did not exactly reproduce Bill's example. What I tested was perhaps closer to Bell's example, which I already verified in the past. :rolleyes:

Bill, I thought that you gave a nearly random data set, but obviously your dataset is very non-random. I replaced random sampling by random input, but that's not the same thing, and it is completely wrong if the input is not random... Interesting!

Is it easy to randomly sample a fixed data set in Excel or OpenOffice? I mean not by hand, but with an operator?

Thanks,
Harald

OK I fixed that and took Bill's set of 30 datapoints. And added random sampling.
I calculated with both the original equation of Bell and with the equation of Dr.C. :-p

The result was the same as before: sorry Bill, I got 10 times no violation of the Bell Inequality.
Bill, if you like I can send you the spreadsheet.

Thus I am still interested to see if anyone can come up with an example like De Raedt's one with doctors and patients, but one that does not look like a conspiracy. :rolleyes:

Harald
 
  • #109
harrylin said:
OK I fixed that and took Bill's set of 30 datapoints. And added random sampling.
I calculated with both the original equation of Bell and with the equation of Dr.C. :-p

The result was the same as before: sorry Bill, I got 10 times no violation of the Bell Inequality.
Bill, if you like I can send you the spreadsheet.

Thus I am still interested to see if anyone can come up with an example like De Raedt's one with doctors and patients, but one that does not look like a conspiracy. :rolleyes:

Harald

Good work! Yes, I have run simulations in the past, and once you see the method in detail it is easy to see why you will not get a violation with any reasonable sample. Vice versa, any test of any physical law could be "tricked up" to give a different result than expected. There is nothing special about Bell in that regard.

And the thing to remember about the de Raedt program: A simulation that satisfies Bell (no easy feat, I assure you) will NOT give expectation values for other tests consistent with QM. For example, it will not match Malus. Now, it is important to make a distinction between the Malus cos^2 and the Bell test cos^2. They look to be the same thing but they are not really. The Bell cos^2 coincidentally applies to 2 particle states. But, for example, it does not apply to entangled states of more than 2 particles. The underlying principle is in fact the Malus rule for a single stream. And any simulation that satisfies Bell will not be able to have the Malus rule in place. Obviously, that is a big problem for such a model because Malus was discovered over 200 years ago and is bedrock.
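
For reference, the two cos^2 rules being contrasted here, in their standard textbook forms (the pair formula is for polarization-entangled photons that are perfectly correlated at equal analyzer angles):

$$P_{\text{Malus}}(\text{pass}) = \cos^2(\theta_{\text{polarizer}} - \theta_{\text{photon}}), \qquad P_{\text{pair}}(\text{same outcome}) = \cos^2(\theta_A - \theta_B)$$

The first involves a single stream with a definite polarization; the second involves two streams and depends only on the relative angle of the two analyzers.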
 
  • #110
harrylin said:
Yes indeed - for any onlookers: Dr.C misquoted me, making it appear nearly the contrary of what I wrote. :biggrin:
OK then I'll do the same here:

It's really humorous that you also believe in flying dogs. :wink:

Sorry for any messing up of what you intended to say. I DO believe in flying dogs, however, because if I open my front door, I will see mine flying out, similar to the picture.

:smile:
 
  • #111
Well, I looked at the paper that started the thread. I did not finish it (way too many words) but I can see a number of serious problems with it.

For a start, section III D "Relation to Bell's work" has no relation to Bell's work whatsoever :confused: None of Bell's original assumptions are reflected; in particular, the crucial assumption of independence of results A from settings B and vice versa is nowhere to be found. Neither is the perfect anti-correlation for the same settings of A and B (which is used quite a lot in Bell's derivation). And then the authors confuse individual outcomes of measurement with expectation values and arrive at a completely wrong conclusion about triplets of data sharing the same lambda, while there are no triplets of data at all in Bell's original work, only probabilities and expectation values. And it goes downhill from there.

DK
 
  • #112
harrylin said:
In my post #29 I wrote:

"PS, there is another intriguing remark, not sure if it is on-topic:

Can someone here explain what Khrennikov meant?"

Now, I think that Dr.C's remarks in post #89 are helpful:
"use just 1 datapoint [..] Obviously, if it works for the single case"

For, although that remark is wrong for such observables as polarisation, it is correct for such observables as position and momentum.

Thus Khrennikov may have meant that such a single probability distribution is valid for the data set [position, momentum] of a single entangled electron pair. This also relates to the fact that the measurements of position and momentum are mutually exclusive.

Obviously the averaging issue will not appear for a Boole-Bell like inequality for position and momentum.

Khrennikov's remark means simply that you cannot have a single probability distribution which includes mutually exclusive parts.

If A and B are mutually exclusive, then P(AB) = 0 everywhere, so there is no joint probability distribution. Applied to Bell's case of measuring two particles at three angles: one of those angles will always be mutually exclusive with the other two (only two angles can be measured on two particles), so the joint distribution P(a,b,c) cannot exist.
 
  • #113
harrylin said:
Well after all, it took only half an hour to try this on a spreadsheet. [CORRECTION: THIS WAS WRONG. See next!]
My result: with a randomly generated data set [+1 -1 +1] etc. I obtained (10 times out of 10 for 30 data points) that the Bell theorem holds for random input, as expected.

But after writing this it strikes me that I did not exactly reproduce Bill's example. What I tested was perhaps closer to Bell's example, which I already verified in the past. :rolleyes:

Bill, I thought that you gave a nearly random data set, but obviously your dataset is very non-random. I replaced random sampling by random input, but that's not the same thing, and it is completely wrong if the input is not random... Interesting!
None of my datasets is random although they may appear to be.

Is it easy to randomly sample a fixed data set in Excel or OpenOffice? I mean not by hand, but with an operator?

Thanks,
Harald
I usually just write code to do that.
 
  • #114
harrylin said:
OK I fixed that and took Bill's set of 30 datapoints. And added random sampling.
I calculated with both the original equation of Bell and with the equation of Dr.C. :-p

The result was the same as before: sorry Bill, I got 10 times no violation of the Bell Inequality.
Bill, if you like I can send you the spreadsheet.
You will have to describe what you mean by random sampling, because you may have misunderstood something. Which of the treatments in my post #92 were you unable to reproduce?

Please send the spreadsheet, or better, attach it.

Thus I am still interested to see if anyone can come up with an example like De Raedt's one with doctors and patients, but one that does not look like a conspiracy. :rolleyes:
Harald
I'm surprised you think that my example looks like a conspiracy. Remember that DrC's challenge was supposed to show that even with conspiracy, it is impossible to violate Bell's inequality. I don't think DrC was asking for a random dataset -- that would kind of defeat the purpose; why would he ask for one if anybody could generate one randomly? And if it is assumed that a physical process is producing the data, why can't the physical process have behavioural patterns that are non-random?
 
  • #115
harrylin said:
The result was the same as before: sorry Bill, I got 10 times no violation of the Bell Inequality.
Bill, if you like I can send you the spreadsheet.

I did the calculations for the random sampling case I discussed as "Treatment (3)" in post #92.

For the following dataset,

a, b, c
-----------
+1, -1, +1
-1, +1, -1
-1, +1, -1
-1, +1, -1
+1, -1, -1
-1, +1, +1
-1, +1, -1
+1, -1, +1
-1, -1, -1
-1, -1, -1
-1, -1, +1
-1, +1, +1
+1, -1, -1
+1, -1, -1
-1, -1, +1
-1, -1, -1
+1, -1, +1
-1, +1, -1
-1, +1, -1
+1, +1, +1
+1, -1, -1
-1, -1, +1
+1, -1, +1
+1, -1, +1
+1, +1, +1
-1, -1, -1
+1, +1, +1
-1, -1, -1
-1, +1, -1
+1, -1, +1

Here is the method I use. REMEMBER - you must sample without replacement. This is equivalent to randomly shuffling the dataset and then picking the first 10 rows of the resulting dataset for calculating ab, the next 10 for bc and the last 10 for ac, in a manner similar to treatment (2). Again, to be clear, the procedure is as follows:

- Randomize the sequence by shuffling it
- select the first 10 of the resulting randomized sequence, and use for the ab term, the next 10 for the bc term and the last 10 for the ac term.

This way, we are sure that every row is used NOT MORE THAN ONCE.

I did the above 10 times, randomizing every time from the previous random sequence, and got the following results:
ab=-0.4000, bc=0.0000, ac=0.4000, Violated=False
ab=-0.4000, bc=0.0000, ac=0.2000, Violated=False
ab=0.0000, bc=-0.2000, ac=0.2000, Violated=False
ab=-0.6000, bc=-0.4000, ac=0.4000, Violated=True
ab=-0.2000, bc=-0.2000, ac=0.6000, Violated=False
ab=-0.2000, bc=-0.8000, ac=0.2000, Violated=True
ab=0.0000, bc=-0.2000, ac=0.4000, Violated=False
ab=0.0000, bc=-0.4000, ac=0.2000, Violated=False
ab=-0.6000, bc=-0.4000, ac=0.0000, Violated=False
ab=-0.4000, bc=0.2000, ac=0.8000, Violated=True

Repeating this 1000 times consistently gives me a violation in about 20% of the randomly sampled (without replacement) trials.

NOTE, 1 violation is enough.
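
A compact Python version of this shuffle-and-split procedure (an illustrative sketch, not Bill's actual code; the violation rate fluctuates with the random seed, so the ~20% figure above is not asserted by the code itself):

```python
import random

# The 30 (a, b, c) triples from the table above, as a Python list.
triples = [
    (+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (-1,+1,-1), (+1,-1,-1),
    (-1,+1,+1), (-1,+1,-1), (+1,-1,+1), (-1,-1,-1), (-1,-1,-1),
    (-1,-1,+1), (-1,+1,+1), (+1,-1,-1), (+1,-1,-1), (-1,-1,+1),
    (-1,-1,-1), (+1,-1,+1), (-1,+1,-1), (-1,+1,-1), (+1,+1,+1),
    (+1,-1,-1), (-1,-1,+1), (+1,-1,+1), (+1,-1,+1), (+1,+1,+1),
    (-1,-1,-1), (+1,+1,+1), (-1,-1,-1), (-1,+1,-1), (+1,-1,+1),
]

def corr(rows, i, j):
    return sum(r[i] * r[j] for r in rows) / len(rows)

def one_trial():
    """Shuffle, then use disjoint blocks of 10 rows for the three terms,
    so every row is used at most once (sampling without replacement)."""
    s = random.sample(triples, len(triples))
    ab = corr(s[0:10], 0, 1)
    bc = corr(s[10:20], 1, 2)
    ac = corr(s[20:30], 0, 2)
    return ab, bc, ac

violations = 0
for _ in range(1000):
    ab, bc, ac = one_trial()
    if 1 + bc < abs(ab - ac):   # 1 + <bc> >= |<ab> - <ac>| fails
        violations += 1
print(f"violated in {violations} of 1000 shuffles")
```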
 
  • #116
billschnieder said:
Repeating this 1000, consistently gives me a violation in about 20% of the randomly sampled (without replacement) pairs.

NOTE, 1 violation is enough.
That is incorrect. You are confusing expectation values with averages. Bell's inequality is stated in terms of expectation values, and you are computing averages. The expected number of heads in 100 fair coin tosses is 50, but the count in a particular run can be anywhere between 0 and 100. It does not mean anything.

If you want to do the job properly, then please calculate not only the mean values but also the standard deviation. You will then find that Bell's inequality actually holds quite well and the occasional deviations are well within the error bars.

If you can show how to beat Bell's inequalities consistently, by at least a few standard deviations (which is the case with QM), then you have a case; otherwise you will have to try harder.
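
In code, this suggestion amounts to looking at the distribution of the Bell "margin" over many shuffles rather than at single runs (a sketch that reuses `triples`, `corr` and `one_trial` from the post #115 sketch above):

```python
import statistics

# Reuses `triples`, `corr` and `one_trial` from the sketch in post #115.
margins = []
for _ in range(1000):
    ab, bc, ac = one_trial()
    margins.append((1 + bc) - abs(ab - ac))   # negative = single-run "violation"

mean = statistics.mean(margins)
sd = statistics.stdev(margins)
stderr = sd / len(margins) ** 0.5
print(f"margin = {mean:.3f}, std dev = {sd:.3f}, std error of mean = {stderr:.4f}")
# The criterion above: a genuine violation needs the *mean* margin to sit
# several standard errors below zero; occasional negative single-run
# margins are just sampling noise.
```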
 
  • #117
Delta Kilo said:
That is incorrect. You are confusing expectation values with averages. Bell's inequality is stated in terms of expectation values, and you are computing averages. The expected number of heads in 100 fair coin tosses is 50, but the count in a particular run can be anywhere between 0 and 100. It does not mean anything.
You have no clue what you are talking about. I doubt you have made an effort to understand what this thread is about (see http://arxiv.org/pdf/quant-ph/0211031, published in Foundations of Physics Letters, Volume 15, Number 5, 473-486, DOI: 10.1023/A:1023920230595).

Bell's inequality is equivalent to saying "The sum of any 3 sides of a die will never exceed 15".
But if you measure one side from each of three different dice, you will get violations some of the time, even though NO single die violates the rule. You therefore cannot conclude from such an experiment (3 different dice) that a single die does not have well-defined values for its sides. A violation in a single case shows that there is something wrong in the correspondence between the experiment and the rule. This is what De Raedt showed.

This is the crux of the issue, which you haven't understood.
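
The dice analogy is easy to check numerically (a sketch; "measuring one side" is modeled as reading one uniformly random face):

```python
import random

faces = [1, 2, 3, 4, 5, 6]

def three_sides_same_die():
    # Three *distinct* faces of one die: at most 4 + 5 + 6 = 15.
    return sum(random.sample(faces, 3))

def one_side_each_of_three_dice():
    # One face from each of three dice; faces can repeat, e.g. 6 + 6 + 6.
    return sum(random.choice(faces) for _ in range(3))

N = 100_000
print(sum(three_sides_same_die() > 15 for _ in range(N)))         # always 0
print(sum(one_side_each_of_three_dice() > 15 for _ in range(N)))  # > 0 (roughly 4.6%)
```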

Delta Kilo said:
If you want to do the job properly, then please calculate not only the mean values but also the standard deviation. You will then find that Bell's inequality actually holds quite well and the occasional deviations are well within the error bars.

If you can show how to beat Bell's inequalities consistently, by at least a few standard deviations (which is the case with QM), then you have a case; otherwise you will have to try harder.
Bell's inequality can never be violated even for a single data point, if the mathematical operation is valid. Violation of the inequality by a single data point tells you a mathematical error has been made.
 
Last edited:
  • #118
billschnieder said:

Oh, this paper is much better written than De Raedt's, and the cheating step is much more subtle. But it is similar.

billschnieder said:
Bell's inequality is equivalent to saying "The sum of any 3 sides of a die will never exceed 15".
No, this is not what Bell says at all. Your example is pure arithmetic and Bell's theorem is statistical. There is a big difference.

Bell basically has a triangle inequality for the expectation values of lengths. He specifically introduces P(a,b) as an expectation value in (2)*.

This is different from the ordinary triangle inequality for a single triangle. The ordinary triangle inequality is valid only when all three sides belong to the same triangle (obviously). By summing over a sequence of triangles one gets a similar inequality for the average triangle. But there is a constraint: the same set of triangles must be used when calculating the averages for each side.

The constraint is removed when one transitions from sums and averages, which vary from one experiment to the next, to expectation values, which are a property of the stochastic process as a whole. Doing this requires certain assumptions about the process (e.g. stationarity). In Bell's paper these assumptions are encoded in the probability density \rho(\lambda) being a function of \lambda and nothing else, and in A(a,\lambda) and B(b,\lambda) being fully deterministic. These extra assumptions about the stochastic process behind the data are what allow one to estimate the expectation of each side of the triangle independently.

billschnieder said:
But if you measure one side from each of three different dice, you will get violations some of the time, even though NO single die violates the rule. You therefore cannot conclude from such an experiment (3 different dice) that a single die does not have well-defined values for its sides. A violation in a single case shows that there is something wrong in the correspondence between the experiment and the rule. This is what De Raedt showed.

Both De Raedt and Sica follow similar lines of logic:
First they note that Bell's inequality is probabilistic, and they also notice a similar inequality which is always true for a single data point.
So they have this great idea of how to improve on Bell and get rid of all uncertainties once and for all. They start by extending the single-data-point formula to a sequence. While doing that they discover that they no longer need \rho(\lambda), and they happily get rid of it, thereby throwing the baby out with the bathwater.

But then, when it comes to the experiment, the absence of the \rho(\lambda) assumption comes back and bites them in the nose: in Bell's paper the factorization between (14) and (15) comes naturally from the math, thanks to the shape of formula (14).

De Raedt derives his "EBBI" (which he claims are equivalent to, but better than, Bell's), then discovers they cannot be easily factorized because of non-commuting measurements, creates a big fuss out of it, and claims it is all Bell's fault.

Sica also claims to derive Bell's inequality in section 1.2. He says:
If the numerical correlation estimates in (11) approach ensemble average limits, as N\rightarrow\infty, then replacing the estimates in (11) with these limits results in the usual form of Bell's inequality.
However, he does not actually do this step, and so he does not spell out the assumptions that have to be made. Later he also gets in trouble with factorization and non-commuting measurements. Since he cannot get 3 data points from one pair of measurements, he resorts to a bit of cheating: he re-arranges the data so that measurements of <A_i B_i> and <A'_i B_i> with the same i have the same value of B_i, and then calculates <A_i A'_i> by going through <A_i B_i> and <A'_i B_i>. Somehow he is not worried at all that (12) and (16) have different shapes, even though they must be the same thing from a symmetry point of view. Then of course he gets the wrong result for (21), again different from (22) and (23), from which he claims that QM satisfies Bell's inequality.

SUMMARY: How to disprove Bell's inequality:
  1. Take Bell's inequality
  2. "Improve it" by throwing vital bits out
  3. Run into trouble
  4. Blame Bell
  5. ...?
  6. Publish!
billschnieder said:
Bell's inequality can never be violated even for a single data point, if the mathematical operation is valid. Violation of the inequality by a single data point tells you a mathematical error has been made.

If you refer to formulas with sums rather than expectations in them (those that are supposed to be true in an arithmetic rather than statistical sense), please do not call them "Bell's inequalities". Call them "De Raedt's" or "Sica's" or "Bill's inequalities" if you wish.

Again, I stress: Bell's inequality is for expectations, and expectations are not averages. One cannot simply plug experimental averages into a formula for expectations and expect it to work 100%. At the very least one has to compute the standard deviation and define error bars.

*) J.S. Bell, "On the Einstein Podolsky Rosen paradox", 1964

DK
 
  • #119
Delta Kilo said:
Oh, this paper is much better written than De Raedt's, and the cheating step is much more subtle. But it is similar.


No, this is not what Bell says at all. Your example is pure arithmetic and Bell's theorem is statistical. There is a big difference.
I disagree: Bell's inequality is a pure arithmetic identity applied to statistics. You cannot derive the inequality if you start from statistics. If you like, we can go through the derivation step by step, to convince you that the derivation is pure arithmetic.


This is different from the ordinary triangle inequality for a single triangle. The ordinary triangle inequality is valid only when all three sides belong to the same triangle (obviously). By summing over a sequence of triangles one gets a similar inequality for the average triangle. But there is a constraint: the same set of triangles must be used when calculating the averages for each side.

Interesting that you mention this. See post #152 in the related thread (https://www.physicsforums.com/showpost.php?p=3308861&postcount=152) where I discussed this, as follows:

I suppose you know about the triangle inequality, which says that for any triangle with sides labeled x, y, z, where x, y, z represent the lengths of the sides,

z <= x + y

Note that this inequality applies to a single triangle. What if you could only measure one side at a time? Assume that for each measurement you set the label of the side your instrument should measure, and it measured that length, destroying the triangle in the process. So you performed a large number of measurements on different triangles, measuring <z> for the first run, <x> for the next and <y> for the next.

Do you believe the inequality

<z> <= <x> + <y>

is valid? In other words, do you believe it is legitimate to use those averages in your inequality to verify its validity?


Funny thing: your statement in bold says exactly that the same set of triangles must be used to calculate the terms. Isn't it disingenuous, then, for you to suggest something different in Bell's case?

In other words: do you agree that for Bell's inequality to be guaranteed to be obeyed, the same set of photons must be used to calculate all three expectation values used in the inequalities? Please, I need a Yes/No answer here.

If you agree, then you have conceded Sica's point, and De Raedt's point and my point.
If you disagree, see the next point.

The constraint is removed when one transitions from sums and averages, which vary from one experiment to the next, to expectation values, which are a property of the stochastic process as a whole. Doing this requires certain assumptions about the process (e.g. stationarity).

Is it your claim, therefore, that you do not need to use the same set of particles because the process generating the particles is stationary?
I need a Yes/No answer here.

If you agree, then I suppose you have proof that it is stationary. If I were to provide evidence that the process producing the photons is not stationary, will you concede that expectation values from such a process are not compatible with Bell's inequality? I need a Yes/No answer here.

In Bell's paper these assumptions are encoded in the probability density \rho(\lambda) being a function of \lambda and nothing else, and in A(a,\lambda) and B(b,\lambda) being fully deterministic. These extra assumptions about the stochastic process behind the data are what allow one to estimate the expectation of each side of the triangle independently.

I disagree: it is the factorization mentioned in equation (5) of Sica's paper above that is the crucial step introducing the assumption of stationarity. That step is equivalent to going from the universally valid arithmetic inequality

<z> <= <x + y>

to the statistical inequality

<z> <= <x> + <y>

which is only obeyed when the process generating the triangles is stationary.

Both De Raedt and Sica follow similar lines of logic:
First they note that Bell's inequality is probabilistic, and they also notice a similar inequality which is always true for a single data point.
So they have this great idea of how to improve on Bell and get rid of all uncertainties once and for all. They start by extending the single-data-point formula to a sequence. While doing that they discover that they no longer need \rho(\lambda), and they happily get rid of it, thereby throwing the baby out with the bathwater.
This is a mischaracterization of their work, which I don't think you have understood yet. They are not trying to improve on Bell. They are explaining why datasets from experiments/QM are not compatible with Bell's inequality.

But then, when it comes to the experiment, the absence of the \rho(\lambda) assumption comes back and bites them in the nose: in Bell's paper the factorization between (14) and (15) comes naturally from the math, thanks to the shape of formula (14).
I think you are just handwaving here. Experimenters do not know or care about lambda.

De Raedt derives his "EBBI" (which he claims are equivalent to, but better than, Bell's), then discovers they cannot be easily factorized because of non-commuting measurements, creates a big fuss out of it, and claims it is all Bell's fault.
Non-commuting measurements are not compatible with stationarity, as Sica explains. Therefore you cannot use expectation values from QM/experiments as valid terms in Bell's inequality. That is the point you have admitted by stating that stationarity is a prerequisite. If Bell introduced stationarity as a requirement without cause, that is his problem. If Bell failed to realize that the stationarity requirement is incompatible with QM to start with, then that is his fault.

Sica also claims to derive Bell's inequality in section 1.2. He says: If the numerical correlation estimates in (11) approach ensemble average limits, as N\rightarrow\infty, then replacing the estimates in (11) with these limits results in the usual form of Bell's inequality. However, he does not actually do this step, and so he does not spell out the assumptions that have to be made. Later he also gets in trouble with factorization and non-commuting measurements. Since he cannot get 3 data points from one pair of measurements, he resorts to a bit of cheating: he re-arranges the data so that measurements of <A_i B_i> and <A'_i B_i> with the same i have the same value of B_i, and then calculates <A_i A'_i> by going through <A_i B_i> and <A'_i B_i>. Somehow he is not worried at all that (12) and (16) have different shapes, even though they must be the same thing from a symmetry point of view. Then of course he gets the wrong result for (21), again different from (22) and (23), from which he claims that QM satisfies Bell's inequality.

You do not understand it at all. In Bell test experiments, 3 sub-experiments are done, in which the following measurements are made:

"a1 b1"
"b2 c2"
"a3 c3"

Sica essentially says: if the process producing the data is stationary, it should be possible to sort the datasets such that the number and pattern of switching between +1 and -1 in b1 and b2 are identical in the first two runs. After doing that, you could *factor* out the a1 list from run 1 and the c2 list from run 2, recombine them, and create a counterfactual "a1 c2" run. Therefore you do not need to measure run 3 at all. You will have:

"a1 b1"
"b2 c2"
"a1 c2"

which will never violate the inequality. This is exactly the type of factorization which Bell did in going from equation (15) to (16).

If, however, it is not possible to sort the data from experimental runs 1 and 2 as outlined above, then the stationarity assumption fails and the inequality is not applicable to the data. Do you agree?
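
A toy version of the re-arrangement Sica describes (an illustrative sketch; "sortable" here simply means the b-columns of the two runs contain the same multiset of values, which is what the stationarity assumption would guarantee):

```python
def recombine(run1, run2):
    """run1: list of (a1, b1) outcome pairs; run2: list of (b2, c2) pairs.
    Sort each run on its b-column; if the sorted b-columns agree, pair
    each a1 with the c2 that now shares its row, giving a counterfactual
    (a1, c2) run without any third measurement."""
    r1 = sorted(run1, key=lambda p: p[1])   # sort run 1 by b1
    r2 = sorted(run2, key=lambda p: p[0])   # sort run 2 by b2
    if [b for _, b in r1] != [b for b, _ in r2]:
        return None   # b-columns cannot be matched: the data is "non-stationary"
    return [(a, c) for (a, _), (_, c) in zip(r1, r2)]

run1 = [(+1, -1), (-1, +1), (+1, +1), (-1, -1)]   # measured (a1, b1) outcomes
run2 = [(+1, -1), (-1, +1), (+1, +1), (-1, -1)]   # measured (b2, c2) outcomes
print(recombine(run1, run2))   # the constructed "a1 c2" run
```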


SUMMARY: How to disprove Bell's inequality:
  1. Take Bell's inequality
  2. "Improve it" by throwing vital bits out
  3. Run into trouble
  4. Blame Bell
  5. ...?
  6. Publish!
You are not being serious.
 
Last edited:
  • #120
I also draw your attention to the follow-up paper by L. Sica which addresses the stationarity issue further.


Bell's inequality violation due to misidentification of spatially non stationary random processes
Journal of Modern Optics, 2003, Vol. 50, No. 15-17, 2465-2474
http://arxiv.org/abs/quant-ph/0305071v1

Correlations for the Bell gedankenexperiment are constructed using probabilities given by quantum mechanics, and nonlocal information. They satisfy Bell's inequality and exhibit spatial non stationarity in angle. Correlations for three successive local spin measurements on one particle are computed as well. These correlations also exhibit non stationarity, and satisfy the Bell inequality. In both cases, the mistaken assumption that the underlying process is wide-sense-stationary in angle results in violation of Bell's inequality. These results directly challenge the wide-spread belief that violation of Bell's inequality is a decisive test for nonlocality.
 
