Can Numerical Simulations Reveal Violations of CHSH Inequalities?

  • Thread starter: jk22
  • Tags: CHSH, Numerical
jk22
In the hidden variable model for CHSH, I computed the probabilities for the CHSH operator to take the values -4, -2, 0, 2, 4, and then the average S.

Using a computer program, I searched for the minimum of that function and obtained |S| > 2:

C:
#include<stdio.h>
#include<math.h>

#define PI (4.0*atan(1.0))
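
/* Intended: the probability-weighted average of the CHSH value, i.e. the sum of
   v * p(CHSH = v) over v = -4, -2, 2, 4 (the v = 0 term contributes nothing).
   Angles are divided by pi and a is taken as the reference angle 0.
   (One of the factors below later turns out to be wrong; see the follow-up posts.) */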

double S(double b, double b1, double a1)
{
   double x=a1-b, y=b1-a1;
   double ret=-4.0*(1.0-b)*b1*(1.0-x)*(1.0-y);
   
   ret+=-2.0*(1.0-x)*(1.0-y)*(b*b1);
   ret+=-2.0*(1.0-b)*(1.0-b1)*(1.0-x)*(1.0-y);
   ret+=-2.0*(1.0-b)*b1*x*(1.0-y);
   ret+=-2.0*(1.0-b)*b1*(1.0-x)*y;

   
   ret+=2.0*b*b1*x*y;
   ret+=2.0*(1.0-b)*b1*x*y;
   ret+=2.0*b*(1.0-b1)*x*(1.0-y);
   ret+=2.0*b*(1.0-b1)*(1.0-x)*y;

   ret+=4.0*b*(1.0-b1)*x*y;

   return(ret);
}

int main(void)
{
   int sub=500;
   int si=0,sj=0,sk=0;

   double smax=-5.0, smin=5.0;
   double val;
   double b,b1,a1;

   printf("%lf\n", S(.25, .5, .75));

   for(int i=0;i<sub;i++)
   for(int j=i;j<sub;j++)
   for(int k=j;k<sub;k++)
   {
     b=(double)i/(double)sub;
     a1=(double)j/(double)sub;
     b1=(double)k/(double)sub;

     val=S(b,b1,a1);

     if(val>smax) smax=val;
     if(val<smin)
     {
       smin=val;
       si=i;
       sj=j;
       sk=k;
     }
   }

   printf("Shv=%lf b=%lf b1=%lf a1=%lf\n", smin, (double)si/(double)sub, (double)sk/(double)sub, (double)sj/(double)sub);

   b=(double)si/(double)sub*PI;
   a1=(double)sj/(double)sub*PI;
   b1=(double)sk/(double)sub*PI;

   printf("Sqm=%lf\n", -cos(b)+cos(b1)-cos(a1-b)-cos(b1-a1));
}

giving

Shv=-2.018518 b=0.000000 b1=0.334000 a1=0.166000

But there must be an error in the S function.
 
Some things you should do when presenting code like this:
  • Structure the program so you can't cheat. A local model of CHSH should have all the "magic" happening inside a local-to-alice function and a local-to-bob function. Any function touching parameters from both Alice and Bob should be boring and trivially correct, instead of involving 10 arbitrary unexplained equations.
  • Cut everything unnecessary, for example any code not related to your point, like the triply-nested find-minimum-parameters loop. Just hard-code the computed parameters.
  • Use descriptive variable names and comments. We don't know what "b1" is or how it relates to "b" vs "a1".
As it stands, I don't understand what your code in S is trying to do. Presumably an expected value computation, but it's all mixed up with whatever mistake you made and disentangling it is really your responsibility.

You should have a function that returns a probability distribution of Alice-hidden-particle-state and Bob-hidden-particle-state pairs. Then you should have an Alice function and a Bob function that, given the corresponding hidden-particle-state, returns a probability distribution of local measurement outcomes for each measurement setting specified by the CHSH game. Then you should have a function that combines those local outcome distributions for each possible state pair and computes the overall expectation, to see if it violates CHSH.
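
For example, something along these lines would already be much easier to check (just a minimal sketch that samples instead of carrying explicit probability distributions; the step-function model, the settings and the sample size are placeholders, not your actual model):

C:
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of the structure described above, sampling instead of carrying
   explicit probability distributions.  The step-function model, the settings and
   the sample size are placeholders, not jk22's actual model. */

/* source of hidden states: here both particles carry the same lambda in [0,1) */
double hidden_state(void)
{
   return (double)rand() / ((double)RAND_MAX + 1.0);
}

/* local to Alice: outcome +/-1 from her setting and her hidden state only */
int alice(double a, double lambda)
{
   return lambda > a ? +1 : -1;
}

/* local to Bob: outcome +/-1 from his setting and his hidden state only */
int bob(double b, double lambda)
{
   return -(lambda > b ? +1 : -1);
}

/* boring combiner: estimate <AB> for one pair of settings */
double correlator(double a, double b, int n)
{
   double sum = 0.0;
   for (int i = 0; i < n; i++)
   {
      double lambda = hidden_state();
      sum += alice(a, lambda) * bob(b, lambda);
   }
   return sum / n;
}

int main(void)
{
   /* placeholder settings (angles divided by pi, as in the original post) */
   double a = 0.0, a1 = 0.5, b = 0.25, b1 = 0.75;
   int n = 100000;

   double S = correlator(a, b, n) - correlator(a, b1, n)
            + correlator(a1, b, n) + correlator(a1, b1, n);
   printf("S=%lf\n", S);   /* about -2 here; |S|<=2 for any local model in the large-n limit */
   return(0);
}

With everything local confined to the alice and bob functions, there is nothing left in the combiner where a violation could sneak in unnoticed.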
 
Thanks for your hints. I will try to explain the S function:

In fact S is the average CHSH, but computed "horizontally" instead of vertically. By this I mean that normally we compute the covariance $$\langle AB \rangle$$.

Here I looked at $$\mathrm{CHSH}=AB-AB'+A'B+A'B'$$

There are 16 possibilities, since the A's and B's are ±1 and hence each of the four products is ±1.

According to the CHSH theorem, locally the value can only be 2 or -2.

However I could compute it this way: I use the local model $$A(a,x)=2\Theta(x-a)-1$$, $$B(b,x)=-A(b,x)$$.

The difference then comes in: for example $$p(AB=1)=\int_a^b A(a,x)B(b,x)\,dx=b-a,$$ where a and b are the angles divided by ##\pi##, hence running from 0 to 1.

Then, by assuming that the pairs AB, AB' and so on are independent, we can compute $$p(\mathrm{CHSH}=-4)=p(AB=-1)\,p(AB'=1)\,p(A'B=-1)\,p(A'B'=-1)=(1-b)\,b'\,(1-a'+b)\,(1-b'+a')$$ (a is taken as zero, since we can choose a reference angle).

Thus, using this local model, we see there is a non-vanishing probability for the value -4.

What remains is to compute the probabilities for +2 and -2 (which have 4 possibilities each) and for +4.

The job was then to compute the average, which should be at most 2 in absolute value.

Pogo, and there I found the mistake: it is in the code, namely in the computation of the probabilities for 2; one factor should be (1-b) and not b.

So, false alarm, sorry for the disturbance; the average comes out at most 2 in absolute value.

What we learn is that even if CHSH can take the value -4 with local hidden variables, its average is not larger than 2 in absolute value, in agreement with Bell's theorem.

In fact, using this local model the covariance is ##-1+2(b-a)##, and $$S=[-1+2(b-a)]+[1-2(b'-a)]+[-1+2(a'-b)]+[-1+2(b'-a')]=-2,$$ so S can take only one value and is a constant, independent of the angles.
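
Here is a small sketch of that check (placeholder angles, with a = 0 and 0 ≤ b ≤ a' ≤ b' ≤ 1 assumed): it enumerates the 16 sign patterns of the four products, weights them with the probabilities above as if the products were independent, and averages CHSH.

C:
#include <stdio.h>

/* Sketch only: enumerate the 16 sign patterns of the four products under the
   step-function model, treat them as independent (as above), and average
   CHSH = AB - AB' + A'B + A'B'.  Placeholder angles, divided by pi, with
   a = 0 and 0 <= b <= a' <= b' <= 1 assumed. */

int main(void)
{
   double b = 0.25, a1 = 0.5, b1 = 0.75;

   /* p(product = +1) for AB, AB', A'B, A'B' respectively */
   double p[4]    = { b, b1, a1 - b, b1 - a1 };
   int    sign[4] = { +1, -1, +1, +1 };

   double avg = 0.0;
   for (int m = 0; m < 16; m++)
   {
      double prob = 1.0;
      int chsh = 0;
      for (int k = 0; k < 4; k++)
      {
         int val = ((m >> k) & 1) ? +1 : -1;
         prob *= (val == +1) ? p[k] : 1.0 - p[k];
         chsh += sign[k] * val;
      }
      avg += prob * chsh;   /* weight each value in {-4,...,4} by its probability */
   }
   printf("average CHSH = %lf\n", avg);   /* -2 for any admissible angles */
   return(0);
}

So even with the independence assumption the average stays exactly at -2; only the individual terms spread over -4 to 4.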

However, in experiments the average of CHSH varies depending on the angles, hence we deduce this model is wrong.
 
Last edited:
As a summary:

For CHSH the average is at most 2 in absolute value, but individual results of 4, 0 and -4 have to be taken into account.

Thus LHV is not as dead as it seems at the outset. A point is that with this LHV we have p(-2) < p(-4).
In an experiment, however, the probabilities are estimated from statistics, so in a sample we could get more -4 results than expected and hence obtain a violation.

If we look at experiments like Ansmann's with Josephson qubits, and others, we see that in general the experimental results are nearer to 2 than to 2.82.

Does this mean that LHV models could still be an explanation for the correlations?
 
There is always the possibility that an LHV model will violate the CHSH inequality with a finite number of trials. However, any given violation becomes less and less likely as we increase the number of trials. Experimentally observed violations of the inequality are thus reported together with a p-value, which is the probability that an LHV model could generate the observed value (or more).

See more here: http://arxiv.org/abs/1207.5103
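
Just to put some flesh on that, here is a rough sketch of such a significance estimate (with simplifying assumptions of my own: an LHV model sitting exactly at the bound S = -2, each correlator estimated from n independent ±1 products with mean -1/2, a Gaussian approximation, and an observed value of -2.1 picked purely as an example):

C:
#include <stdio.h>
#include <math.h>

/* Rough sketch of a significance estimate, under simplifying assumptions:
   an LHV model sitting exactly at the bound S = -2, each correlator estimated
   from n independent +/-1 products with mean -1/2 (variance 3/4), a Gaussian
   approximation for the estimate of S, and an observed value of -2.1 chosen
   purely as an example. */

int main(void)
{
   double observed = -2.1;               /* example estimate of S */
   double excess = -2.0 - observed;      /* how far past the LHV bound */
   double var_per_product = 1.0 - 0.25;  /* Var(AB) = 1 - <AB>^2 with <AB> = -1/2 */

   for (int n = 100; n <= 100000; n *= 10)
   {
      double sigma = sqrt(4.0 * var_per_product / n);  /* std dev of the S estimate */
      double z = excess / sigma;
      double pvalue = 0.5 * erfc(z / sqrt(2.0));       /* P(estimate <= observed | LHV) */
      printf("n=%6d per correlator: sigma=%.4lf, p-value=%.3g\n", n, sigma, pvalue);
   }
   return(0);
}

The same observed violation of 0.1 is then quite unremarkable for a short run and overwhelmingly significant for a long one.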
 
This is not in agreement with CHSH, which affirms that the extreme values of S are strictly 2 and -2. It is very sharp. Hence any violation of CHSH implies the derivation of the theorem is wrong, at least the one usually presented, but not the calculation above.

I think the flaw in Bell's theorem is that it takes the sum of the integrals and draws conclusions about the integrand without actually doing the change of variables:

$$\int A (a,x)B (b,x)\rho (x)dx+\int A (a,y)B (b',y)\rho (y)dy+\int A (a',z)B (b,z)\rho (z)dz-\int A (a',w)B (b',w)\rho (w)dw$$

CHSH continues by putting w=y=z=x, which gives the correct range for the integral, namely [-2, 2].

However the derivation is misleading since it affirms that the integrand is $$A(B+B')+A'(B-B')\in\{-2,2\}$$

This allows for no violation at all since we are summing 2s and -2s.

However, another change of variables y=f(x), z=g(x), w=h(x) in general shows that we are summing -4s, -2s, 0s, 2s and 4s, with an average of -2. Here we could get a sample that violates CHSH, but not in the original derivation.
 
jk22 said:
This is not in agreement with CHSH, which affirms that the extreme values of S are strictly 2 and -2.

That's an oversimplification. Bell's theorem puts a strict upper limit $$\langle A B \rangle + \langle A B' \rangle + \langle A' B \rangle - \langle A' B' \rangle \leq 2$$ on the sum of the expectation values ##\langle A B \rangle## (etc.), but this quantity can only be estimated in a finitely long Bell experiment and the estimate could exceed 2 just by chance. So it's not true that any simulated or measured result greater than 2 automatically falsifies the assumptions of the derivation. An additional statistical analysis is needed to determine if the violation is statistically significant.

I think the flaw in Bell's theorem is that it takes the sum of the integrals and draws conclusions about the integrand without actually doing the change of variables:

$$\int A (a,x)B (b,x)\rho (x)dx+\int A (a,y)B (b',y)\rho (y)dy+\int A (a',z)B (b,z)\rho (z)dz-\int A (a',w)B (b',w)\rho (w)dw$$

CHSH continues by putting w=y=z=x, which gives the correct range for the integral, namely [-2, 2].

Your criticism doesn't make sense. In what you wrote, ##w##, ##x##, ##y##, and ##z## are integration variables, and the integrals don't depend on what the variables are called. ##\int f(x) \mathrm{d}x## is the same thing as ##\int f(y) \mathrm{d}y##, just like ##\sum_{i} a_{i}## is the same thing as ##\sum_{j} a_{j}##.

There is an assumption needed to combine the integrals though, which is that it is the same function ##\rho(\lambda)## that appears in each term independently of the choice of measurement (##a## or ##a'## and ##b## or ##b'##). Physically, this translates to assuming that the initial hidden state ##\lambda## is uncorrelated with the choices of measurements. But if this is what you're getting at then you're not pointing out anything new. This is already a known loophole in Bell's theorem.
 
Last edited:
It was just to make clear that 2 is not a maximum value for individual outcomes but a bound on the average.

By chance it is not new, because new means wrong; how dare you do that?
 
jk22 said:
This is not in agreement with CHSH, which affirms that the extreme values of S are strictly 2 and -2.
No, the extreme values for any experiment with a finite number of trials are in fact 4 and -4. Only when you consider experiments with a large number of trials will the CHSH probability interval converge towards 2 and -2.
This is the same principle as tossing a fair coin 1000 times. The extreme values for the number of heads is between 0 and 1000. But obviously, if the coin is fair, that number will be close to 500. Any substantial deviation from that (say 800) should lead you to the conclusion that the coin is not fair, with large confidence.
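
To put a number on that confidence, here is a small sketch (the 800-heads threshold is just the example above):

C:
#include <stdio.h>
#include <math.h>

/* Sketch: exact tail probability P(#heads >= 800) for 1000 fair tosses,
   using log-binomial coefficients (lgamma) to avoid overflow. */

int main(void)
{
   int n = 1000, k0 = 800;
   double tail = 0.0;

   for (int k = k0; k <= n; k++)
   {
      /* log of C(n,k) * (1/2)^n */
      double logp = lgamma(n + 1.0) - lgamma(k + 1.0) - lgamma(n - k + 1.0) + n * log(0.5);
      tail += exp(logp);
   }
   printf("P(heads >= %d out of %d) = %g\n", k0, n, tail);
   return(0);
}

Which is why 800 heads out of 1000 is not merely "unlikely" for a fair coin, but for all practical purposes impossible.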
 
Last edited:
  • #10
jk22 said:
By chance it is not new, because new means wrong; how dare you do that?

No. I was describing the "no superdeterminism" (or "no retrocausality") assumption needed to derive Bell inequalities. If this is what you were getting at then it isn't new because researchers in the field already know about this loophole.
 
  • #11
Heinera said:
No, the extreme values for any experiment with a finite number of trials are in fact 4 and -4. Only when you consider experiments with a large number of trials will the CHSH probability interval converge towards 2 and -2.

I don't think that's true, because for the maximum violation the probability p(-4) tends towards (3/4)^4 and not zero.
 
  • #12
jk22 said:
I don't think that's true, because for the maximum violation the probability p(-4) tends towards (3/4)^4 and not zero.

And so what is the probability that after collecting the results for N pairs, we will find S=-4 across the entire set? That will be an expression of the form ##\alpha^{N}## where ##0\lt\alpha\lt{1}##, and that goes to zero as ##N## goes to infinity.
 
Last edited:
  • #13
So the same would hold for S=-2, except if in this case the probability is 1. But having -2 as the average does not imply we have only -2 results in the sample. On the contrary, if we have a small number of measurements the estimate could be bigger than 2 in absolute value, hence a 4 (or -4) happened.

I want to apologize to PF: it seemed new, but new means wrong, and I found the flaw:

In fact my calculation makes use of the independence hypothesis, which is wrong since the average is fixed. If the four products were really independent the average could vary from -4 to 4, but that is not the case.
 
  • #14
Nugatory said:
And so what is the probability that after collecting the results for N pairs, we will find S=-4 across the entire set? That will be an expression of the form ##\alpha^{N}## where ##0\lt\alpha\lt{1}##, and that goes to zero as ##N## goes to infinity.

Nugatory, this is philosophically interesting: it means that in the long run there remain only processes where we have no choice. This reasoning would also apply to coin tossing: P(head)=1/2, and (1/2)^n tends to zero, which means head is impossible, and the same for tail. So this experiment simply does not exist in the long run. We are given the illusion of choice for a small period, so that in the long term the world would be deterministic? But this seems to contradict chaos theory.
 
  • #15
jk22 said:
Nugatory, this is philosophically interesting: it means that in the long run there remain only processes where we have no choice. This reasoning would also apply to coin tossing: P(head)=1/2, and (1/2)^n tends to zero, which means head is impossible, and the same for tail. So this experiment simply does not exist in the long run.

That quantity ##(1/2)^N## that goes to zero as N approaches infinity is the probability that N tosses will all be heads (or all tails) - any single toss can of course be heads or tails with 50% probability either way. And experiments of this sort most certainly do exist in the long run. Casinos make their profits that way, and (for much larger values of N) it is the reason why pressure and temperature work the way they do.

Finding an inequality violation when you run a few pairs through a CHSH experiment is like tossing a coin and getting heads - there's a fair chance of it happening. Let's call that probability ##\alpha##. But if you double the length of the run, that's like doing two short runs and getting a violation both times, or tossing the coin twice and getting heads both times - now the probability is ##\alpha\times\alpha=\alpha^2##. And if we run N times as many pairs through the device, the probability of getting the violation by random chance is ##\alpha^N##, which tends to zero as N increases. Thus, we run the experiment with a value of N large enough that the probability of deviation from random chance is negligible - and look again at Heinera's post above about tossing a coin 1000 times.

For more, you can google for "Law of large numbers", "expectation value", and "Gaussian distribution"... but this is all basic probability theory.
 
  • #16
So basically you mean that for any angles of measurement the local variables imply $$\lim_{N\to\infty}\frac{n(-4)}{N}=0$$ ?

But we had small bunches of -4 at the beginning of each experiment, whereas the frequency of -2 has a limit of 1?
 
Last edited:
  • #17
jk22 said:
So basically you mean that for any angles of measurement the local variables imply $$\lim_{N\to\infty}\frac{n(-4)}{N}=0$$ ?

But we had small bunches of -4 at the beginning of each experiment, whereas the frequency of -2 has a limit of 1?
Pretty much, yes.
 
  • #18
So if we imagine that we repeat an experiment, we get -4 more often than average at the beginning.

But which process causes this diminishing of p(-4)? Is it that switching on the measurement makes an electrical shock that stabilizes afterwards? Is it a Poisson process $$p(-4,t)\propto \exp(-at)$$? Is there any reference about that?
 
  • #19
In my opinion the CHSH is not an asymptotic criterion for nonlocality but simply the covariance: $$\lim |\mathrm{cov}_{lhv}(b-a)|\leq |-1+2(b-a)/\pi|,$$

where the limit is taken in the number of samples used to compute the covariance.

If this number is small then the computed covariance can go beyond the quantum value, but the error bar is huge. As the error tends to zero while averaging over lambda, the covariance becomes smaller than the linear one.
 
  • #20
Nugatory said:
Pretty much, yes.

Could it be that p(-4) would not be zero if the local functions A(a,x) were not sharp step functions?

Like in the following code:

Code:
#include<stdio.h>
#include<stdlib.h>
#include<stdbool.h>
#include<time.h>
#include<math.h>

#define PI (4.0*atan(1.0))

double cof=1.0;

//local function giving the +/-1 result of A, depending on the measurement angle and the hidden parameter

int A(double angle, double lambda)
{
   int resa=0;
   double pa, choice;

  //set probabilities
   pa=0.5+cos(angle-lambda)/cof;
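   //note: with cof=1.0 this can fall outside [0,1]; the comparison below is then deterministic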

   //choose +/- for A
   choice=(double)rand()/((double)RAND_MAX+1.0);
   if(choice>pa)
     resa=1;
   else
     resa=-1;

   return(resa);
}

int main(void)
{
   int i=0;
   double anga,angb,angap,angbp;
   double phi;
   double cov1=0.0,cov2=0.0,cov3=0.0,cov4=0.0;
   int vrmax=10000;

   double avgcov1=0.0, avgcov2=0.0, avgcov3=0.0, avgcov4=0.0;

   anga=0.0;
   angb=PI/4.0;
   angap=PI/2.0;
   angbp=3.0*PI/4.0;

   srand(time(NULL));

   bool stop=false;

   int pm4=0,pm2=0,pzero=0,p2=0,p4=0;
   int S=0;

//calculate probabilities of -4, -2, 0, 2, 4:
   stop=false;
   for(i=0;!stop;i++)
   {
     phi=(double)rand()/((double)RAND_MAX+1.0)*2.0*PI;

     cov1=-A(anga,phi)*A(angb,phi);
     cov2=-A(anga,phi)*A(angbp,phi);
     cov3=-A(angap,phi)*A(angb,phi);
     cov4=-A(angap,phi)*A(angbp,phi);

     avgcov1+=cov1;
     avgcov2+=cov2;
     avgcov3+=cov3;
     avgcov4+=cov4;

     switch((int)(cov1-cov2+cov3+cov4))
     {
       case -4 : pm4++; break;
       case -2 : pm2++; break;
       case 0 : pzero++; break;
       case 2 : p2++; break;
       case 4 : p4++; break;
     }

     S+=(int)(cov1-cov2+cov3+cov4);
             
     if(i>=vrmax-1) stop=true;
   }

   
   printf("p-4=%lf p-2=%lf p0=%lf\n",(double)pm4/(double)vrmax, (double)pm2/(double)vrmax, (double)pzero/(double)vrmax);   
   printf("S=%lf Covariances : %lf %lf %lf %lf\n",(double)S/(double)vrmax, avgcov1/(double)vrmax, avgcov2/(double)vrmax, avgcov3/(double)vrmax, avgcov4/(double)vrmax);   
   printf("Sumcov=%lf\n",(avgcov1-avgcov2+avgcov3+avgcov4)/(double)vrmax);
   return(0);
}

I got p(-4)=0.2, but of course |S|<2.
 
