Measurement problem and computer-like functions

Summary
The discussion revolves around the implications of defining measurement in quantum mechanics through an algorithm that can yield different eigenvalues for the same observable, challenging the deterministic nature of measurements. Participants debate whether this approach provides a loophole to Bell's theorem, which asserts that no local hidden variable theory can reproduce quantum predictions while maintaining both locality and realism. The conversation highlights the distinction between counterfactual definiteness and realism, with some arguing that conflating these concepts weakens the conclusions of Bell's proof. Ultimately, the consensus is that quantum mechanics, with its probabilistic nature, cannot be reconciled with classical intuitions about measurement and locality. The discussion emphasizes the need to accept quantum mechanics as it is, rather than attempting to fit it into classical frameworks.
  • #31
gill1109 said:
I think that "non realistic" means that there are no variables that determine the outcome! It is irreducibly random. It is not pre-determined.

To me, realism versus non-realism is about whether the theory describes the objective, observer-independent state of the system under consideration, as opposed to describing something subjective (our knowledge, for example). A classical probability theory is non-realistic in the sense that the probabilities don't reflect anything about the world (except in the case of probabilities 0 and 1), they only reflect our knowledge, or lack of knowledge about the world.
 
  • #32
stevendaryl said:
I don't think that's true. Adding randomness does not allow you to violate the Bell inequality unless you add randomness nonlocally.

If I take the CHSH inequality

ab + ab' + a'b - a'b' <= 2

and take a, a', b, b' equal to rand(), you can obviously obtain 4. You can check this on a computer, but it is straightforward.

That said, the way out of the local dilemma is that each pair ab shares the same variable, but each term receives a different variable.
 
  • #33
jk22 said:
If I take the CHSH inequality

ab + ab' + a'b - a'b' <= 2

and take a, a', b, b' equal to rand(), you can obviously obtain 4. You can check this on a computer, but it is straightforward.

I think you've misunderstood what the a, a', etc., mean in the CHSH inequality. The inequality is about correlations, not values. The real statement of the inequality is:

\rho(a,b) + \rho(a, b') + \rho(a',b) - \rho(a', b') \leq 2

\rho(\alpha, \beta) is computed this way: (for the spin-1/2 case)
  • Generate many twin pairs, and have Alice and Bob measure the spins of their respective particles at a variety of detector orientations.
  • Let \alpha_n be Alice's setting on trial number n
  • Let \beta_n be Bob's setting on trial number n
  • Define A_n to be +1 if Alice measures spin-up on trial number n, and -1 if Alice measures spin-down.
  • Define B_n to be \pm 1 depending on Bob's result for trial n
Define \rho(\alpha, \beta) to be the average of A_n \cdot B_n, averaged over those trials for which \alpha_n = \alpha and \beta_n = \beta. Then if a and a' are two different values for \alpha, and b and b' are two different values for \beta, the CHSH inequality says that \rho(a,b) + \rho(a',b) + \rho(a,b') - \rho(a',b') \leq 2.

It doesn't make any sense to say \rho(\alpha, \beta) = rnd(). \rho is not a random number, it's an average of products of random variables.

You can certainly propose a model where A_n = rnd() for each n, and similarly for B_n. But that would not make \rho(\alpha, \beta) = rnd().
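This is easy to check numerically. The sketch below (an illustration added here, not code from the thread) draws A_n and B_n as independent coin flips: each correlation comes out near zero, so the CHSH combination lands near 0, nowhere near 4.

```python
import random

random.seed(0)  # make the illustration reproducible

def correlation(trials=100_000):
    """Average of A_n * B_n when both outcomes are independent coin flips."""
    total = 0
    for _ in range(trials):
        A = random.choice([+1, -1])  # Alice's outcome: pure noise
        B = random.choice([+1, -1])  # Bob's outcome: pure noise
        total += A * B
    return total / trials

# Each of the four correlations is close to 0, so the CHSH combination is too.
S = correlation() + correlation() + correlation() - correlation()
print(abs(S) < 0.1)  # prints True: randomness alone gives S near 0, not 4
```

Setting \rho itself to rnd() is a category error: \rho is an average over many trials, and averages of independent ±1 products concentrate near zero.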
 
  • #34
The demonstration uses factorization of measurement results: if the sum is smaller than 2 for each individual measurement, then of course the average is too.

The fact is that QM says we cannot factorize

$$ A_n B_n+A_nB'_n\neq A_n(B_n+B'_n)$$

because the A_n are different, since they are measured two times.

To make sense we would need a separate $$\rho_i$$ with its own $$A_{i,n},B_{i,n}$$:

the $$A_n$$ for the first rho are not the same as for the second.
 
  • #35
jk22 said:
The demonstration uses factorization of measurement results: if the sum is smaller than 2 for each individual measurement, then of course the average is too.

The fact is that QM says we cannot factorize

$$ A_n B_n+A_nB'_n\neq A_n(B_n+B'_n)$$

because the A_n are different, since they are measured two times.

Well, in the CHSH inequality, you compute \rho(a,b) by averaging over all trials for which Alice's setting is a and Bob's setting is b. Then you independently compute \rho(a,b'), \rho(a',b) and \rho(a',b'). You never compute a quantity such as A_n B_n + A_n B'_n (because you can't, for a single value of n, measure both B and B').

Anyway, my point is that the use of random number generators doesn't affect the conclusion of Bell's theorem, or at least I don't think it does.
 
  • #36
But how do you prove the sum of the 4 rhos is smaller than 2 then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.
 
Last edited:
  • #37
jk22 said:
But how do you prove the sum of the 4 rhos is smaller than 2 then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.

But the quantity A_n B_n + A'_n B_n + A_n B'_n - A'_n B'_n does not appear in the derivation of the CHSH inequality.

As for what is proved:

A local, memoryless hidden-variables model for the EPR experiment would involve three probability distributions:
  1. P(\lambda), the probability distribution for the hidden variable \lambda.
  2. P_A(\lambda, \alpha), the probability of Alice getting result +1 given that the hidden variable has value \lambda and Alice's device has setting \alpha
  3. P_B(\lambda, \beta), the probability of Bob getting result +1 given that the hidden variable has value \lambda and Bob's device has setting \beta
Such a model could be used to generate sample EPR results in the following way:
  • In round number n, Alice chooses a setting \alpha_n
  • Bob chooses a setting \beta_n.
  • \lambda_n is chosen randomly, according to the probability distribution P(\lambda)
  • Then with probability P_A(\lambda_n, \alpha_n) let A_n = +1. With probability 1 - P_A(\lambda_n, \alpha_n) let A_n = -1
  • With probability P_B(\lambda_n, \beta_n) let B_n = +1. With probability 1 - P_B(\lambda_n, \beta_n) let B_n = -1
  • Then define \rho_n = A_n \cdot B_n
After many, many rounds, you let \rho(\alpha, \beta) be the average of \rho_n over those rounds for which \alpha_n = \alpha and \beta_n = \beta.

The CHSH proof shows that any such model must satisfy a certain inequality (in the limit of many, many rounds).
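As a concrete illustration of the sampling procedure above, here is a toy model coded directly. The functional forms of P(\lambda), P_A and P_B are invented for this sketch; the only thing that matters is the shape of the model (each side's probability depends only on its own setting and \lambda), and any model of that shape stays within the CHSH bound.

```python
import math
import random

random.seed(1)  # reproducible illustration

def sample_round(alpha, beta):
    """One round of the sampling procedure above, for a toy model where
    lambda is an angle and each side's +1 probability depends only on its
    own setting and lambda (the locality assumption)."""
    lam = random.uniform(0, 2 * math.pi)       # draw lambda from P(lambda)
    p_a = (1 + math.cos(alpha - lam)) / 2      # hypothetical P_A(lambda, alpha)
    p_b = (1 + math.cos(beta - lam)) / 2       # hypothetical P_B(lambda, beta)
    A = +1 if random.random() < p_a else -1
    B = +1 if random.random() < p_b else -1
    return A * B                               # rho_n for this round

def rho(alpha, beta, rounds=100_000):
    """Average of A_n * B_n over many rounds at fixed settings."""
    return sum(sample_round(alpha, beta) for _ in range(rounds)) / rounds

# CHSH combination at the standard angles: for this model it comes out near
# sqrt(2) ~ 1.41, safely below 2, as the proof guarantees.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = rho(a, b) + rho(a, b2) + rho(a2, b) - rho(a2, b2)
print(abs(S) <= 2)
```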
 
  • #38
stevendaryl said:
The CHSH proof shows that no such model will satisfy a certain inequality (in the limit of many, many rounds).

Actually, the proof is about correlations computed using a continuous distribution:

\rho(\alpha, \beta) = \int P(\lambda) \left( P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta) \right) d\lambda

where P^+(\lambda, \alpha, \beta) is the probability that Alice's and Bob's results have the same sign, and P^-(\lambda, \alpha, \beta) is the probability that they have opposite signs. In terms of the probabilities given earlier:

P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta)) and
P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
 
  • #39
stevendaryl said:
Actually, the proof is about correlations computed using a continuous distribution:

\rho(\alpha, \beta) = \int P(\lambda) \left( P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta) \right) d\lambda

where P^+(\lambda, \alpha, \beta) is the probability that Alice's and Bob's results have the same sign, and P^-(\lambda, \alpha, \beta) is the probability that they have opposite signs. In terms of the probabilities given earlier:

P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta)) and
P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
Actually, (and IMHO), the proof uses elementary logic and elementary probability theory, but you can hide it behind calculus if you like.
http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem
http://arxiv.org/abs/1207.5103
Statistics, Causality and Bell's Theorem
Statistical Science 2014, Vol. 29, No. 4, 512-528
 
  • #40
stevendaryl said:
... it seems to me that some kind of local world-splitting could give a local, non-realistic toy model to explain EPR-type correlations...
But I think if a world-splitting is caused by an event A and it has different variable values in the two branches for objects/events non-local to A it still should not count as a local model. So even world-splitting doesn't fit the bill.
EDIT: Plus I feel any world-splitting model could be Occam-ified by looking at just a single branch. But let's not go into religious wars right now ;)
 
Last edited:
  • #41
stevendaryl said:
they only reflect our knowledge, or lack of knowledge about the world.
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.
 
  • #42
DirkMan said:
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.

I'm not sure I understand what you're saying. What are you saying has been staring us in the face?
 
  • #43
georgir said:
But I think if a world-splitting is caused by an event A and it has different variable values in the two branches for objects/events non-local to A it still should not count as a local model.

I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.

georgir said:
EDIT: Plus I feel any world-splitting model could be Occam-ified by looking at just a single branch.

But to go from multiple branches to a single branch is a nonlocal change. So it's better from the point of view of Occam, but not from the point of view of locality.
 
  • #44
stevendaryl said:
I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.
But if Alice splits into two branches where Bob's particle or detector have different variables (especially ones somehow related to her detector readings) I'd still classify this as non-local. If she splits into two branches where the only difference is her own readings, then that is equivalent to a single-universe local random variable model.

EDIT: To extend on that, I can see EPR explanations to the tune of "the universe splits into two branches, one where BOTH particles were H, one where BOTH particles were V" but if the particles are already separated (split happening on measurement), this split is practically a non-local interaction.
 
Last edited:
  • #45
stevendaryl said:
\rho(a,b) + \rho(a, b') + \rho(a',b) - \rho(a', b') \leq 2

\rho(\alpha, \beta) is computed this way: (for the spin-1/2 case)
  • Generate many twin pairs, and have Alice and Bob measure the spins of their respective particles at a variety of detector orientations.
  • Let \alpha_n be Alice's setting on trial number n
  • Let \beta_n be Bob's setting on trial number n
  • Define A_n to be +1 if Alice measures spin-up on trial number n, and -1 if Alice measures spin-down.
  • Define B_n to be \pm 1 depending on Bob's result for trial n
Define \rho(\alpha, \beta) to be the average of A_n \cdot B_n, averaged over those trials for which \alpha_n = \alpha and \beta_n = \beta. Then if a and a' are two different values for \alpha, and b and b' are two different values for \beta, the CHSH inequality says that \rho(a,b) + \rho(a',b) + \rho(a,b') - \rho(a',b') \leq 2.
I just looked at a trivial average where the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
 
  • #46
jk22 said:
I just looked at a trivial average where the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
Rho usually stands for a probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.
 
  • #47
gill1109 said:
Rho usually stands for a probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.

Actually, in the Wikipedia article about the CHSH inequality, \rho(\alpha, \beta) is the correlation. If A(\lambda, \alpha) and B(\lambda, \beta) both take on values +/- 1, then \rho(\alpha, \beta) is the average over \lambda of A(\lambda, \alpha) B(\lambda, \beta).
 
  • #48
stevendaryl said:
Actually, in the Wikipedia article about the CHSH inequality, \rho(\alpha, \beta) is the correlation. If A(\lambda, \alpha) and B(\lambda, \beta) both take on values +/- 1, then \rho(\alpha, \beta) is the average over \lambda of A(\lambda, \alpha) B(\lambda, \beta).
Yes, you are right; there are a lot of alternative notations, which can be very confusing. A lot of people use E(a, b) for the correlation, and it's the integral over lambda of A(lambda, a) B(lambda, b) rho(lambda) d lambda, where rho(lambda) is the probability density of the hidden variable lambda.

The point being that when you change the settings, the probability density of lambda does not change. This is the freedom assumption (no conspiracy, fair sampling, ...). Bell's theorem (a theorem belonging to metaphysics) is not "QM is incompatible with LHV"; it's "QM is incompatible with LHV + no-conspiracy". Bell himself referred to his inequality (and later the CHSH improvement) as "my theorem". I would say that Bell's inequality is a rather elementary mathematical theorem about the limits of distributed (classical) computing. You can't build a network of computers which wins the "Bell game".
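The "Bell game" mentioned here has a standard form: a referee sends independent random bits x and y to the two players, who must answer bits a and b without communicating, and they win when a XOR b equals x AND y. A short exhaustive check (added here for illustration) confirms that no deterministic local strategy wins more than 3/4 of the time; shared randomness is just a mixture of such strategies, so it cannot do better either.

```python
from itertools import product

# Enumerate every deterministic local strategy: Alice's answer is a function
# of her bit x alone, Bob's a function of his bit y alone (no communication).
best = 0.0
for fa in product([0, 1], repeat=2):        # fa[x] is Alice's answer on input x
    for fb in product([0, 1], repeat=2):    # fb[y] is Bob's answer on input y
        wins = sum((fa[x] ^ fb[y]) == (x & y) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)

print(best)  # prints 0.75: the classical ceiling; QM reaches cos^2(pi/8) ~ 0.85
```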
 
Last edited:
  • #49
jk22 said:
I just looked at a trivial average where the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.

What you wrote doesn't make sense to me. If you only use a single "round" of the experiment, then you can't have \lambda_1 and \lambda_2, you just have a single value of \lambda. Also, what does "\rho" mean in your formula?

I think you're getting confused about this. There are two different correlations computed: (1) the actual measured correlations, and (2) the predicted correlations.

The way that the actual measured correlations are computed is this:
  1. For every round (labeled n), there are corresponding values for \alpha_n and \beta_n, the settings of the two detectors, and there are corresponding values for the measured results, A_n and B_n
  2. To compute \rho(\alpha, \beta), you average A_n \cdot B_n over all those rounds n such that \alpha_n = \alpha and \beta_n = \beta
So there is no \lambda in what is actually measured. \lambda is only involved in the hypothetical model for explaining the results. The local hidden variables explanation of the results is:
  • For every round n there is a corresponding value of some hidden variable, \lambda_n
  • A_n is a (deterministic) function of \alpha_n and \lambda_n
  • B_n is a (deterministic) function of \beta_n and \lambda_n
What you suggested is that rather than having A_n be a deterministic function, it could be nondeterministic, using say a random number generator. That actually doesn't make any difference. Intuitively, you can think that a nondeterministic function can be turned into a deterministic function of a random number r. Then r_n (the value of r on round n) can be incorporated into \lambda_n. It's just an extra hidden variable.
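The "extra hidden variable" trick in the last paragraph can be made concrete. In this sketch (the probability function is hypothetical, chosen only for illustration), a nondeterministic outcome is rewritten as a deterministic function of \lambda together with an auxiliary uniform random number r, and the statistics are unchanged:

```python
import random

random.seed(0)  # reproducible illustration

def A_det(alpha, lam, r):
    """Deterministic outcome once the auxiliary variable r is given:
    the pair (lam, r) plays the role of an enlarged hidden variable."""
    p = (1 + lam * alpha) / 2       # hypothetical outcome probability
    return +1 if r < p else -1

def A_nondet(alpha, lam):
    """The 'nondeterministic' version just hides a fresh r inside each call."""
    return A_det(alpha, lam, random.random())

# The two versions have identical statistics, so allowing randomness does not
# enlarge the class of local models: r is simply absorbed into lambda.
lam, alpha, N = 0.3, 0.5, 100_000
freq = sum(A_nondet(alpha, lam) == +1 for _ in range(N)) / N
print(abs(freq - (1 + lam * alpha) / 2) < 0.01)  # prints True
```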
 
  • #50
Rho is the density of probability of the hidden variable; in fact, if I consider a single run, it is simply 1.

The fact that we have only one lambda comes from a renaming in the integration. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.
 
  • #51
jk22 said:
Rho is the density of probability of the hidden variable; in fact, if I consider a single run, it is simply 1.

The fact that we have only one lambda comes from a renaming in the integration. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.

No, it's not reasonable. If you only have one twin-pair, then there is only one value for \lambda. The expression you wrote doesn't have much connection with the CHSH inequality.

The CHSH inequality involves 2 settings for one device, a and a', and two settings for the other device, b and b'. Since a single "round" of the experiment only has one setting for each device, it takes at least 4 rounds to get information about all the combinations:

a, b
a', b
a, b'
a', b'
 
  • #52
If you allow only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

You can write it with only one lambda if you perform a change of variable $$\lambda_i=f_i(\lambda)$$; we cannot simply rename.

The point is to explain why QM allows this to be bigger than two, since the experiments show a violation, not why it is smaller, since it is not.
 
Last edited:
  • #53
jk22 said:
If you allow only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

There is one value of \lambda for each twin-pair that is produced.

I'm confused as what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations \rho(\alpha, \beta) are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that \rho would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?
 
  • #54
stevendaryl said:
There is one value of \lambda for each twin-pair that is produced.

I'm confused as what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations \rho(\alpha, \beta) are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that \rho would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?

As for #3, suppose that Alice's outcome A(\lambda, \alpha) is nondeterministic. (The notation here is a little weird, because writing A(\lambda, \alpha) usually implies that A is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let X(\lambda,\alpha) be the probability that A(\lambda, \alpha) = +1 and so the probability that it is -1 is given by 1-X(\lambda,\alpha). Similarly, let Y(\lambda,\beta) be the probability that Bob's outcome B(\lambda, \beta) = +1. Then the probability that both Alice and Bob will get +1 is given by:

P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)

But in the EPR experiment, if \alpha = \beta, then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0

So either X(\lambda, \alpha) = 0 or Y(\lambda, \alpha) = 0

Similarly, the probability of both getting -1 is given by:

P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))

Since this never happens, the probability must be zero. So either X(\lambda, \alpha) = 1 or Y(\lambda, \alpha) = 1.

So for every value of \lambda and \alpha, A(\lambda, \alpha) either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of \lambda and \alpha. Similarly for B(\lambda, \beta). So the perfect anti-correlations of EPR imply that there is no room for randomness.
 
  • #55
stevendaryl said:
As for #3, suppose that Alice's outcome A(\lambda, \alpha) is nondeterministic. (The notation here is a little weird, because writing A(\lambda, \alpha) usually implies that A is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let X(\lambda,\alpha) be the probability that A(\lambda, \alpha) = +1 and so the probability that it is -1 is given by 1-X(\lambda,\alpha). Similarly, let Y(\lambda,\beta) be the probability that Bob's outcome B(\lambda, \beta) = +1. Then the probability that both Alice and Bob will get +1 is given by:

P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)

But in the EPR experiment, if \alpha = \beta, then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0

So either X(\lambda, \alpha) = 0 or Y(\lambda, \alpha) = 0

Similarly, the probability of both getting -1 is given by:

P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))

Since this never happens, the probability must be zero. So either X(\lambda, \alpha) = 1 or Y(\lambda, \alpha) = 1.

So for every value of \lambda and \alpha, A(\lambda, \alpha) either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of \lambda and \alpha. Similarly for B(\lambda, \beta). So the perfect anti-correlations of EPR imply that there is no room for randomness.
This is fine in theory. The problem is that in experiment we do not see *perfect* anti-correlation. And experiment can't prove that we have *perfect* anti-correlation. It can only give statistical support to the hypothesis that we have close to perfect anti-correlation.

Hence the need to come up with an argument which allows for imperfection, allows for a bit of noise - lends itself to experimental verification ... CHSH.
 
  • #56
Indeed, for randomness. But that supposes the perfect cases are relevant while being of measure zero.

I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=sgn(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
 
  • #57
jk22 said:
Indeed, for randomness.
I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=sgn(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
We do lots of trials, for each of the four setting pairs. Four sub-experiments, you could say, one for each of the four correlations in the CHSH quantity S. So if you believe in local hidden variables, each of the four correlations is an average of values A(a, lambda)B(b, lambda) based on a completely different sample of hidden variables lambda. But we assume that those four samples are all random samples from the same probability distribution. This is called the freedom assumption (no conspiracy, fair sampling, ...).

People tend to miss this step in the argument, or to misunderstand it. It's the usual reason for people to argue that Bell was wrong - they aren't aware of a statistical assumption and they aren't aware of it being needed to complete the argument. (Physicists tend to have poor training in probability and statistics but Bell's statistical intuition was very very good indeed.) Bell did mention this step explicitly (in his 1981 paper "Bertlmann's socks") but his critics tend to overlook it.
 
  • #58
This probability distribution is presumably uniform? Otherwise it could be seen as a kind of conspiracy?

In the proof I saw, they take the sum of a single trial of each correlation, but I suppose the problem is that the variables can be different: https://en.m.wikipedia.org/wiki/Bell's_theorem under "Derivation of the CHSH inequality".

It is written B+B' and B-B', but I think they supposed all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?
 
Last edited:
  • #59
jk22 said:
Indeed, for randomness. But that supposes the perfect cases are relevant while being of measure zero.

I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=sgn(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.

The quantity of interest is A(a, \lambda) B(b, \lambda) + A(a', \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) - A(a', \lambda) B(b', \lambda) (averaged over \lambda). You can rearrange this into

A(a, \lambda) (B(b, \lambda) + B(b', \lambda)) + A(a', \lambda) (B(b, \lambda) - B(b', \lambda))

Either B(b,\lambda) has the same sign as B(b', \lambda), or they have opposite signs. If they have the same sign, then the second term (A(a', \lambda) (B(b, \lambda) - B(b', \lambda))) is zero. If they have opposite signs, then the first term, (A(a, \lambda) (B(b, \lambda) + B(b', \lambda))), is zero. So there is no way to get that sum to be greater than 2.
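The sign argument above can also be verified by brute force (a check added here for illustration): over all sixteen assignments of ±1 values, the combination only ever takes the values ±2.

```python
from itertools import product

# A, A2 are Alice's two possible values; B, B2 are Bob's. For every assignment
# of +/-1 values, compute A*B + A2*B + A*B2 - A2*B2.
values = set()
for A, A2, B, B2 in product([+1, -1], repeat=4):
    values.add(A * B + A2 * B + A * B2 - A2 * B2)

print(sorted(values))  # prints [-2, 2]: the combination never exceeds 2
```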
 
  • #60
jk22 said:
It is written B+B' and B-B', but I think they supposed all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?

The idea is that for every pair, the quantity A(a,\lambda) B(b,\lambda) + A(a',\lambda) B(b,\lambda) + A(a,\lambda) B(b',\lambda) - A(a',\lambda) B(b',\lambda) can be at most 2. Now, we don't measure all 4 terms for each pair, we can only measure one term. However, if we average that quantity over lambda, we get:

\langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle \leq 2

where \langle ... \rangle means average over \lambda

So even though no single twin-pair can give us information about all four terms, we can experimentally determine the 4 separate values:
  1. \langle A(a) B(b) \rangle
  2. \langle A(a) B(b') \rangle
  3. \langle A(a') B(b) \rangle
  4. \langle A(a') B(b') \rangle
Then we can show that the four quantities violate CHSH.
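For comparison, the quantum prediction for the singlet state is the standard result \rho(\alpha, \beta) = -\cos(\alpha - \beta); plugging in the usual CHSH angles (a small check added here) gives 2\sqrt{2} \approx 2.83, which violates the bound of 2:

```python
import math

def rho_qm(alpha, beta):
    """Quantum correlation for the spin-1/2 singlet state (standard result)."""
    return -math.cos(alpha - beta)

# Usual CHSH angles: Alice uses 0 and pi/2, Bob uses pi/4 and -pi/4.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = rho_qm(a, b) + rho_qm(a, b2) + rho_qm(a2, b) - rho_qm(a2, b2)
print(round(abs(S), 3))  # prints 2.828, i.e. 2*sqrt(2) > 2
```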
 
