Measurement problem and computer-like functions

  • #51
jk22 said:
Rho is the probability density of the hidden variable; in fact, if I consider a single run it is simply 1.

The fact that we have only one lambda comes from a renaming of the integration variable. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.

No, it's not reasonable. If you only have one twin-pair, then there is only one value for \lambda. The expression you wrote doesn't have much connection with the CHSH inequality.

The CHSH inequality involves two settings for one device, a and a', and two settings for the other device, b and b'. Since a single "round" of the experiment only has one setting for each device, it takes at least 4 rounds to get information about all the combinations:

a, b
a', b
a, b'
a', b'
 
  • #52
If you allow for only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

You can write it with only one lambda if you perform a change of variables $$\lambda_i=f_i(\lambda)$$; we cannot simply rename.

The point is to explain why QM allows this to be bigger than two, since the experiments show a violation, not why it is smaller, since it is not.
 
  • #53
jk22 said:
If you allow for only one lambda, then the pairs are not independent, and I don't see the reason for that. Aren't the pairs supposed to be independent?

There is one value of \lambda for each twin-pair that is produced.

I'm confused as to what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations \rho(\alpha, \beta) are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that \rho would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?
 
  • #54
stevendaryl said:
There is one value of \lambda for each twin-pair that is produced.

I'm confused as to what exactly you are disputing, if anything. Is it:
  1. The definition of how the correlations \rho(\alpha, \beta) are computed?
  2. The proof that a local hidden-variables model predicts (in the deterministic case) that \rho would satisfy the CHSH inequality?
  3. The proof that introducing randomness makes no difference to that prediction?
  4. The proof that QM violates the inequality?

As for #3, suppose that Alice's outcome A(\lambda, \alpha) is nondeterministic. (The notation here is a little weird, because writing A(\lambda, \alpha) usually implies that A is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let X(\lambda,\alpha) be the probability that A(\lambda, \alpha) = +1 and so the probability that it is -1 is given by 1-X(\lambda,\alpha). Similarly, let Y(\lambda,\beta) be the probability that Bob's outcome B(\lambda, \beta) = +1. Then the probability that both Alice and Bob will get +1 is given by:

P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)

But in the EPR experiment, if \alpha = \beta, then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0

So either X(\lambda, \alpha) = 0 or Y(\lambda, \alpha) = 0.

Similarly, the probability of both getting -1 is given by:

P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))

Since this never happens, the probability must be zero. So either X(\lambda, \alpha) = 1 or Y(\lambda, \alpha) = 1.

So for every value of \lambda and \alpha, A(\lambda, \alpha) either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of \lambda and \alpha. Similarly for B(\lambda, \beta). So the perfect anti-correlations of EPR imply that there is no room for randomness.
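
If it helps, here is a tiny numerical sketch of that last step (just my own illustration of the two zero-probability constraints, scanning candidate values of X and Y on a grid): the only surviving pairs are the deterministic ones.

Python:
# Scan candidate probabilities X = P(Alice gets +1) and Y = P(Bob gets +1)
# on a coarse grid, keeping only pairs compatible with the two constraints
# derived above for equal settings (alpha = beta).
grid = [i / 100 for i in range(101)]
survivors = []
for X in grid:
    for Y in grid:
        p_both_plus = X * Y                 # must be 0 (never both +1)
        p_both_minus = (1 - X) * (1 - Y)    # must be 0 (never both -1)
        if abs(p_both_plus) < 1e-12 and abs(p_both_minus) < 1e-12:
            survivors.append((X, Y))

print(survivors)   # [(0.0, 1.0), (1.0, 0.0)] -- X and Y are forced to be 0 or 1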
 
  • #55
stevendaryl said:
As for #3, suppose that Alice's outcome A(\lambda, \alpha) is nondeterministic. (The notation here is a little weird, because writing A(\lambda, \alpha) usually implies that A is a deterministic function of its arguments. I hope that doesn't cause confusion.) Then let X(\lambda,\alpha) be the probability that A(\lambda, \alpha) = +1 and so the probability that it is -1 is given by 1-X(\lambda,\alpha). Similarly, let Y(\lambda,\beta) be the probability that Bob's outcome B(\lambda, \beta) = +1. Then the probability that both Alice and Bob will get +1 is given by:

P_{both}(\lambda, \alpha, \beta) = X(\lambda, \alpha) \cdot Y(\lambda, \beta)

But in the EPR experiment, if \alpha = \beta, then Alice and Bob never get the same result (in the anti-correlated version of EPR). So this implies

P_{both}(\lambda, \alpha, \alpha) = X(\lambda, \alpha) \cdot Y(\lambda, \alpha) = 0

So either X(\lambda, \alpha) = 0 or Y(\lambda, \alpha) = 0.

Similarly, the probability of both getting -1 is given by:

P_{neither}(\lambda, \alpha, \alpha) = (1 - X(\lambda, \alpha)) \cdot (1 - Y(\lambda, \alpha))

Since this never happens, the probability must be zero. So either X(\lambda, \alpha) = 1 or Y(\lambda, \alpha) = 1.

So for every value of \lambda and \alpha, A(\lambda, \alpha) either has probability 0 of being +1, or it has probability 1 of being +1. So its value must be a deterministic function of \lambda and \alpha. Similarly for B(\lambda, \beta). So the perfect anti-correlations of EPR imply that there is no room for randomness.
This is fine in theory. The problem is that in experiment we do not see *perfect* anti-correlation. And experiment can't prove that we have *perfect* anti-correlation. It can only give statistical support to the hypothesis that we have close to perfect anti-correlation.

Hence the need to come up with an argument which allows for imperfection, allows for a bit of noise, and lends itself to experimental verification: CHSH.
 
  • #56
Indeed, for randomness. That is, if we suppose the perfect cases are relevant while being of measure zero.

I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
 
  • #57
jk22 said:
Indeed, for randomness.
I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.
We do lots of trials, for each of the four setting pairs. Four sub-experiments, you could say, one for each of the four correlations in the CHSH quantity S. So if you believe in local hidden variables, each of the four correlations is an average of values A(a, lambda)B(b, lambda) based on a completely different sample of hidden variables lambda. But we assume that those four samples are all random samples from the same probability distribution. This is called the freedom assumption (no conspiracy, fair sampling ...).

People tend to miss this step in the argument, or to misunderstand it. It's the usual reason for people to argue that Bell was wrong - they aren't aware of a statistical assumption and they aren't aware of it being needed to complete the argument. (Physicists tend to have poor training in probability and statistics but Bell's statistical intuition was very very good indeed.) Bell did mention this step explicitly (in his 1981 paper "Bertlmann's socks") but his critics tend to overlook it.
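
To make that concrete, here is a toy simulation sketch (my own example, using the $$\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ model from post #56 with $$B(b,\lambda)=-\mathrm{sgn}(\vec{b}\cdot\vec{\lambda})$$, lambda uniform on the circle, angles 0, \pi/4, \pi/2, 3\pi/4 and the sign convention AB - AB' + A'B + A'B'): each of the four correlations is estimated from its own fresh sample of lambdas, all drawn from the same distribution, exactly as the freedom assumption requires.

Python:
import numpy as np
from math import pi

rng = np.random.default_rng(0)

def A(angle, lam):
    # Alice's outcome: sign of the projection of the hidden unit vector on her setting
    return np.sign(np.cos(angle - lam))

def B(angle, lam):
    # Bob's outcome, anti-correlated with Alice's at equal settings
    return -np.sign(np.cos(angle - lam))

def correlation(x, y, n):
    # Freedom assumption: an independent sample of hidden variables,
    # drawn from the SAME distribution, for each of the four setting pairs
    lam = rng.uniform(0, 2 * pi, n)
    return np.mean(A(x, lam) * B(y, lam))

a, b, ap, bp = 0.0, pi / 4, pi / 2, 3 * pi / 4
n = 100_000
S = correlation(a, b, n) - correlation(a, bp, n) + correlation(ap, b, n) + correlation(ap, bp, n)
print(S)   # close to -2: this local model saturates, but does not beat, |S| <= 2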
 
  • #58
Is this probability distribution supposed to be uniform? Otherwise, could it be seen as a kind of conspiracy?

In the proof I saw, they take the sum of a single trial of each correlation, but I suppose the problem is that the variables can be different: https://en.m.wikipedia.org/wiki/Bell's_theorem under "Derivation of the CHSH inequality".

It is written B+B' and B-B', but I think they supposed that all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?
 
  • #59
jk22 said:
Indeed, for randomness. That is, if we suppose the perfect cases are relevant while being of measure zero.

I think I would make an extension of point 4: that the violation of CHSH implies nonlocality.

If one $$\lambda$$ is given to each pair, then violation of CHSH does not imply nonlocality. We could find four lambdas and the model $$A(a,\lambda)=\mathrm{sgn}(\vec{a}\cdot\vec{\lambda})$$ to obtain the value 4 for a single trial? Maybe I did it wrong.

The quantity of interest is A(a, \lambda) B(b, \lambda) + A(a', \lambda) B(b, \lambda) + A(a, \lambda) B(b', \lambda) - A(a', \lambda) B(b', \lambda) (averaged over \lambda). You can rearrange this into

A(a, \lambda) (B(b, \lambda) + B(b', \lambda)) + A(a', \lambda) (B(b, \lambda) - B(b', \lambda))

Either B(b,\lambda) has the same sign as B(b', \lambda), or they have opposite signs. If they have the same sign, then the second term (A(a', \lambda) (B(b, \lambda) - B(b', \lambda))) is zero. If they have opposite signs, then the first term (A(a, \lambda) (B(b, \lambda) + B(b', \lambda))) is zero. Either way, the surviving term is (\pm 1) \times (\pm 2), so the whole expression is always exactly +2 or -2, and there is no way to get that sum to be greater than 2.
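
To spell that out by brute force (a little sketch, not needed for the proof): enumerating all sixteen assignments of \pm 1 to A(a), A(a'), B(b), B(b') shows the single-\lambda combination is always exactly +2 or -2.

Python:
from itertools import product

# All deterministic +/-1 assignments for the four quantities at a single lambda
values = set()
for Aa, Aap, Bb, Bbp in product([+1, -1], repeat=4):
    values.add(Aa * Bb + Aap * Bb + Aa * Bbp - Aap * Bbp)

print(values)   # {2, -2}: the combination can never exceed 2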
 
  • #60
jk22 said:
It is written B+B' and B-B', but I think they supposed that all four depend on the same lambda. Aren't there four different lambdas, since each term comes from a different pair?

The idea is that for every pair, the quantity A(a,\lambda) B(b,\lambda) + A(a',\lambda) B(b,\lambda) + A(a,\lambda) B(b',\lambda) - A(a',\lambda) B(b',\lambda) can never exceed 2. Now, we don't measure all 4 terms for each pair; we can only measure one term. However, if we average that quantity over lambda, we get:

\langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle \leq 2

where \langle ... \rangle means average over \lambda
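
Spelling out that averaging step (with P(\lambda) \geq 0 the distribution of the hidden variable, normalized so that \int P(\lambda) d\lambda = 1):

$$\int P(\lambda)\left[A(a,\lambda)B(b,\lambda)+A(a',\lambda)B(b,\lambda)+A(a,\lambda)B(b',\lambda)-A(a',\lambda)B(b',\lambda)\right]d\lambda \;\leq\; \int P(\lambda)\, 2\, d\lambda \;=\; 2,$$

since the quantity in brackets is at most 2 for every single \lambda.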

So even though no single twin-pair can give us information about all four terms, we can experimentally determine the 4 separate values:
  1. \langle A(a) B(b) \rangle
  2. \langle A(a) B(b') \rangle
  3. \langle A(a') B(b) \rangle
  4. \langle A(a') B(b') \rangle
Then we can check whether these four measured averages, combined as above, violate the CHSH inequality (as QM predicts and experiment confirms that they do).
 
  • #61
Nevertheless, I computed the probabilities with hidden variables and I got p(-4)=(3/4)^4 and so on.

They differ from QM and give the average S=2.

The problem I see is numerical and experimental: there is always a finite number of trials, and the statistics can vary.

If -4 arises as the sum of a single trial, then couldn't we imagine selecting a sample where we get a violation?
 
  • #62
jk22 said:
Nevertheless, I computed the probabilities with hidden variables and I got p(-4)=(3/4)^4 and so on.

They differ from QM and give the average S=2.

The problem I see is numerical and experimental: there is always a finite number of trials, and the statistics can vary.

If -4 arises as the sum of a single trial, then couldn't we imagine selecting a sample where we get a violation?

Yes, I think that a local hidden-variables model can give a violation for a small sample size. The assumption is that

the average of A(\alpha) B(\beta) over the sample is approximately equal to \int P(\lambda) A(\alpha, \lambda) B(\beta, \lambda) \, d\lambda. If you have a violation of CHSH, that approximate equality can't hold.
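
Here is a quick sketch of that finite-sample effect (a toy simulation of mine, again using the \mathrm{sgn}(\vec{a}\cdot\vec{\lambda}) model with anti-correlated B and the angles 0, \pi/4, \pi/2, 3\pi/4): with only a few pairs per setting, the sample value of |S| exceeds 2 in a sizeable fraction of runs, even though the model is local.

Python:
import numpy as np
from math import pi

rng = np.random.default_rng(1)

def product_AB(x, y, lam):
    # A = sgn(x . lambda), B = -sgn(y . lambda), lambda a uniform angle on the circle
    return -np.sign(np.cos(x - lam)) * np.sign(np.cos(y - lam))

a, b, ap, bp = 0.0, pi / 4, pi / 2, 3 * pi / 4
n_per_setting = 10          # deliberately small sample per setting pair
n_experiments = 10_000

exceed = 0
for _ in range(n_experiments):
    corr = []
    for x, y in [(a, b), (a, bp), (ap, b), (ap, bp)]:
        lam = rng.uniform(0, 2 * pi, n_per_setting)   # fresh lambdas for each sub-experiment
        corr.append(product_AB(x, y, lam).mean())
    S = corr[0] - corr[1] + corr[2] + corr[3]
    exceed += abs(S) > 2

print(exceed / n_experiments)   # a sizeable fraction (well over a third) exceeds |S| = 2 by chance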
 
  • #63
stevendaryl said:
Yes, I think that a local hidden-variables model can give a violation for a small sample size. The assumption is that

the average of A(\alpha) B(\beta) over the sample is approximately equal to \int P(\lambda) A(\alpha, \lambda) B(\beta, \lambda) \, d\lambda. If you have a violation of CHSH, that approximate equality can't hold.
In fact, if local hidden variables are true and you do the experiment, you can violate the CHSH bound with probability close to 50%. Nowadays we have exact finite-N probability bounds: assuming LHV, the chance to violate the CHSH bound "S \leq 2" by more than epsilon, in an experiment with N trials, is less than something like A exp(-B N eps^2).

See e.g. http://arxiv.org/abs/1207.5103 (Statistical Science 2014, Vol. 29, No. 4, 512-528), Theorem 1 (assuming no memory). Not the best result at all, but as simple as possible and with a relatively simple proof (elementary discrete probability ... at least, elementary for mathematicians; a first-year undergraduate course is enough). A = 8 and B = 1/256.
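
Plugging in those constants as a rough back-of-the-envelope (taking eps = 2\sqrt{2} - 2, the size of the full quantum violation of S \leq 2):

Python:
from math import exp, sqrt, log

A, B = 8, 1 / 256
eps = 2 * sqrt(2) - 2       # size of the full quantum-mechanical violation

def tail_bound(n):
    # The quoted bound on the chance of an LHV model violating S <= 2 by more than eps in n trials
    return A * exp(-B * n * eps ** 2)

for n in (100, 1_000, 10_000):
    print(n, tail_bound(n))

# Number of trials at which the bound itself drops below 1 percent:
print(log(A / 0.01) / (B * eps ** 2))   # a few thousand trials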
 
  • #64
These are the results I obtained. In QM:

$$p(-4)=\left(\tfrac{1}{2}\left(1+\tfrac{1}{\sqrt{2}}\right)\right)^4$$
$$p(-2)=4\left(\tfrac{1}{2}\left(1+\tfrac{1}{\sqrt{2}}\right)\right)^3\cdot\tfrac{1}{2}\left(1-\tfrac{1}{\sqrt{2}}\right)$$

And with LHV:

$$p(-4)=(3/4)^4$$
$$p(-2)=4(3/4)^3(1/4)$$

Hence in QM, -4 appears more frequently than -2, whereas for hidden variables it is the opposite.
 
  • #65
jk22 said:
These are the results I obtained. In QM:

$$p(-4)=\left(\tfrac{1}{2}\left(1+\tfrac{1}{\sqrt{2}}\right)\right)^4$$
$$p(-2)=4\left(\tfrac{1}{2}\left(1+\tfrac{1}{\sqrt{2}}\right)\right)^3\cdot\tfrac{1}{2}\left(1-\tfrac{1}{\sqrt{2}}\right)$$

And with LHV:

$$p(-4)=(3/4)^4$$
$$p(-2)=4(3/4)^3(1/4)$$

Hence in QM, -4 appears more frequently than -2, whereas for hidden variables it is the opposite.
I have no idea what these calculations are supposed to refer to.
 
  • #66
These should be the probabilities for the measurement results of AB - AB' + A'B + A'B', for measurement angles 0, \pi/4, \pi/2, 3\pi/4 for A, B, A', B' respectively.

The LHV model considered was given in a previous post; it's the sign of the projection of the hidden vector onto the direction of measurement.
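
Here is how those numbers come out, as a small sketch (assuming the hidden unit vector is uniform on the circle, $$B(b,\lambda)=-\mathrm{sgn}(\vec{b}\cdot\vec{\lambda})$$, the singlet correlation -\cos\theta for QM, and four independent pairs per trial):

Python:
from math import pi, cos

a, b, ap, bp = 0.0, pi / 4, pi / 2, 3 * pi / 4

def E_lhv(x, y):
    # E[A B] for A = sgn(x . lambda), B = -sgn(y . lambda), lambda uniform on the circle
    return 2 * abs(x - y) / pi - 1

def E_qm(x, y):
    # Singlet-state correlation
    return -cos(x - y)

for name, E in (("LHV", E_lhv), ("QM", E_qm)):
    # The four signed terms of AB - AB' + A'B + A'B' all have the same expectation here
    signed = [E(a, b), -E(a, bp), E(ap, b), E(ap, bp)]
    p = (1 - signed[0]) / 2          # P(a signed term equals -1)
    p_m4 = p ** 4                    # all four independent terms equal -1
    p_m2 = 4 * p ** 3 * (1 - p)      # exactly one of the four terms equals +1
    print(name, round(p, 4), round(p_m4, 4), round(p_m2, 4))

# LHV: p = 3/4, so p(-4) = (3/4)^4 < p(-2); QM: p = (1 + 1/sqrt(2))/2, so p(-4) > p(-2).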
 
  • #67
By the way, shouldn't the measurement operator for CHSH be $$A\otimes B\ominus A\otimes B'\oplus A'\otimes B\oplus A'\otimes B'$$

where $$\oplus$$ is the Kronecker sum?

I think of that because in a CHSH experiment we sum the eigenvalues of the measurements.
 
  • #68
jk22 said:
By the way, shouldn't the measurement operator for CHSH be $$A\otimes B\ominus A\otimes B'\oplus A'\otimes B\oplus A'\otimes B'$$

where $$\oplus$$ is the Kronecker sum?

I think of that because in a CHSH experiment we sum the eigenvalues of the measurements.
In an ideal CHSH experiment, we repeatedly measure, on each run, either A on subsystem 1 and B on subsystem 2, or A on subsystem 1 and B' on subsystem 2, or A' on subsystem 1 and B on subsystem 2, or A' on subsystem 1 and B' on subsystem 2 (the two measurements in each run being simultaneous). Each time, the two subsystems have been freshly prepared in the same joint state.
 
  • #69
Indeed, I saw a paper that shows the whole thing cannot be measured simultaneously: http://arxiv.org/abs/quant-ph/0206076

However, if we use a beam splitter instead of a fast-changing switch, could we say this is experimentally simultaneous?
 
  • #70
stevendaryl said:
The idea is that for every pair, the quantity A(a,\lambda) B(b,\lambda) + A(a',\lambda) B(b,\lambda) + A(a,\lambda) B(b',\lambda) - A(a',\lambda) B(b',\lambda) can never exceed 2.

What I meant is that we have A(a,\lambda_1) B(b,\lambda_1) + A(a',\lambda_2) B(b,\lambda_2) + A(a,\lambda_3) B(b',\lambda_3) - A(a',\lambda_4) B(b',\lambda_4), which only has to be less than or equal to 4, so that a violation is possible. We don't measure all 4 terms for each pair; we can only measure one term. However, if we average that quantity over the \lambda_i we get:

\langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle \leq 2
 
  • #71
jk22 said:
Suppose we define the measurement of an observable A by v(A), v being an 'algorithm giving out one of the eigenvalues each time it is called' (we accept the axiom of choice).

Sorry, I'm a bit late to this thread and there have been many good answers, but I was struck by your initial question. Actually I think it's a good question because it really brings out an essential difference between classical and quantum thinking.

There is a difference between a measurement that 'chooses' one out of a set of pre-existing values, and a measurement that 'generates' a value that is a member of a set.

In standard QM there's no pre-existing value to 'choose' from.

In the usual Bell set up we have Alice and Bob, and at least conceptually we can imagine Alice to be on Earth and Bob to be on Pluto. One of the particles is winging its way to Bob who has set up his apparatus to measure some property. Now we could suppose that the particle is somehow carrying the set of possible values with it and all the measurement is going to do is to pick one of them. But what if Bob changes his mind about what to measure at the very last moment? Is the particle also carrying the new set of possible values with it in some pre-existing sense?

It's this kind of question that the Bell set up really tackles very beautifully. It asks what are the limitations on what we measure if we do assume that in some appropriate sense these properties have some kind of 'pre-existence'.

One of the things that took me a little while to appreciate when I first tried to understand Bell's arguments was the assumption that the result of Alice's measurement cannot depend on the setting of Bob's measurement device (and vice versa). It's so obvious - and it's true in QM too. The only way we could have a dependence (assuming Alice and Bob are actually free to choose the settings) is if some information about Bob's setting reached Alice and affected the result she obtained.

Put this together with the assumption that there are some real properties that are orchestrating things (our so-called hidden variables) and one consequence of this is the Bell inequality.

OK - some of that is a little vague and imprecise, but I'm trying to highlight the essential components in an intuitive way. I don't think anyone would question too much the assumption that local results can't depend on distant settings but this question of whether physical properties pre-exist before measurement in some appropriate sense is really the mind-bender, for me at least.
 
  • #72
Yes, locality should hold, but maybe the question is whether it is in principle possible to determine the result completely with lambda, or whether we should leave the door open for an indeterminacy that gets resolved afterwards.

However, stevendaryl showed that if we can predict with certainty in some cases, then there is no place for indeterminacy, so that the parameters (angles and lambda) should determine the result. Then the correlation is classical: it can have no bumps; it is a sawtooth curve.
 
