Measurement problem and computer-like functions

jk22
Suppose we define the measurement of an observable A by v(A), where v is an 'algorithm that gives out one of the eigenvalues each time it is called' (we accept the axiom of choice).

In this context we have in particular v(A) ≠ v(A), since when we call the left-hand side and then the right-hand side the algorithm could give different values.

Is this not a way out of Bell's theorem, since one cannot factorize the measurement results?

But is this not, at the same time, the end of the logical writing we are used to in maths?
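A toy sketch of such a v(A) 'algorithm' in Python (the helper name `make_measurement` and the eigenvalue list are made up for illustration; the observable is represented only by its eigenvalues):

```python
import random

def make_measurement(eigenvalues):
    """Build a v(A)-style 'algorithm' that gives out one of the
    eigenvalues of A each time it is called (hypothetical sketch)."""
    def v():
        return random.choice(eigenvalues)
    return v

v_A = make_measurement([+1, -1])   # e.g. a spin-like observable
# Two successive calls need not agree, i.e. v(A) != v(A) in general:
results = [v_A() for _ in range(20)]
print(results)
```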
 
You have defined a deterministic observable - it has nothing to do with Bell.

Thanks
Bill
 
Since we have v(A) different from v(A), the expression v(A)v(B)+v(A)v(B') is not equal to v(A)(v(B)+v(B')), so does Bell's theorem not apply?
 
jk22 said:
Since we have v(A) different from v(A), the expression v(A)v(B)+v(A)v(B') is not equal to v(A)(v(B)+v(B')), so does Bell's theorem not apply?

Bell's theorem proves - there is no escaping it - that if QM is correct you can't have both locality and reality. All you have done is define some kind of deterministic observable, which has nothing to do with Bell.

Here is the proof of Bell's theorem:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If you think you have found an out, specify exactly which step is in error.

Thanks
Bill
 
There is no error; my aim is to understand how quantum mechanics violates the inequality. There is no escape from Bell's derivation.

The local-versus-global question is not the issue: in physics we are used to local/global correspondences, e.g. the Euler-Lagrange equations and Maxwell's equations both have local and global formulations.

Leggett showed that nonlocality plus realism does not reproduce quantum mechanics either.

Hence we conclude that the problem is not locality versus nonlocality, but the realism hypothesis.

But what does realistic mean? I thought it was that elements of reality can exist independently of observation?
 
jk22 said:
But what does realistic mean? I thought it was that elements of reality can exist independently of observation?

Technically counter-factual definiteness as defined in my linked paper.

Thanks
Bill
 
Does this mean that we could have known the result without actually having performed the measurement?

Could we say that this is equivalent to saying: it is possible in principle to collect enough data to be able to predict any result, data which is coded in the variable?
 
jk22 said:
Does this mean that we could have known the result without actually having performed the measurement?

Of course not. QM says that's impossible.

jk22 said:
Could we say that this is equivalent to saying: it is possible in principle to collect enough data to be able to predict any result, data which is coded in the variable?

Again QM says that's impossible.

May I suggest a bit of reading about QM:
https://www.amazon.com/dp/0465062903/?tag=pfamazon01-20

Having a look at some of your other posts, your math background may be enough to follow the following, which axiomatically is the essence of QM (see post 137):
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

As you can see right at its foundations QM is probabilistic.

Thanks
Bill
 
What disturbs me is that Bell's theorem is also valid when we use probabilities instead of measurement results. Using probabilities would, in some way, mean that the result is no longer counterfactually defined.
 
  • #10
jk22 said:
What disturbs me is that Bell's theorem is also valid when we use probabilities instead of measurement results. Using probabilities would, in some way, mean that the result is no longer counterfactually defined.

Sure. QM is two things: first, an extension of probability theory; secondly, a theory about that extension applied to observations.

So?

Thanks
Bill
 
  • #11
bhobba said:
Bell's theorem proves - there is no escaping it - that if QM is correct you can't have both locality and reality. All you have done is define some kind of deterministic observable, which has nothing to do with Bell.

Here is the proof of Bell's theorem:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If you think you have found an out, specify exactly which step is in error.

Thanks
Bill

I don't think that there is any error in that paper, but I think that it can be misleading to replace, as the author does, the assumption of realism with the assumption of "counterfactuality" (counterfactualness?). The usual definition of "counterfactual" is that the counterfactuals are the answers to questions of the form "If I had (counter to fact) used device setting X instead of Y, what result would I have obtained?" The assumption that such questions have definite answers is usually called (though not in this paper) the assumption of "counterfactual definiteness". For a theory to be counterfactually definite, measurement results have to be a deterministic function of the values of physical variables describing the system being measured and the measuring devices. The author tacitly makes this assumption by conflating the measurement results A, B, and C with the "hidden variables" describing the system's state prior to being measured.

Of course, in the case of coins, which the author uses to illustrate the principle, this conflation is natural: You're not going to observe that a coin is gold (for example) unless it was already gold before you looked. However, conflating the hidden variables (the author doesn't use this term, but I think his "counterfactual properties" is the same thing as what people usually mean by local hidden variables) with the measurement results (or more precisely, assuming that measurement results are a deterministic function of the hidden variables) gives the misleading impression that nondeterminism is a loophole to Bell's theorem. What's more general than assuming counterfactual definiteness is to assume that the outcome of a measurement is a random variable, with probabilities influenced by the variables describing the system and the measurement device. However, allowing this extra generality doesn't change the conclusion: No local realistic theory, counterfactual or not, can reproduce the predictions of QM for the EPR experiment.

So the author's focus on counterfactuality unnecessarily weakens the conclusions of Bell's proof.
 
  • #12
  • #13
bhobba said:
Here is the proof of Bell's theorem:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If you think you have found an out, specify exactly which step is in error.
"The terms in equation (1) refer to three measurements on the same set of coins. The terms in equation (4) refer to measurements on 3 separate disjoint sets of coins.

In probability theory, whenever you add and subtract probabilities, the expression is only meaningful if all the probabilities are from the same sample space. While you can guarantee that a set of triples of measurements A,B,C each on a pair of coins will be able to generate P(A,B), P(A,C) and P(B,C) that are from the same sample space, there is no way to guarantee that the same can be true for 3 separate measurements in which you only measure A,B on one set of coins, A,C on another set of coins and B,C on yet a different set of coins."
 
  • #14
That doesn't matter - the equality holds regardless. I can take a number of sample spaces and add up the probabilities of events in those spaces to get a number. It turns out to be 3/4, which violates Bell. Indeed, they can't be from the same sample space, because equation 1 says it can be >= 1.

However, your concerns have shifted from the measurement problem and computer-like functions to issues with the proof of Bell's theorem. I suggest you start a new thread about it. We have a number of regular posters here, such as Dr Chinese, who are experts in it and are best positioned to address your concerns.

Thanks
Bill
 
  • #15
I think the point, to understand how QM violates CHSH for example, is that we can't write $$A(\theta_A,\lambda)$$ for the result (see https://en.m.wikipedia.org/wiki/Bell's_theorem); it should be $$A(\theta_A,\Psi)$$.

Indeed, in QM we don't have a lambda, only psi. Since the wavefunction does not in general determine the result, we can see how the violation arises.
 
  • #16
I don't quite understand what you are saying.

However, what's going on is well known. QM is the simplest extension of standard probability theory that allows continuous transformations between pure states:
http://arxiv.org/pdf/quant-ph/0101012.pdf

It turns out that, due to that extension allowing entanglement, you get a different type of correlation than in classical probability theory. If you want it to be like classical probability theory, with properties independent of observation, then you need to violate locality. However, if you accept nature as it is and say, for example, that locality isn't even a valid concept for correlated systems, then there is no issue - we have a different kind of correlation - big deal.

As I often say about QM, the real issue is simply letting go of classically developed intuition about how the world is. Simply accept QM as it is. We have met the enemy and he is us - Pogo.

Anyway, this has gone way off topic. What you proposed in your original post won't resolve anything. If you want to discuss general QM issues, it's best to start other threads.

Thanks
Bill
 
  • #17
stevendaryl said:
So the author's focus on counterfactuality unnecessarily weakens the conclusions of Bell's proof.

Yes - although I wouldn't express it that harshly.

The difference between realism and counter-factual definiteness has been discussed in a number of threads. It's a bit subtle, but I don't think someone starting out really needs to worry about it. Understand what's going on in a general sense first.

Thanks
Bill
 
  • #18
bhobba said:
Simply accept QM as it is. We have met the enemy and he is us - Pogo

So a global variable says, for the CHSH operator S = AB - AB' + A'B + A'B': shall we write that the values are 4, 2, 0, -2, -4, and do we have, for example, the probability $$p(-4)=(\frac{1}{2}(1+\frac{1}{\sqrt{2}}))^4$$?

This was obtained using quantum probabilities for the pairs and assuming them independent: https://www.physicsforums.com/threads/in-bell-are-pairs-independent.827997/

Whereas QM predicts the results $$0,\pm 2\sqrt{2}$$, with $$p(-2\sqrt{2})=1$$.

My problem with QM is that adding +1s and -1s gives a non-integer result. How do you explain that?
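As a quick numerical sanity check of the quoted value, one can just evaluate the proposed expression (this only computes the formula above under the independence assumption from the linked thread; it does not validate that assumption):

```python
import math

# The proposed probability p(-4) = ((1/2) * (1 + 1/sqrt(2)))**4,
# assuming the four pair outcomes are independent (as in the post above).
p_single = 0.5 * (1 + 1 / math.sqrt(2))   # quantum probability for one pair
p_minus_4 = p_single ** 4
print(p_single, p_minus_4)   # roughly 0.854 and 0.531
```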
 
  • #19
jk22 said:
My problem with QM is that adding +1s and -1s gives a non-integer result. How do you explain that?

Can you be a bit clearer - I can't follow what your issue is.

Also, we are getting off topic here - please start a new thread about the proof of Bell's theorem - we have wandered well and truly from what you posted about.

Thanks
Bill
 
  • #20
DirkMan said:
"The terms in equation (1) refer to three measurements on the same set of coins. The terms in equation (4) refer to measurements on 3 separate disjoint sets of coins.

In probability theory, whenever you add and subtract probabilities, the expression is only meaningful if all the probabilities are from the same sample space. While you can guarantee that a set of triples of measurements A,B,C each on a pair of coins will be able to generate P(A,B), P(A,C) and P(B,C) that are from the same sample space, there is no way to guarantee that the same can be true for 3 separate measurements in which you only measure A,B on one set of coins, A,C on another set of coins and B,C on yet a different set of coins."
The point is that Bell's inequality assumes local hidden variables (or local realism). So even though you can't actually measure A, B and C on the same coins, they are all defined. Secondly, Bell also assumes a "no conspiracy", freedom, or fair-sampling assumption. When you actually observe A and B and look at the correlation, you get the same result (up to statistical sampling error) as you would have got from, say, the correlation between A and B when observing A and C.
 
  • #21
bhobba said:
Bell's theorem proves - there is no escaping it - that if QM is correct you can't have both locality and reality. All you have done is define some kind of deterministic observable, which has nothing to do with Bell.

Here is the proof of Bell's theorem:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

If you think you have found an out, specify exactly which step is in error.

Thanks
Bill
Perhaps worth mentioning that Lorenzo Maccone's paper can be found on arXiv http://arxiv.org/abs/1212.5214 and was published in Am. J. Phys. 81, 854 (2013).

There are similar absolutely elementary proofs of the "four variable" Bell inequality aka CHSH inequality. My favourite is on slide 7 of http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem. And note that at long last they finally did the definitive (loophole free) experiment: http://arxiv.org/abs/1508.05949
 
  • #22
gill1109 said:
Perhaps worth mentioning that Lorenzo Maccone's paper can be found on arXiv http://arxiv.org/abs/1212.5214 and was published in Am. J. Phys. 81, 854 (2013).

There are similar absolutely elementary proofs of the "four variable" Bell inequality aka CHSH inequality. My favourite is on slide 7 of http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem. And note that at long last they finally did the definitive (loophole free) experiment: http://arxiv.org/abs/1508.05949

I have the same complaint about those slides that I had about another paper linked to. Slide number 6 says: "Realism = existence of counterfactual outcomes". That doesn't seem right to me. A realistic model could allow for a nondeterministic relationship between "hidden variables" and measurement outcomes. For example, you could have a model along the following lines:
  • Each twin pair is associated with a hidden variable \lambda, randomly produced according to a probability distribution P(\lambda)
  • Alice will measure spin-up with probability P_A(\lambda, \alpha) where \alpha is the setting of her measuring device.
  • Bob will measure spin-up with probability P_B(\lambda, \beta) where \beta is the setting of his measuring device.
  • For fixed lambda, the probabilities are independent; the probability of both Alice and Bob getting spin-up is given by: P_{A\&B}(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) \cdot P_B(\lambda, \beta)
I would call such a model "realistic"; it's just not deterministic, and the values of the "hidden variables" are not directly measurable. But in such a model, there are no counterfactual outcomes (there is no definite answer to a question of the form "What measurement result would Alice have gotten if she had chosen setting \alpha' instead of \alpha?"). So I don't think it's correct to equate realism with counterfactual outcomes.

On the other hand, for the purposes of Bell's proof as applied to EPR, it doesn't actually matter, because the perfect correlations (or anti-correlations) between Alice's and Bob's results when their settings are equal implies that any local hidden-variables explanation must, in fact, be counterfactually definite. But the counterfactuality is a conclusion, rather than an assumption.
 
  • #23
stevendaryl said:
I have the same complaint about those slides that I had about another paper linked to. Slide number 6 says: "Realism = existence of counterfactual outcomes". That doesn't seem right to me.

Its not right. But I think going into it would confuse the beginner more than illuminate. Once they grasp the essentials of it fine points like that can be gone into.

Thanks
Bill
 
  • #24
stevendaryl said:
I have the same complaint about those slides that I had about another paper linked to. Slide number 6 says: "Realism = existence of counterfactual outcomes". That doesn't seem right to me. A realistic model could allow for a nondeterministic relationship between "hidden variables" and measurement outcomes. For example, you could have a model along the following lines:
  • Each twin pair is associated with a hidden variable \lambda, randomly produced according to a probability distribution P(\lambda)
  • Alice will measure spin-up with probability P_A(\lambda, \alpha) where \alpha is the setting of her measuring device.
  • Bob will measure spin-up with probability P_B(\lambda, \beta) where \beta is the setting of his measuring device.
  • For fixed lambda, the probabilities are independent; the probability of both Alice and Bob getting spin-up is given by: P_{A\&B}(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) \cdot P_B(\lambda, \beta)
I would call such a model "realistic"; it's just not deterministic, and the values of the "hidden variables" are not directly measurable. But in such a model, there are no counterfactual outcomes (there is no definite answer to a question of the form "What measurement result would Alice have gotten if she had chosen setting \alpha' instead of \alpha?"). So I don't think it's correct to equate realism with counterfactual outcomes.

On the other hand, for the purposes of Bell's proof as applied to EPR, it doesn't actually matter, because the perfect correlations (or anti-correlations) between Alice's and Bob's results when their settings are equal implies that any local hidden-variables explanation must, in fact, be counterfactually definite. But the counterfactuality is a conclusion, rather than an assumption.
I agree that my definition seems to exclude stochastic hidden variable models. But every probabilistic model can be recast as a deterministic model simply by adding another layer of hidden variables. Remember: in Kolmogorovian probability theory, every random variable is "just" a deterministic function of a hidden variable omega, and it is omega which is picked by chance. I claim that, mathematically, every local realistic model *can* be represented as a deterministic model with counterfactual outcomes. Of course the mathematical representation need not be physically very interesting. Those counterfactual outcomes exist in a theoretical mathematical description of our experiment. I am not claiming that they exist in reality (whatever that might mean).
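A minimal Python sketch of this recasting; the response probability `p_A` and all names are hypothetical (any map into [0, 1] would do):

```python
import math
import random

def p_A(lam, alpha):
    # Hypothetical local response probability; any function into [0, 1] works.
    return 0.5 * (1 + lam * math.cos(alpha))

def stochastic_A(lam, alpha):
    """Stochastic local model: +1 with probability p_A(lam, alpha)."""
    return +1 if random.random() < p_A(lam, alpha) else -1

def deterministic_A(lam, omega, alpha):
    """The same model made deterministic: the randomness is promoted to
    an extra hidden variable omega ~ Uniform[0, 1)."""
    return +1 if omega < p_A(lam, alpha) else -1
```

Drawing `omega` uniformly and evaluating `deterministic_A` reproduces the statistics of `stochastic_A` exactly, and the counterfactual outcome for any other setting `alpha'` is now a well-defined function value, which is the point above.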
 
  • #25
I've always had trouble understanding what else, other than non-locality, could lead to Bell violations - in other words, what a local but non-realistic model means.
As gill explains, any mumbo-jumbo about something not existing until it gets measured and such is still the same as it being determined (deterministically, lol) from a hidden variable. The only question is whether the variables that determine it are local or not, and I see no way to cause violations if they are local.
 
  • #26
I think that to see a violation one should consider that in CHSH (because it is used in most experiments), at least when it is written AB + AB', the operator A is measured twice, hence we can have two different results for the two appearances of A.

On the other hand, if you consider that these two A's have the same measurement result, like a register, then you can have no violation.
 
  • #27
georgir said:
I've always had trouble understanding what else, other than non-locality, could lead to Bell violations - in other words, what a local but non-realistic model means

I haven't spent much effort trying to work out the details, but it seems to me that some kind of local world-splitting could give a local, non-realistic toy model to explain EPR-type correlations. As I said, I'm fuzzy about the details, but such a model might look like this:
  1. You produce a twin pair of anti-correlated spin 1/2 particles at some location intermediate between Alice and Bob.
  2. When Alice measures the spin of one particle, she splits into two noninteracting copies, one of which sees spin-up, and the other of which sees spin-down.
  3. Same for Bob.
  4. If Alice and Bob compare notes by sending a message telling the other what result he/she got, the message is nondeterministically delivered to one copy or the other. If Alice saw spin-up at angle \alpha, and Bob also chose angle \alpha, then the message from Alice would be delivered to the copy of Bob that saw spin-down. If they chose different angles, then there would be some nonzero chance of the messages being sent to either copy.
  5. After one copy of Alice interacts with a copy of Bob, the other copies become inaccessible for future communication.
It seems that such a toy model, which would only apply to EPR and not to QM more generally, could explain the EPR correlations. I'm not claiming this as a serious possibility, but just as an example of a local, non-realistic model. It's non-realistic in the sense that there is no "hidden variable" corresponding to the true, unknown spin of a particle.
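A toy simulation of this splitting story, with the pairing probabilities put in by hand to match the singlet state (so it only illustrates where the splits and the copy-matching happen; it does not explain the correlations, and all names are made up):

```python
import math
import random

def epr_trial(alpha, beta):
    """One run of the toy splitting model sketched above. Alice and Bob
    each split into a +1 copy and a -1 copy; when notes are compared,
    one (a, b) pairing of copies survives, chosen with the singlet-state
    joint probability P(a, b) = (1 - a*b*cos(alpha - beta)) / 4."""
    pairings = [(a, b) for a in (+1, -1) for b in (+1, -1)]
    weights = [(1 - a * b * math.cos(alpha - beta)) / 4 for a, b in pairings]
    return random.choices(pairings, weights=weights)[0]

def correlation(alpha, beta, n=100000):
    return sum(a * b for a, b in (epr_trial(alpha, beta) for _ in range(n))) / n

# Equal settings: the surviving copies are perfectly anti-correlated.
print(correlation(0.0, 0.0))   # -1.0
```

The splits themselves are local to Alice and Bob; only the note-comparison step selects which copies meet, which is where the correlation gets decided in this toy picture.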
 
  • #28
georgir said:
I've always had trouble understanding what else, other than non-locality, could lead to Bell violations - in other words, what a local but non-realistic model means.
As gill explains, any mumbo-jumbo about something not existing until it gets measured and such is still the same as it being determined (deterministically, lol) from a hidden variable. The only question is whether the variables that determine it are local or not, and I see no way to cause violations if they are local.
I think that "non realistic" means that there are no variables that determine the outcome! It is irreducibly random. It is not pre-determined.
 
  • #29
Yes, if you have $$A(a,hv)=rand()$$, where rand() is a computer-like function that returns 1 or -1 each time it is called, then it violates the inequality, but you cannot predict with certainty in any case.

I think that the local approach is faced with a dilemma: either you violate, xor you can predict with certainty; but quantum mechanics does both, at least theoretically.

It is then an experimental question whether both arise. Violation seems okay, but I have seen experimental curves that were only at 87% of the predicted value for the same angles.
 
  • #30
jk22 said:
Yes, if you have $$A(a,hv)=rand()$$, where rand() is a computer-like function that returns 1 or -1, then it violates the inequality, but you cannot predict with certainty in any case.

I don't think that's true. Adding randomness does not allow you to violate the Bell inequality unless you add randomness nonlocally. That is, you can assume that Alice's result is random, and that Bob's result is random, but as long as Alice's random choice is independent of Bob's random choice, you're not going to violate Bell's inequality. As Richard says, localized randomness is indistinguishable from a deterministic model with an extra hidden variable.
 
  • #31
gill1109 said:
I think that "non realistic" means that there are no variables that determine the outcome! It is irreducibly random. It is not pre-determined.

To me, realism versus non-realism is about whether the theory describes the objective, observer-independent state of the system under consideration, as opposed to describing something subjective (our knowledge, for example). A classical probability theory is non-realistic in the sense that the probabilities don't reflect anything about the world (except in the case of probabilities 0 and 1), they only reflect our knowledge, or lack of knowledge about the world.
 
  • #32
stevendaryl said:
I don't think that's true. Adding randomness does not allow you to violate the Bell inequality unless you add randomness nonlocally.

If I take the CHSH inequality

ab + ab' + a'b - a'b' <= 2

and take a, a', b and b' equal to rand(), then you can obviously obtain 4. This you can check on a computer, but it is straightforward.

This said, the way out of the local dilemma is that each pair ab shares the same variable but each term receives a different variable.
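The two readings of "take them equal to rand()" can indeed be checked on a computer. A sketch, with a hypothetical `rand_pm1` standing in for rand():

```python
import random

def rand_pm1():
    """rand(): gives +1 or -1 each time it is called."""
    return random.choice((+1, -1))

# Reading 1: a fresh rand() call for every appearance of a symbol.
# Each of the four products is then an independent +/-1, and a single
# draw of t1 + t2 + t3 - t4 can reach 4:
draws = [rand_pm1() * rand_pm1() + rand_pm1() * rand_pm1()
         + rand_pm1() * rand_pm1() - rand_pm1() * rand_pm1()
         for _ in range(10000)]
print(max(draws))   # 4, with overwhelming probability over 10000 draws

# Reading 2: one value per symbol per round, reused across the terms
# ("like a register"). Then ab + ab' + a'b - a'b' = a(b + b') + a'(b - b'):
rounds = []
for _ in range(10000):
    a, ap, b, bp = rand_pm1(), rand_pm1(), rand_pm1(), rand_pm1()
    rounds.append(a * b + a * bp + ap * b - ap * bp)
print(max(abs(r) for r in rounds))   # 2
```

In reading 2 the combination is pinned to exactly ±2 in every round (if b = b' the sum is 2ab; if b = -b' it is 2a'b), which is the algebraic core of CHSH; only re-sampling every appearance, as in reading 1, lets a single draw reach 4.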
 
  • #33
jk22 said:
If I take the CHSH inequality

ab + ab' + a'b - a'b' <= 2

and take a, a', b and b' equal to rand(), then you can obviously obtain 4. This you can check on a computer, but it is straightforward.

I think you've misunderstood what the a, a', etc., mean in the CHSH inequality. The inequality is about correlations, not values. The real statement of the inequality is:

\rho(a,b) + \rho(a, b') + \rho(a',b) - \rho(a', b') \leq 2

\rho(\alpha, \beta) is computed this way: (for the spin-1/2 case)
  • Generate many twin pairs, and have Alice and Bob measure the spins of their respective particles at a variety of detector orientations.
  • Let \alpha_n be Alice's setting on trial number n
  • Let \beta_n be Bob's setting on trial number n
  • Define A_n to be +1 if Alice measures spin-up on trial number n, and -1 if Alice measures spin-down.
  • Define B_n to be \pm 1 depending on Bob's result for trial n
Define \rho(\alpha, \beta) to be the average of A_n \cdot B_n, averaged over those trials for which \alpha_n = \alpha and \beta_n = \beta. Then if a and a' are two different values for \alpha, and b and b' are two different values for \beta, the CHSH inequality says that \rho(a,b) + \rho(a',b) + \rho(a,b') - \rho(a',b') \leq 2.

It doesn't make any sense to say \rho(\alpha, \beta) = rnd(). \rho is not a random number, it's an average of products of random variables.

You can certainly propose a model where A_n = rnd() for each n, and similarly for B_n. But that would not make \rho(\alpha, \beta) = rnd().
 
  • #34
The demonstration uses factorization of measurement results: if the measurement result of this sum is smaller than 2, then of course the average is too.

The fact is that QM says we cannot factorize

$$ A_n B_n+A_nB'_n\neq A_n(B_n+B'_n)$$

because the A_n are different, since A is measured two times.

To make sense we should have $$\rho_i$$ with $$A_{i,n},B_{i,n}$$:

the $$A_n$$ for the first rho are not the same as for the second.
 
  • #35
jk22 said:
The demonstration uses factorization of measurement results: if the measurement result of this sum is smaller than 2, then of course the average is too.

The fact is that QM says we cannot factorize

$$ A_n B_n+A_nB'_n\neq A_n(B_n+B'_n)$$

because the A_n are different, since A is measured two times.

Well, in the CHSH inequality, you compute \rho(a,b) by averaging over all trials for which Alice's setting is a and Bob's setting is b. Then you independently compute \rho(a,b'), \rho(a',b) and \rho(a',b'). You never compute a quantity such as A_n B_n + A_n B'_n (because you can't, for a single value of n, measure both B and B').

Anyway, my point is that the use of random number generators doesn't affect the conclusion of Bell's theorem, or at least I don't think it does.
 
  • #36
But how do you prove that the sum of the four rhos is smaller than 2, then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.
 
  • #37
jk22 said:
But how do you prove that the sum of the four rhos is smaller than 2, then?

$n$ just means the nth measurement; B and B' don't need to be measured simultaneously.

But the quantity A_n B_n + A'_n B_n + A_n B'_n - A'_n B'_n does not appear in the derivation of the CHSH inequality.

As for what is proved:

A local, memoryless hidden-variables model for the EPR experiment would involve three probability distributions:
  1. P(\lambda), the probability distribution for the hidden variable \lambda.
  2. P_A(\lambda, \alpha), the probability of Alice getting result +1 given that the hidden variable has value \lambda and Alice's device has setting \alpha
  3. P_B(\lambda, \beta), the probability of Bob getting result +1 given that the hidden variable has value \lambda and Bob's device has setting \beta
Such a model could be used to generate sample EPR results in the following way:
  • In round number n, Alice chooses a setting \alpha_n
  • Bob chooses a setting \beta_n.
  • \lambda_n is chosen randomly, according to the probability distribution P(\lambda)
  • Then with probability P_A(\lambda_n, \alpha_n) let A_n = +1. With probability 1 - P_A(\lambda_n, \alpha_n) let A_n = -1
  • With probability P_B(\lambda_n, \beta_n) let B_n = +1. With probability 1 - P_B(\lambda_n, \beta_n) let B_n = -1
  • Then define \rho_n = A_n \cdot B_n
After many, many rounds, you let \rho(\alpha, \beta) be the average of \rho_n over those rounds for which \alpha_n = \alpha and \beta_n = \beta.

The CHSH proof shows that no such model can violate a certain inequality (in the limit of many, many rounds).
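The sampling procedure above is easy to run. A sketch with one hypothetical choice of P(\lambda), P_A and P_B (for this particular model the analytic correlation is \rho(\alpha,\beta) = -(1/2)\cos(\alpha - \beta), so S comes out near -\sqrt{2}):

```python
import math
import random

def p_A(lam, alpha):
    # Hypothetical response probability for Alice (any map into [0, 1] works).
    return 0.5 * (1 + math.cos(alpha - lam))

def p_B(lam, beta):
    # Hypothetical response probability for Bob.
    return 0.5 * (1 - math.cos(beta - lam))

def rho(alpha, beta, n=100000):
    """Average of A_n * B_n over n rounds at fixed settings, following
    the sampling procedure described in the post above."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)   # lambda ~ P(lambda)
        A = +1 if random.random() < p_A(lam, alpha) else -1
        B = +1 if random.random() < p_B(lam, beta) else -1
        total += A * B
    return total / n

# Standard CHSH settings; for this local model |S| stays near sqrt(2),
# inside the bound of 2, whereas QM can reach 2*sqrt(2):
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = rho(a, b) + rho(a, bp) + rho(ap, b) - rho(ap, bp)
print(S)
```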
 
  • #38
stevendaryl said:
The CHSH proof shows that no such model can violate a certain inequality (in the limit of many, many rounds).

Actually, the proof is about correlations computed using a continuous distribution:

\rho(\alpha, \beta) = \int_\lambda P(\lambda) d\lambda (P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta))

where P^+(\lambda, \alpha, \beta) is the probability that Alice's and Bob's results have the same sign, and P^-(\lambda, \alpha, \beta) is the probability that they have opposite signs. In terms of the probabilities given earlier:

P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta)) and
P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
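For completeness, the algebraic step behind the bound can be spelled out in this notation. Writing E_A(\lambda,\alpha) = 2P_A(\lambda,\alpha) - 1 and E_B(\lambda,\beta) = 2P_B(\lambda,\beta) - 1, the expressions above give P^+ - P^- = E_A E_B, so \rho(\alpha,\beta) = \int P(\lambda)\, E_A(\lambda,\alpha) E_B(\lambda,\beta)\, d\lambda, and therefore:

```latex
\rho(a,b) + \rho(a,b') + \rho(a',b) - \rho(a',b')
  = \int P(\lambda)\,\Big(
        E_A(\lambda,a)\,\big[E_B(\lambda,b) + E_B(\lambda,b')\big]
      + E_A(\lambda,a')\,\big[E_B(\lambda,b) - E_B(\lambda,b')\big]
      \Big)\, d\lambda
```

Since |E_A| \le 1 and |E_B| \le 1, and |x+y| + |x-y| \le 2 whenever |x|, |y| \le 1, the integrand is bounded by 2 in absolute value, so |S| \le 2. This is the standard CHSH derivation, stated here in the probabilities defined just above.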
 
  • #39
stevendaryl said:
Actually, the proof is about correlations computed using a continuous distribution:

\rho(\alpha, \beta) = \int_\lambda P(\lambda) d\lambda (P^+(\lambda, \alpha, \beta) - P^-(\lambda, \alpha, \beta))

where P^+(\lambda, \alpha, \beta) is the probability that Alice's and Bob's results have the same sign, and P^-(\lambda, \alpha, \beta) is the probability that they have opposite signs. In terms of the probabilities given earlier:

P^+(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) P_B(\lambda, \beta) + (1 - P_A(\lambda, \alpha))(1 - P_B(\lambda, \beta)) and
P^-(\lambda, \alpha, \beta) = P_A(\lambda, \alpha) (1 - P_B(\lambda, \beta)) + (1 - P_A(\lambda, \alpha)) P_B(\lambda, \beta)

But tests of the CHSH inequality assume that these continuously defined correlations can be approximated by discretely computed correlations.
Actually (and IMHO), the proof uses elementary logic and elementary probability theory, but you can hide it behind calculus if you like.
http://www.slideshare.net/gill1109/epidemiology-meets-quantum-statistics-causality-and-bells-theorem
http://arxiv.org/abs/1207.5103
Statistics, Causality and Bell's Theorem
Statistical Science 2014, Vol. 29, No. 4, 512-528
 
  • #40
stevendaryl said:
... it seems to me that some kind of local world-splitting could give a local, non-realistic toy model to explain EPR-type correlations...
But I think if a world-splitting is caused by an event A, and in the two branches it has different variable values for objects/events non-local to A, it still should not count as a local model. So even world-splitting doesn't fit the bill.
EDIT: Plus, I feel any world-splitting model could be Occam-ified by looking at just a single branch. But let's not go into religious wars right now ;)
 
  • #41
stevendaryl said:
they only reflect our knowledge, or lack of knowledge about the world.
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.
 
  • #42
DirkMan said:
Maybe you're too quick to put an equal sign between "having knowledge" and "not having knowledge". I mean think about it...when you measure Alice, you instantly know about Bob. Maybe the very thing that we've been looking for has been literally staring at us in the mirror all along.

I'm not sure I understand what you're saying. What are you saying has been staring us in the face?
 
  • #43
georgir said:
But I think if a world-splitting is caused by an event A and it has different variable values in the two branches for objects/events non-local to A it still should not count as a local model.

I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.

georgir said:
EDIT: Plus I feel any world-splitting model could be Occam-ified by looking at just a single branch.

But to go from multiple branches to a single branch is a nonlocal change. So it's better from the point of view of occam, but not from the point of view of locality.
 
  • #44
stevendaryl said:
I used the wrong words. It's not the entire world that splits, it's just Alice who splits when she does a measurement, and Bob who splits when he does a measurement. So it's only local parts of the world that split locally.
But if Alice splits into two branches where Bob's particle or detector has different variables (especially ones somehow related to her detector readings), I'd still classify this as non-local. If she splits into two branches where the only difference is her own readings, then that is equivalent to a single-universe local random-variable model.

EDIT: To expand on that, I can see EPR explanations to the tune of "the universe splits into two branches, one where BOTH particles were H, one where BOTH particles were V", but if the particles are already separated (the split happening on measurement), this split is practically a non-local interaction.
 
Last edited:
  • #45
stevendaryl said:
\rho(a,b) + \rho(a, b') + \rho(a',b) - \rho(a', b') \leq 2

\rho(\alpha, \beta) is computed this way: (for the spin-1/2 case)
  • Generate many twin pairs, and have Alice and Bob measure the spins of their respective particles at a variety of detector orientations.
  • Let \alpha_n be Alice's setting on trial number n
  • Let \beta_n be Bob's setting on trial number n
  • Define A_n to be +1 if Alice measures spin-up on trial number n, and -1 if Alice measures spin-down.
  • Define B_n to be \pm 1 depending on Bob's result for trial n
Define \rho(\alpha, \beta) to be the average of A_n \cdot B_n, averaged over those trials for which \alpha_n = \alpha and \beta_n = \beta. Then if a and a' are two different values for \alpha, and b and b' are two different values for \beta, then the CHSH inequality says that \rho(a,b) + \rho(a',b) + \rho(a,b') - \rho(a',b') \leq 2.
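The averaging procedure just described can be sketched in a few lines of code. A minimal sketch (the trial data below are made up for illustration):

```python
# Sketch of the trial-averaged correlation described above: rho(alpha, beta)
# is the mean of A_n * B_n over the trials whose settings were (alpha, beta).
def rho(trials, alpha, beta):
    products = [a_n * b_n
                for (alpha_n, beta_n, a_n, b_n) in trials
                if alpha_n == alpha and beta_n == beta]
    return sum(products) / len(products)

# Each trial: (Alice's setting, Bob's setting, Alice's +-1 result, Bob's +-1 result)
trials = [(0, 0, +1, +1), (0, 0, +1, -1), (0, 45, -1, -1), (0, 0, -1, -1)]
print(rho(trials, 0, 0))  # averages the products +1, -1, +1 over the three (0, 0) trials
```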
I just looked at a trivial average when the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
 
  • #46
jk22 said:
I just looked at a trivial average when the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.
Rho usually stands for probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.
 
  • #47
gill1109 said:
Rho usually stands for probability density. For each of the four correlations, Bell assumes that lambda is drawn at random from the same probability distribution.

Actually, in the Wikipedia article about the CHSH inequality, \rho(\alpha, \beta) is the correlation. If A(\lambda, \alpha) and B(\lambda, \beta) both take on values +/- 1, then \rho(\alpha, \beta) is the average over \lambda of A(\lambda, \alpha) B(\lambda, \beta).
 
  • #48
stevendaryl said:
Actually, in the Wikipedia article about the CHSH inequality, \rho(\alpha, \beta) is the correlation. If A(\lambda, \alpha) and B(\lambda, \beta) both take on values +/- 1, then \rho(\alpha, \beta) is the average over \lambda of A(\lambda, \alpha) B(\lambda, \beta).
Yes, you are right; there are a lot of alternative notations, which can be very confusing. A lot of people use E(a, b) for the correlation, and it's the integral over lambda of A(lambda, a) B(lambda, b) rho(lambda) d lambda, where rho(lambda) is the probability density of the hidden variable lambda.

The point being that when you change the settings, the probability density of lambda does not change. This is the freedom assumption (no conspiracy, fair sampling, ...). Bell's theorem (a theorem belonging to metaphysics) is not "QM is incompatible with LHV"; it's "QM is incompatible with LHV + no-conspiracy". Bell himself referred to his inequality (and later the CHSH improvement) as "my theorem". I would say that Bell's inequality is a rather elementary mathematical theorem about the limits of distributed (classical) computing. You can't build a network of computers which wins the "Bell game".
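The "Bell game" mentioned here is usually formalized as the CHSH game (this formalization is my gloss, not spelled out in the post): Alice and Bob each receive a bit (x, y) and output bits (a, b), winning when a XOR b equals x AND y. Enumerating every deterministic classical strategy shows the best classical win rate is 3/4:

```python
import itertools

# Sketch: brute-force the classical value of the CHSH game.
# fa maps Alice's input bit x to her output; fb does the same for Bob.
best = 0.0
for fa in itertools.product([0, 1], repeat=2):      # Alice's outputs for x = 0, 1
    for fb in itertools.product([0, 1], repeat=2):  # Bob's outputs for y = 0, 1
        wins = sum((fa[x] ^ fb[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
print(best)  # → 0.75
```

Quantum strategies using entangled particles beat this 3/4 bound, which is the game-theoretic face of the CHSH violation.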
 
Last edited:
  • #49
jk22 said:
I just looked at a trivial average when the A_n are limited to one value. The continuous model is $$A(a,\lambda_1)B(b,\lambda_1)\rho(\lambda_1)+A(a,\lambda_2)B(b',\lambda_2)\rho(\lambda_2)+A(a',\lambda_3)B(b,\lambda_3)\rho(\lambda_3)-A(a',\lambda_4)B(b',\lambda_4)\rho(\lambda_4)$$

I looked at the case where we take away the integrals because there is only one measurement value.

What you wrote doesn't make sense to me. If you only use a single "round" of the experiment, then you can't have \lambda_1 and \lambda_2, you just have a single value of \lambda. Also, what does "\rho" mean in your formula?

I think you're getting confused about this. There are two different correlations computed: (1) the actual measured correlations, and (2) the predicted correlations.

The way that the actual measured correlations are computed is this:
  1. For every round (labeled n), there are corresponding values for \alpha_n and \beta_n, the settings of the two detectors, and there are corresponding values for the measured results, A_n and B_n
  2. To compute \rho(\alpha, \beta), you average A_n \cdot B_n over all those rounds n such that \alpha_n = a and \beta_n = b
So there is no \lambda in what is actually measured. \lambda is only involved in the hypothetical model for explaining the results. The local hidden variables explanation of the results are:
  • For every round n there is a corresponding value of some hidden variable, \lambda_n
  • A_n is a (deterministic) function of \alpha_n and \lambda_n
  • B_n is a (deterministic) function of \beta_n and \lambda_n
What you suggested is that rather than having A_n be a deterministic function, it could be nondeterministic, using say a random number generator. That actually doesn't make any difference. Intuitively, you can think that a nondeterministic function can be turned into a deterministic function of a random number r. Then r_n (the value of r on round n) can be incorporated into \lambda_n. It's just an extra hidden variable.
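The point about deterministic response functions can be illustrated with a toy simulation: pick any deterministic functions A and B of the local setting and the shared lambda (the particular functions below are arbitrary, made-up examples), average over many lambdas, and the CHSH combination never exceeds 2:

```python
import random

# Sketch of a local-hidden-variable model: A and B depend only on the
# local setting and the shared hidden variable lam. Any local randomness
# could be absorbed into lam, as described above.
def A(setting, lam):
    return +1 if (setting + lam) % 2 == 0 else -1

def B(setting, lam):
    return +1 if (setting * lam) % 2 == 0 else -1

def correlation(a, b, lambdas):
    return sum(A(a, lam) * B(b, lam) for lam in lambdas) / len(lambdas)

random.seed(0)
lambdas = [random.randrange(100) for _ in range(10000)]
a, a2, b, b2 = 0, 1, 0, 1
S = (correlation(a, b, lambdas) + correlation(a, b2, lambdas)
     + correlation(a2, b, lambdas) - correlation(a2, b2, lambdas))
assert abs(S) <= 2  # the CHSH bound holds for any such local model
```

Swapping in any other deterministic A and B leaves the bound intact; only the quantum predictions, with correlations like -cos(alpha - beta), exceed it.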
 
  • #50
Rho is the probability density of the hidden variable; in fact, if I consider a single run it is simply 1.

The fact that we have only one lambda comes from a renaming of the integration variable. If we don't have the integration, then allowing for four different hidden variables is maybe reasonable.
 
