Quantum mechanics is not weird, unless presented as such

  • #451
stevendaryl said:
The assumption that it was true beforehand, and Alice's measurement only revealed its truth is a hidden variables theory, which is ruled out by Bell's theorem.
That's not true. Bell's theorem rules out non-contextual hidden variables. This is a critical assumption in the derivation of the inequality.
 
  • #452
rubi said:
That's not true. Bell's theorem rules out non-contextual hidden variables. This is a critical assumption in the derivation of the inequality.

Well, I'm not sure what the "non-contextual" adjective implies here. What would be an example of a contextual hidden-variables theory?
 
  • #453
stevendaryl said:
Well, I'm not sure what the "non-contextual" adjective implies here. What would be an example of a contextual hidden-variables theory?
Non-contextual means that the hidden variables can be modeled on a single joint probability space. One could call QM itself a contextual hidden variable theory.
This is a nice introduction: http://www.mdpi.com/1099-4300/10/2/19/pdf
 
  • #454
rubi said:
Non-contextual means that the hidden variables can be modeled on a single joint probability space. One could call QM itself a contextual hidden variable theory.
This is a nice introduction: http://www.mdpi.com/1099-4300/10/2/19/pdf

I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by seeing an explicit hidden-variables model that reproduces the statistics of EPR.
 
  • #455
stevendaryl said:
I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by seeing an explicit hidden-variables model that reproduces the statistics of EPR.

I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor, and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
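As a toy illustration (my sketch, not part of the original post), here is what a noncontextual hidden variable for the coin pair would look like in code: each pair carries one shared bit fixed at the source, and the outcome ignores every detail of the slap:

```python
import numpy as np

rng = np.random.default_rng(0)

def coin_pair():
    # The hidden variable: which coin ends up "heads" is fixed at the
    # source. The details of how each coin is slapped play no role.
    h = rng.integers(0, 2)
    return h, 1 - h

pairs = [coin_pair() for _ in range(10)]
print(all(a != b for a, b in pairs))  # True: perfect anti-correlation
```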
 
  • #456
Isn't it just another way of saying what Peres says? Changing the observable bases changes the experiment, so the anticorrelation arises for one experiment and not another (contextually). If you re-define "reality" to be the probability spectrum for specific non-local experiments, then sure, reality isn't dead, but it's irreducibly setup-dependent (not so "real")...
 
  • #457
stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor, and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
The end result must conserve momentum, so the only detail that matters physically is that. The arrangements do seem to be irrelevant.

Envisage this: when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide whether the detectors will click, regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.
 
  • #458
Mentz114 said:
The end result must conserve momentum, so the only detail that matters physically is that.

Well, angular momentum in the case that I'm talking about.

Mentz114 said:
The arrangements do seem to be irrelevant.

Envisage this: when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide whether the detectors will click, regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.

I don't understand this business about being part of the probability space. Let ##P_A(\vec{a}, \alpha, \lambda)## be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis ##\vec{a}##, and that ##\alpha## represents other details of Alice's detector (above and beyond orientation), and ##\lambda## represents details about the production of the twin pair. Similarly, let ##P_B(\vec{b}, \beta, \lambda)## be the probability that Bob will measure spin-up for his particle, given that he measures along axis ##\vec{b}##, and that ##\beta## represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality: that ##P_A## doesn't depend on ##\vec{b}## and ##P_B## doesn't depend on ##\vec{a}##.

But the prediction of QM for EPR is perfect anti-correlation, which means:

If Alice measures spin-up at angle ##\vec{a}##, then Bob will measure spin-down at angle ##\vec{a}##. That seems to me to mean that the probabilities must be 0 or 1:

If ##P_A(\vec{a}, \alpha, \lambda)## is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice-versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given ##\lambda##, which in turn implies that the details ##\alpha## and ##\beta## don't matter.

I don't think that the non-contextuality is an assumption, I think it follows from the perfect anti-correlations.
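To spell out that step (a sketch I am adding, in the notation above, assuming locality so that the joint outcome probabilities factorize given ##\lambda##): perfect anti-correlation at equal angles means that for every ##\lambda## that actually occurs,
$$P_A(\vec{a}, \alpha, \lambda)\,P_B(\vec{a}, \beta, \lambda) = 0 \quad\text{and}\quad \bigl(1 - P_A(\vec{a}, \alpha, \lambda)\bigr)\bigl(1 - P_B(\vec{a}, \beta, \lambda)\bigr) = 0,$$
since "both spin-up" and "both spin-down" never happen. If ##P_A## were strictly between 0 and 1, the first equation would force ##P_B = 0##, and the second would then force ##1 - P_A = 0##, a contradiction. So ##P_A, P_B \in \{0, 1\}##.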
 
  • #459
stevendaryl said:
I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by seeing an explicit hidden-variables model that reproduces the statistics of EPR.
Do you doubt the fact that Bell makes such an assumption?

Bell's proof is not incorrect. His theorem excludes a wide range of hidden variable theories and proves that QM is definitely non-classical, since classical theories are non-contextual. This fact is undisputed. The theorem is just not strong enough to exclude common causes. Of course you can still be of the opinion that QM is non-local. All I'm saying is that this is not backed up by mathematics and therefore remains a belief until you figure out how to prove Bell's theorem without assuming a joint probability space.

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong. Nevertheless, I suppose you could take the quantum state to be a contextual hidden variable. If you don't like this idea, it still doesn't free you from the burden of proof.

stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor, and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
What if the coins are magnetized (heads = N, tails = S) and instead of slapping down the coin, Alice and Bob use bar magnets, which they can arrange freely either in the NS or the SN direction. If they compare their results, then they will find that the results are either correlated or anti-correlated, depending on whether they chose the same arrangement or not. (Now of course, one would have to check the inequality in order to find out whether this is really contextual or admits a joint probability space description.)
 
  • #460
stevendaryl said:
I don't understand this business about being part of the probability space. Let ##P_A(\vec{a}, \alpha, \lambda)## be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis ##\vec{a}##, and that ##\alpha## represents other details of Alice's detector (above and beyond orientation), and ##\lambda## represents details about the production of the twin pair. Similarly, let ##P_B(\vec{b}, \beta, \lambda)## be the probability that Bob will measure spin-up for his particle, given that he measures along axis ##\vec{b}##, and that ##\beta## represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality: that ##P_A## doesn't depend on ##\vec{b}## and ##P_B## doesn't depend on ##\vec{a}##.

But the prediction of QM for EPR is perfect anti-correlation, which means:

If Alice measures spin-up at angle ##\vec{a}##, then Bob will measure spin-down at angle ##\vec{a}##. That seems to me to mean that the probabilities must be 0 or 1:

If ##P_A(\vec{a}, \alpha, \lambda)## is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice-versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given ##\lambda##, which in turn implies that the details ##\alpha## and ##\beta## don't matter.

I don't think that the non-contextuality is an assumption, I think it follows from the perfect anti-correlations.
My point is that probabilities are irrelevant after the preparation. Suppose that the correlation has to be 1 or -1 (depending on what is being conserved). Whatever happens, the required correlations (coincidences or anti-coincidences) will become fact. The result has already been set up. Crudely, there is a conspiracy where each detector is instructed to ignore everything else and click/not click as required. Non-locality is not an issue.

(I have to go to work, so I won't be here for some hours now.)
 
  • #461
Hornbein said:
Don't underestimate Izzy Junior.

That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.

— Isaac Newton, Letters to Bentley, 1692/3
In the Principia, he carefully avoids any trace of making things appear weird. It would be interesting to know what he found so greatly absurd about ''action at a distance'', but I suppose that the margin of his letter was too small to contain his arguments...

In EPR we have no faster-than-light communication. Thus the nonlocality there is only ''passion at a distance''. Would this have been as absurd for him? We'll never know.
 
  • #462
rubi said:
mathematical statements aren't assumed to be true until they are proven wrong
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
 
  • #463
A. Neumaier said:
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.
 
  • #464
rubi said:
Do you doubt the fact that Bell makes such an assumption?

I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function ##F(\lambda, \vec{a})## giving ##\pm 1## for every possible spin direction ##\vec{a}##. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.
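For concreteness, here is a minimal sketch (mine, not from the thread) of such a deterministic function: take ##\lambda## to be a random unit vector fixed at the source and ##F(\lambda, \vec{a}) = \mathrm{sign}(\vec{a}\cdot\lambda)##. This reproduces the perfect anti-correlation at equal settings, but gives Bell's linear correlation ##-1 + 2\theta/\pi## rather than the quantum ##-\cos\theta## at other angles:

```python
import numpy as np

rng = np.random.default_rng(0)

def A(a, lam):
    # Alice's deterministic outcome: the sign of the projection of the
    # hidden unit vector lam onto her measurement axis a.
    return 1 if np.dot(a, lam) >= 0 else -1

def B(b, lam):
    # Bob's outcome is the opposite of what Alice would get at the same
    # setting, so the anti-correlation at b == a is automatic.
    return -A(b, lam)

def E(a, b, n=100_000):
    # Monte Carlo estimate of the correlation <A*B> over hidden vectors
    # lam drawn uniformly from the unit sphere.
    lams = rng.normal(size=(n, 3))
    lams /= np.linalg.norm(lams, axis=1, keepdims=True)
    return np.mean([A(a, l) * B(b, l) for l in lams])

a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3), 0.0])
print(E(a, a))  # -1.0: perfect anti-correlation at equal settings
print(E(a, b))  # about -1/3 = -1 + 2*(pi/3)/pi, not -cos(60 deg) = -0.5
```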

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong.

Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that--superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
 
  • #465
rubi said:
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.

Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
 
  • #466
A. Neumaier said:
Yes, but you'd nevertheless replace your utterly wrong statement [it asserts something completely different!] by one that really expresses what you meant.
I think you confused "aren't assumed to be true" with "are assumed to be false". Not assuming X to be true isn't the same as assuming X to be false.

stevendaryl said:
I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function ##F(\lambda, \vec{a})## giving ##\pm 1## for every possible spin direction ##\vec{a}##. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.
I don't see how you can doubt that this assumption is made. Khrennikov has pointed it out clearly. If you are not satisfied with his presentation, you can also check out this paper:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.48.291
It proves that Bell's factorization criterion is exactly equivalent to the existence of a joint probability distribution. If you reject the proof, you should be able to point out a mistake.

Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that--superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality. I'm just pointing out that this is not backed up by the mathematics, so you shouldn't be claiming it as if it were a fact, rather than an opinion. I don't want to throw in another model. I'm happy with QM as it is.

stevendaryl said:
Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
There cannot be a local realistic explanation, since local realism is usually defined to mean the Bell factorization criterion. Theories satisfying the factorization criterion are definitely ruled out. But apparently you are claiming that it is a fact that no contextual theory can be local either.
 
  • #467
rubi said:
I don't see how you can doubt that this assumption is made.

Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.

Khrennikov has pointed it out clearly.

I don't agree.

Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality.

I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...

I don't want to throw in another model. I'm happy with QM as it is.

QM clearly works as a recipe for making predictions. If you're happy with that, fine.
 
  • #468
stevendaryl said:
Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.
What more general formulation doesn't use Bell's factorization criterion?

I don't agree.
Well, what do you say about Fine's paper that I quoted? Do you think his proof is erroneous?

I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...
Then I probably misunderstood you. I thought you rejected a common cause in the intersection of the past light cones. If that is not the case, then I'm happy.

QM clearly works as a recipe for making predictions. If you're happy with that, fine. But the business about the possibility of contextual hidden variables does not in any way help understanding QM. I don't see any point in such papers.
I think it improves our understanding quite a bit, since it makes it clearer what exactly the implications of Bell's inequality and its violation are for physics. Knowing that non-contextuality is a crucial assumption in Bell's theorem changes the way we think about the theorem. I think this fact is not widely known in the physics community and should be pointed out more clearly in presentations of Bell's theorem.
 
  • #469
rubi said:
I think it improves our understanding quite a bit, since it makes it clearer what exactly the implications of Bell's inequality and its violation are for physics.

I don't agree with that, at all.
 
  • #470
stevendaryl said:
I don't agree with that, at all.
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly, and most physicists don't even realize that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known, clearly improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.
 
  • #471
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly, and most physicists don't even realize that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known, clearly improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.

I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
 
  • #472
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
It leaves open the possibility that contextual models can be local and admit common causes, which I thought you had rejected initially.
 
  • #473
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.

I don't understand the point of considering three probability distributions: ##\mathrm dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2})##, ##\mathrm dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3})##, ##\mathrm dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})##. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters ##\lambda## that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.
 
  • #474
stevendaryl said:
I don't understand the point of considering three probability distributions: ##\mathrm dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2})##, ##\mathrm dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3})##, ##\mathrm dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})##. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters ##\lambda## that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.

To me, rather than talking about different probability distributions for each possible future experiment, I would think that there would be three different processes with associated probabilities:
  1. A twin pair is produced in some state, characterized by a parameter ##\lambda## according to a probability distribution ##P(\lambda)##.
  2. A particle with associated parameter ##\lambda## interacts with Alice's device, which is characterized by an orientation ##\vec{a}## and perhaps other variables, ##\alpha##. The probability of Alice getting +1 would be given by a probability ##P_A(\lambda, \vec{a}, \alpha)##.
  3. A particle with associated parameter ##\lambda## interacts with Bob's device, which is characterized by an orientation ##\vec{b}## and perhaps other variables, ##\beta##. The probability of Bob getting +1 would be given by a probability ##P_B(\lambda, \vec{b}, \beta)##.
 
  • #475
stevendaryl said:
I don't understand the point of considering three probability distributions: ##\mathrm dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2})##, ##\mathrm dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3})##, ##\mathrm dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})##.
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##. Now the fact that all the ##p_i## arise from a single joint probability space is equivalent to Bell's factorization criterion, which implies Bell's inequality. Thus a violation of Bell's inequality falsifies Bell's factorization criterion, but at the same time falsifies non-contextuality. You can't falsify the factorization criterion without falsifying non-contextuality.
 
  • #476
rubi said:
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##.

But as I said, there are two different processes involved in Alice getting a measurement result: (1) the production of a twin pair with parameter ##\lambda##, and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
 
  • #477
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly and most physicists don't even understand that it is non-trivial.

That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.

This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
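To make the "corollary" direction concrete, here is a small numerical sketch (my addition, not from the post): draw an arbitrary joint distribution over the four potential outcomes ##(a_1, a_2, b_1, b_2)##, form the pairwise correlations as marginals in the sense above, and observe that the CHSH combination can never exceed 2:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary underlying joint distribution P(a1, a2, b1, b2) over the
# 16 assignments of +/-1 outcomes to both settings of both parties.
outcomes = list(itertools.product([1, -1], repeat=4))
p = rng.random(len(outcomes))
p /= p.sum()

def corr(x, y):
    # The correlation E(a_x b_y), obtained as a marginal of the joint
    # distribution: sum over all outcome tuples, weighted by p.
    return sum(pi * o[x] * o[2 + y] for pi, o in zip(p, outcomes))

S = corr(0, 0) + corr(0, 1) + corr(1, 0) - corr(1, 1)
print(abs(S) <= 2)  # True for every p: each tuple contributes +/-2 to S
```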
 
  • #478
stevendaryl said:
But as I said, there are two different processes involved in Alice getting a measurement result: (1) The production of a twin pair with parameter \lambda, and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

wle said:
That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.
That's right. Bell's factorization criterion is equivalent to the existence of a joint probability distribution.

This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions). Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.
 
  • #479
rubi said:
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

Then I don't really understand the point. What is the point of computing these ##P_i##?

What I assumed is that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand the relevance of the ##P_i## you're describing to such a theory.
 
  • #480
stevendaryl said:
Then I don't really understand the point. What is the point of computing these ##P_i##?

What I assumed is that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand the relevance of the ##P_i## you're describing to such a theory.
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these situations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example, if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in different ##P_i## distributions for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either. So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.
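A quick numerical check of these angles (my sketch, using the QM singlet correlation ##E(\theta_a, \theta_b) = -\cos(\theta_a - \theta_b)## and Bell's original inequality ##|E(\theta_1,\theta_2) - E(\theta_1,\theta_3)| \le 1 + E(\theta_2,\theta_3)##, which assumes perfect anti-correlation at equal settings):

```python
import numpy as np

def E(theta_a, theta_b):
    # Singlet-state correlation predicted by QM for spin measurements
    # along axes at angles theta_a, theta_b (in degrees).
    return -np.cos(np.radians(theta_a - theta_b))

t1, t2, t3 = 0.0, 45.0, 90.0
lhs = abs(E(t1, t2) - E(t1, t3))  # ~0.707
rhs = 1 + E(t2, t3)               # ~0.293
print(lhs <= rhs)  # False: the three P_i admit no joint distribution
```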
 
  • #481
rubi said:
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these situations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example, if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in different ##P_i## distributions for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either. So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.

The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not. Okay, I'll buy that. Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
 
  • #482
When you have eliminated every possibility, you have to take what is left quite seriously. The issue as I see it is that the arguments so far seem to be all or nothing. Either the direction is determined or it isn't. What about considering it's a bit of both? Perhaps spin is fixed in one direction but not the other two. Would this lead to the correlations we observe?
 
  • #483
stevendaryl said:
The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not.
That's right, although I would put it slightly differently: Locality means that whenever an event A is the cause of an event B, there must be a future-directed causal curve connecting these events. So the question is really which events are to be considered as causes or effects. In the non-contextual case, this is quite clear and leads to Bell's factorization criterion. In the contextual case, it is not that obvious. At least QM is silent on it.

Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
Or equivalently: "Every plausible local theory is non-contextual." We will probably disagree here, but at least I find it plausible that contextual theories can also be local, so I would tend to believe that the conjecture is wrong. However, this is only my opinion.
 
  • #484
rubi said:
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions).

This is fine.

Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.

That doesn't follow. I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality. Now it so happens that the factorisation condition Bell ends up with is mathematically equivalent to having a joint underlying probability distribution which you call noncontextuality, so noncontextuality implies the same Bell inequalities as Bell locality does. That does not mean Bell inadvertently assumes noncontextuality. What it means is that if you assume Bell locality then it makes no difference to the end result if you additionally assume or don't assume noncontextuality. Or put differently: if I give you a model for some correlations that is Bell local but it isn't obviously noncontextual and you like noncontextuality, then you will always be able to change the model so that it is noncontextual and still makes the same predictions.

Something similar happens with determinism in Bell's theorem: if you have a local stochastic model for a set of correlations then it's known that you can always turn it into a local deterministic model just by adding additional hidden variables. This similarly doesn't mean that determinism is a "hidden assumption" in Bell's theorem. It means that determinism is a redundant assumption that does not affect the end result either way.
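A minimal sketch of that construction (my code; the response function is made up for illustration): given a local stochastic response probability, enlarge the hidden state with a uniform random number fixed at the source, and the outcome becomes a deterministic function with identical statistics:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_up(x, lam):
    # A made-up local stochastic response: the probability that Alice
    # outputs +1, given her setting x and a hidden variable lam in [-1, 1].
    return 0.5 * (1 + lam * np.cos(x))

def A_stochastic(x, lam):
    # Stochastic model: the outcome is drawn afresh on every run.
    return 1 if rng.random() < p_up(x, lam) else -1

def A_deterministic(x, lam, u):
    # Deterministic completion: the extra hidden variable u, uniform on
    # [0, 1] and fixed at the source, absorbs all the randomness.
    return 1 if u < p_up(x, lam) else -1

lam, u = rng.uniform(-1, 1), rng.random()
print(A_deterministic(0.3, lam, u))  # reproducible for fixed (lam, u)
```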
 
  • #485
wle said:
I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality.
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.
 
  • #486
  • #487
Those 2 articles talk about a loophole that is supposed to have been closed already...
 
  • #488
rubi said:
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.

Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
 
  • #489
wle said:
Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma,\mu)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables. A particle cannot be both spin up and spin left. The spin observables can't be modeled on one probability space.
 
  • #490
rubi said:
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma,\mu)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables.

You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.
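A short numerical sketch of exactly this formula (my code; the standard CHSH settings are an assumption of the sketch, not taken from the post): build ##P(ab \mid xy)## from the Born rule for the singlet state and evaluate the CHSH combination:

```python
import numpy as np

# Pauli matrices and the singlet state |psi-> = (|01> - |10>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def proj(op, s):
    # Projector onto the eigenvalue-s (s = +1 or -1) eigenspace of a
    # two-outcome observable op with eigenvalues +/-1.
    return (np.eye(2) + s * op) / 2

def P(a, b, A, B):
    # Born rule: P(ab|xy) = Tr[(M_{a|x} tensor N_{b|y}) rho_AB].
    return np.real(np.trace(np.kron(proj(A, a), proj(B, b)) @ rho))

def E(A, B):
    # Correlation E(xy) = sum over a, b of a*b*P(ab|xy).
    return sum(a * b * P(a, b, A, B) for a in (1, -1) for b in (1, -1))

A0, A1 = sz, sx
B0, B1 = (sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)
S = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)
print(S)  # -2*sqrt(2): |S| exceeds the bound of 2 for joint distributions
```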
 
  • #491
wle said:
You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.

Well, the assumption that Bell makes that I think rubi is objecting to is factorizability:

$$P(ab \mid xy) = \sum_\lambda P(\lambda)\, P(a\mid \lambda x)\, P(b\mid \lambda y)$$
 
  • #492
wle said:
You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.
It is you who has misunderstood something. Alice's and Bob's observables commute and thus a joint distribution exists for them. However, Alice's observables ##A_a## don't commute among each other and neither do Bob's. It is completely uncontroversial that non-commuting observables can't be represented on a joint probability space. The probabilities won't add up to ##1## in general. (Also, using POVMs is completely unnecessary here.)

Edit: To put it differently: Bell assumes that ##A_a## and ##B_b## are random variables on a probability space ##(\Lambda,\Sigma,\mu)##. Then you can take random vectors like ##X=(A_1, A_2, B_1, B_2)## and get joint probability distributions ##P_X(A) =\mu(X^{-1}(A))##. The fact that the ##A_a## and ##B_b## are random variables on one space entails this already.
 
  • #493
rubi said:
However, Alice's observables ##A_a## don't commute among each other and neither do Bob's. It is completely uncontroversial that non-commuting observables can't be represented on a joint probability space.

Bell's theorem does not depend on an assumption here that is different from quantum mechanics. Like I said, Bell's theorem only assumes a priori that it is meaningful to talk about the conditional probabilities ##P(ab \mid xy)##, according to some theory, of obtaining different results depending on different possible measurements. This in itself is not in conflict with quantum mechanics, like I said in my previous post. Bell does not assume, a priori, that there is a joint underlying probability distribution ##P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc)##. In the end, it turns out that for any model satisfying the locality constraint that Bell arrives at (which stevendaryl posted) you can always construct a joint probability distribution for all the measurement outcomes, but this is a corollary of Bell's definition, not an additional assumption.
 
  • #494
wle said:
Bell's theorem does not depend on an assumption here that is different from quantum mechanics. Like I said, Bell's theorem only assumes a priori that it is meaningful to talk about the conditional probabilities ##P(ab \mid xy)##, according to some theory, of obtaining different results depending on the choices of measurements. This is perfectly consistent with quantum mechanics, like I said in my previous post. Bell does not assume, a priori, that there is a joint underlying probability distribution ##P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc)##. In the end, it turns out that for any model satisfying the locality constraint that Bell arrives at (which stevendaryl posted) you can always construct a joint probability distribution for all the measurement outcomes, but this is a corollary of Bell's definition, not an additional assumption.
Repeating it doesn't make it true.
Bell clearly assumes that the variables ##A_a##, ##B_b## are random variables on one probability space (and thus joint probabilities exist). Only then can you write down Bell's factorization condition. Quantum mechanics clearly says that no joint probability distribution for all these variables exists. (QM is also not relevant for the proof of Bell's inequality.)
It feels like we're going in circles.

Do you deny that ##X_x:\Lambda\rightarrow\{-1,1\}## are random variables on a probability space ##(\Lambda,\Sigma,\rho(\lambda)\mathrm d\lambda)##? I don't see how you can seriously deny that, and if you do, then I don't know what else I can say.
 
  • #495
rubi said:
Repeating it doesn't make it true.
Bell clearly assumes that the variables ##A_a##, ##B_b## are random variables on one probability space (and thus joint probabilities exist).

I've seen more than one version of the derivation of Bell's theorem even by Bell, and they don't simply assume the "random variables on one probability space" that you refer to. The closest I've seen to this is the functions ##A(\vec{a}, \lambda)## and ##B(\vec{b}, \lambda)## appearing in Bell's original 1964 paper and similar derivations, but even there: 1) these are deterministic mappings, not random variables, and 2) assuming locality, Bell inferred that these functions should exist, via the EPR argument, from the fact that quantum physics predicts perfectly correlated and anticorrelated results for certain measurement choices. He did not simply assume that they should exist a priori.
 
  • #496
wle said:
Repeating things you read on the internet doesn't make them true.
Your style of argumentation is really annoying. Can you please stop treating me like an idiot who just repeats things from the internet? I obtained my information from books and papers and I have worked hard to understand it. I'm not an amateur.

I've seen more than one version of the derivation of Bell's theorem even by Bell, and they don't simply assume the "random variables on one probability space" that you refer to. The closest I've seen to this is the functions ##A(\vec{a}, \lambda)## and ##B(\vec{b}, \lambda)## appearing in Bell's original 1964 paper and similar derivations, but even there: 1) these are deterministic mappings, not random variables
The maps ##\lambda\mapsto A(a,\lambda)## are clearly random variables. They map from one probability space to a measurable space. This makes them random variables by definition.

2) assuming locality, Bell inferred that these functions should exist, via the EPR argument, from the fact that quantum physics predicts perfectly correlated and anticorrelated results for certain measurement choices. He did not simply assume that they should exist a priori.
Locality is the assumption that ##A_a## does not depend on ##b## and vice versa. Locality does not entail that these variables must be random variables on the same probability space. This is an extra assumption.

I don't have any more time for this, since apparently, we don't even agree on the very basics of probability theory.
 
  • #497
rubi said:
wle said:
rubi said:
Repeating it doesn't make it true.
Repeating things you read on the internet doesn't make them true.

Your style of argumentation is really annoying. Can you please stop treating me like an idiot who just repeats things from the internet? I obtained my information from books and papers and I have worked hard to understand it. I'm not an amateur.

Has it occurred to you to maybe do me the same courtesy?

Locality does not entail that these variables must be random variables on the same probability space. This is an extra assumption.

No, like I said, it is inferred from the EPR argument and the fact that quantum physics predicts perfect correlations. And this doesn't even matter since, if you find Bell's original argument based on EPR too handwavy, Bell described much more careful formulations of his theorem in the 1970s and 1980s which clearly don't depend on this "same probability space" assumption you keep bringing up.

I don't have any more time for this, since apparently, we don't even agree on the very basics of probability theory.

No, apparently we disagree on how Bell's theorem is derived.
 
  • #498
wle said:
Has it occurred to you to maybe do me the same courtesy?
Well, you kept making one wrong statement after another, while accusing me of having a misunderstanding. Naturally, I became annoyed.

No, like I said, it is inferred from the EPR argument and the fact that quantum physics predicts perfect correlations.
You can't infer from the EPR argument that the hidden variables must be non-contextual. This is a non-trivial assumption.

And this doesn't even matter since, if you find Bell's original argument based on EPR too handwavy, Bell described much more careful formulations of his theorem in the 1970s and 1980s which clearly don't depend on this "same probability space" assumption you keep bringing up.
Sooner or later, you will have to introduce random variables if you want to calculate the correlations that appear in the inequality. These random variables are always defined on the same probability space (I keep bringing it up, because it is crucial). Nevertheless, there are of course other approaches and they need to be treated differently. Khrennikov treats them in his book, but I don't want to start another topic as long as we haven't settled on the case of Bell's inequality yet.
 
  • #499
rubi said:
You can't infer from the EPR argument that the hidden variables must be non-contextual. This is a non-trivial assumption.

Do you mean contextual, as what is normally meant when people discuss the Kochen-Specker theorem? https://en.wikipedia.org/wiki/Kochen–Specker_theorem
 
  • #500
atyy said:
Do you mean contextual, as what is normally meant when people discuss the Kochen-Specker theorem? https://en.wikipedia.org/wiki/Kochen–Specker_theorem
I use it like Khrennikov, who uses it as follows: A theory is non-contextual if all observables can be modeled as random variables on one probability space, independent of the experimental setup. Otherwise, it is contextual. Kochen and Specker define non-contextuality for theories defined in the Hilbert space framework. However, if such theories were non-contextual according to KS, then they would also be non-contextual according to Khrennikov, so Khrennikov's definition is in a sense more general, as it allows for theories that are not necessarily modeled in the Hilbert space framework. For example, if a theory exceeded the Tsirelson bound, it would have to be contextual, but couldn't be modeled in a Hilbert space. (However, in general, theories that don't exceed the Tsirelson bound need not have a Hilbert space model either. At least I'm not aware of a proof.)
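As an illustration of that last point (my sketch, not rubi's): the hypothetical Popescu-Rohrlich "box" produces no-signalling correlations reaching CHSH = 4, beyond the Tsirelson bound ##2\sqrt{2}##, so in this terminology it would be contextual and yet admit no Hilbert space model:

```python
import numpy as np

def E(x, y):
    # PR-box correlations: outcomes are perfectly correlated unless both
    # parties choose setting 1, in which case they are anti-correlated.
    return -1 if (x, y) == (1, 1) else 1

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S, 2 * np.sqrt(2))  # 4 > 2.828...: beyond any Hilbert space model
```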
 
