Quantum mechanics is not weird, unless presented as such

Summary:
Quantum mechanics is often perceived as "weird," a notion that some argue hinders real understanding, particularly for students. Critics of this characterization suggest that quantum mechanics can be derived from reasonable assumptions without invoking measurement devices, a requirement they claim is essential for a valid derivation. The discussion highlights the inadequacy of certain interpretations, such as the ensemble interpretation, which relies on observations that may not have existed in the early universe. Participants emphasize the need for clearer explanations of quantum mechanics that bridge the gap between complex theory and public understanding. Ultimately, while quantum mechanics may seem strange, especially to laypersons, it can be presented in a way that aligns more closely with classical mechanics.
  • #451
stevendaryl said:
The assumption that it was true beforehand, and that Alice's measurement only revealed its truth, is a hidden-variables theory, which is ruled out by Bell's theorem.
That's not true. Bell's theorem rules out non-contextual hidden variables. This is a critical assumption in the derivation of the inequality.
 
  • #452
rubi said:
That's not true. Bell's theorem rules out non-contextual hidden variables. This is a critical assumption in the derivation of the inequality.

Well, I'm not sure what the "non-contextual" adjective implies here. What would be an example of a contextual hidden-variables theory?
 
  • #453
stevendaryl said:
Well, I'm not sure what the "non-contextual" adjective implies here. What would be an example of a contextual hidden-variables theory?
Non-contextual means that the hidden variables can be modeled on a single joint probability space. One could call QM itself a contextual hidden variable theory.
This is a nice introduction: http://www.mdpi.com/1099-4300/10/2/19/pdf
 
  • #454
rubi said:
Non-contextual means that the hidden variables can be modeled on a single joint probability space. One could call QM itself a contextual hidden variable theory.
This is a nice introduction: http://www.mdpi.com/1099-4300/10/2/19/pdf

I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by exhibiting an explicit hidden-variables model that reproduces the statistics of EPR.
 
  • #455
stevendaryl said:
I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by exhibiting an explicit hidden-variables model that reproduces the statistics of EPR.

I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
 
  • #456
Isn't it just another way of saying what Peres says? Changing the observable bases changes the experiment, so the anticorrelation arises for one experiment and not another (contextually). If you re-define "reality" to be the probability spectrum for specific non-local experiments then sure, reality isn't dead, but it's irreducibly setup-dependent (not so "real")...
 
  • #457
stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
The end result must conserve momentum, so the only detail that matters physically is that. The arrangements do seem to be irrelevant.

Envisage this - when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide if the detectors will click, regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.
 
  • #458
Mentz114 said:
The end result must conserve momentum, so the only detail that matters physically is that.

Well, angular momentum in the case that I'm talking about.

Mentz114 said:
The arrangements do seem to be irrelevant.

Envisage this - when the state is prepared and we consider the detectors as part of that, we can use local hidden variables that decide if the detectors will click, regardless of any other details. So it is decided already and no kind of random intervention can change it. But the context, i.e. the detectors, must be part of the probability space.

I don't understand this business about being part of the probability space. Let ##P_A(\vec{a}, \alpha, \lambda)## be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis ##\vec{a}##, where ##\alpha## represents other details of Alice's detector (above and beyond orientation) and ##\lambda## represents details about the production of the twin pair. Similarly, let ##P_B(\vec{b}, \beta, \lambda)## be the probability that Bob will measure spin-up for his particle, given that he measures along axis ##\vec{b}##, where ##\beta## represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality: that ##P_A## doesn't depend on ##\vec{b}## and ##P_B## doesn't depend on ##\vec{a}##.

But the prediction of QM for EPR is perfect anti-correlation, which means that:

If Alice measures spin-up at angle ##\vec{a}##, then Bob will measure spin-down at angle ##\vec{a}##. That seems to me to mean that the probabilities must be 0 or 1:

If ##P_A(\vec{a}, \alpha, \lambda)## is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given ##\lambda##, which in turn implies that the details ##\alpha## and ##\beta## don't matter.

I don't think that non-contextuality is an assumption; I think it follows from the perfect anti-correlations.
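To spell out that step (a sketch; ##\rho(\lambda)## for the hidden-variable distribution is my notation, not anything from earlier in the thread): perfect anti-correlation at equal settings means
$$\int \rho(\lambda)\, P_A(\vec{a},\alpha,\lambda)\, P_B(\vec{a},\beta,\lambda)\, d\lambda = 0 \quad\text{and}\quad \int \rho(\lambda)\, \bigl(1-P_A\bigr)\bigl(1-P_B\bigr)\, d\lambda = 0.$$
Since both integrands are non-negative, ##P_A P_B = 0## and ##(1-P_A)(1-P_B) = 0## for (almost) every ##\lambda##. If ##0 < P_A < 1## for some ##\lambda##, the first condition forces ##P_B = 0##, and then the second reads ##(1-P_A)\cdot 1 = 0##, a contradiction. So ##P_A, P_B \in \{0, 1\}## almost everywhere.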
 
  • #459
stevendaryl said:
I've read such papers before (maybe that very paper), and it doesn't do a thing for me. I don't see how it contributes anything to the discussion of Bell's theorem. If Bell made an unwarranted assumption about the existence of a single joint probability space, so that his proof of the nonexistence of hidden variables is incorrect, then I would like to see that loophole exploited by exhibiting an explicit hidden-variables model that reproduces the statistics of EPR.
Do you doubt the fact that Bell makes such an assumption?

Bell's proof is not incorrect. His theorem excludes a wide range of hidden variable theories and proves that QM is definitely non-classical, since classical theories are non-contextual. This fact is undisputed. The theorem is just not strong enough to exclude common causes. Of course you can still be of the opinion that QM is non-local. All I'm saying is that this is not backed up by mathematics and therefore remains a belief until you figure out how to prove Bell's theorem without assuming a joint probability space.

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong. Nevertheless, I suppose you could take the quantum state to be a contextual hidden variable. If you don't like this idea, it still doesn't free you from the burden of proof.

stevendaryl said:
I think I understand the idea behind "contextuality". Suppose that you have a source of coins that sends them spinning on edge toward you. When a coin reaches you, you slap it to the floor and check whether it's "heads" or "tails". It might be a mistake to assume that there is a "hidden variable" in the coin that determines whether it ends up heads or tails. The act of "measurement" in this case creates the measurement result. If the slapping action were slightly different, you might have ended up with a different result.

On the other hand, if we had a pair of coins sent spinning in opposite directions, such that the measurement of one coin always produced the opposite of the measurement of the other coin, then we would suspect that the details of the measurement act were irrelevant. So we would suspect that this anti-correlation was due to noncontextual hidden variables (to use the physics terminology). That's the case with EPR measurements (in the case of anti-correlated spin-1/2 particles), when Alice and Bob both measure spin relative to the same axis. The details of the entire measurement setup seem irrelevant, because if Alice gets spin-up, then regardless of the details of Bob's apparatus, he will get spin-down.
What if the coins are magnetized (heads = N, tails = S) and instead of slapping down the coin, Alice and Bob use bar magnets, which they can arrange freely either in the NS or the SN direction. If they compare their results, then they will find that the results are either correlated or anti-correlated, depending on whether they chose the same arrangement or not. (Now of course, one would have to check the inequality in order to find out whether this is really contextual or admits a joint probability space description.)
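A toy simulation of this variant (my own sketch of the setup as described, with heads = N and the pair anti-magnetized; none of this code is from the thread) shows the claimed pattern and also why it is unproblematic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden variable: Alice's coin magnetization +1 (heads = N) or -1;
# Bob's coin of the pair carries the opposite magnetization.
n = 100_000
mu = rng.choice([1, -1], size=n)

def outcome(magnetization, arrangement):
    """Deterministic reading: arrangement +1 (NS) keeps it, -1 (SN) flips it."""
    return magnetization * arrangement

for s_a, s_b in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    corr = np.mean(outcome(mu, s_a) * outcome(-mu, s_b))
    print(f"arrangements ({s_a:+d}, {s_b:+d}): E = {corr:+.1f}")
# Same arrangement -> E = -1, opposite arrangements -> E = +1, i.e.
# correlated or anti-correlated depending on the chosen arrangements.
```

Being deterministic in the pair (magnetization, arrangement), this model trivially admits a single joint probability space, so it satisfies Bell's inequality.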
 
  • #460
stevendaryl said:
I don't understand this business about being part of the probability space. Let P_A(\vec{a}, \alpha, \lambda) be the probability that Alice will measure spin-up for her particle, given that she measures spin along axis \vec{a}, and that \alpha represents other details of Alice's detector (above and beyond orientation), and \lambda represents details about the production of the twin pair. Similarly, let P_B(\vec{b}, \beta, \lambda) be the probability that Bob will measure spin-up for his particle, given that he measures along axis \vec{b}, and that \beta represents additional details about Bob's detector. By assuming that the probabilities depend on these particular parameters, where have I made an assumption about the existence of a single joint probability space? What does "contextuality" mean, other than that the outcome might depend both on facts about the particle and facts about the device? The only assumption, it seems to me, is locality, that P_A doesn't depend on \vec{b} and P_B doesn't depend on \vec{b}.

But the predictions of QM for EPR is perfect anti-correlation. Which means that:

If Alice measures spin-up at angle \vec{a}, then Bob will measure spin-down at angle \vec{a}. That seems to me to mean that the probabilities must be 0 or 1:

If P_A(\vec{a}, \alpha, \lambda) is nonzero, then that means that Alice has a chance of measuring spin-up. But if Alice measures spin-up, then Bob has no chance of measuring spin-up at that angle. So Bob's probabilities must be zero whenever Alice's are nonzero, and vice-versa. That's only possible if the probabilities are all zero or one. That means that the outcome is actually deterministic, given \lambda, which in turn implies that the details \alpha and \beta don't matter.

I don't think that the non-contextuality is an assumption, I think it follows from the perfect anti-correlations.
My point is that probabilities are irrelevant after the preparation. Suppose that the correlation has to be 1 or -1 (depending on what is being conserved). Whatever happens, the required correlations (coincidences or anti-coincidences) will become fact. The result has already been set up. Crudely, there is a conspiracy where each detector is instructed to ignore everything else and click/not click as required. Non-locality is not an issue.

(I have to go to work so I won't be here for some hours now.)
 
  • #461
Hornbein said:
Don't underestimate Izzy Junior.

That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro' a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it. [4]

— Isaac Newton, Letters to Bentley, 1692/3
In the Principia, he carefully avoids any trace of making things appear weird. It would be interesting to know what he found so greatly absurd about "action at a distance", but I suppose that the margin of his letter was too small to contain his arguments...

In EPR we have no faster-than-light communication. Thus the nonlocality there is only "passion at a distance". Would this have been as absurd for him? We'll never know.
 
  • #462
rubi said:
mathematical statements aren't assumed to be true until they are proven wrong
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
 
  • #463
A. Neumaier said:
?

Mathematical statements are true if proved from the assumptions that are part of the statement (or the underlying theory).
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.
 
  • #464
rubi said:
Do you doubt the fact that Bell makes such an assumption?

I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function ##F(\lambda, \vec{a})## giving ##\pm 1## for every possible spin direction ##\vec{a}##. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.
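As an illustration of such a deterministic function (a toy model of my own, not one from Bell's paper: ##\lambda## is a uniformly random direction and ##F(\lambda, \vec{a}) = \mathrm{sign}(\cos(\lambda - a))##), the sketch below reproduces perfect anti-correlation at equal angles but gives a linear rather than cosine dependence in between:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden variable: a uniformly random angle lambda in [0, 2*pi).
n = 200_000
lam = rng.uniform(0, 2 * np.pi, n)

def F(lam, angle):
    """Deterministic +/-1 outcome for a measurement along `angle`."""
    return np.sign(np.cos(lam - angle))

# Bob's particle carries the opposite hidden outcome, so his result is -F.
for a, b in [(0.0, 0.0), (0.0, np.pi / 4), (0.0, np.pi / 2)]:
    E_model = np.mean(F(lam, a) * -F(lam, b))
    E_qm = -np.cos(a - b)
    print(f"angles ({a:.2f}, {b:.2f}): model E = {E_model:+.3f}, QM E = {E_qm:+.3f}")
# Equal angles match (E = -1), but at 45 degrees the model gives about -0.5
# instead of the quantum -0.707: the deterministic shortcut is consistent
# with perfect anti-correlation, yet cannot reproduce all QM statistics.
```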

I don't need to give you a counterexample, since mathematical statements aren't assumed to be true until they are proven wrong.

Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that: superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
 
  • #465
rubi said:
Right, but the Riemann hypothesis isn't true just because nobody has found a counterexample yet. The Riemann hypothesis is true if it can be proved. Until then, we just don't know the truth value. Stevendaryl seems to assume that QM is non-local based on the fact that I haven't given him a convincing counterexample, even though the burden of proof is on him.

Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
 
  • #466
A. Neumaier said:
Yes, but you'd nevertheless replace your utterly wrong statement [it asserts something completely different!] by one that really expresses what you meant.
I think you confused "aren't assumed to be true" with "are assumed to be false". Not assuming X to be true isn't the same as assuming X to be false.

stevendaryl said:
I doubt that such an assumption is involved. Bell in his derivation of his inequalities makes the assumption that there is a deterministic function ##F(\lambda, \vec{a})## giving ##\pm 1## for every possible spin direction ##\vec{a}##. But that's a short-cut. He could have allowed a more general dependency, but he, like Einstein, did not think it was possible to get perfect anti-correlations without such a deterministic function.
I don't see how you can doubt that this assumption is made. Khrennikov has pointed it out clearly. If you are not satisfied with his presentation, you can also check out this paper:
http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.48.291
It proves that Bell's factorization criterion is exactly equivalent to the existence of a joint probability distribution. If you reject the proof, you should be able to point out a mistake.

Well, in that case, I'm not interested. To me, the whole point of Bell's theorem is to rule out a class of models. If you want to say that there are models that are not ruled out, fine. I already knew that: superdeterministic models, retrocausal models, nonlocal models. If you want to throw in another model that is not covered, I'd like to know what it is.
Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality. I'm just pointing out that this is not backed up by the mathematics, so you shouldn't be claiming it as if it were a fact, rather than an opinion. I don't want to throw in another model. I'm happy with QM as it is.

stevendaryl said:
Only if I'm looking for proof. I'm not. I'm looking for a local, realistic explanation of quantum correlations. If you have one, I'd like to see it.
There cannot be a local realistic explanation, since local realism is usually defined to mean the Bell factorization criterion. Theories satisfying the factorization criterion are definitely ruled out. But apparently you are claiming that it is a fact that no contextual theory can be local either.
 
  • #467
rubi said:
I don't see how you can doubt that this assumption is made.

Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.

Khrennikov has pointed it out clearly.

I don't agree.

Well, you keep claiming that the violation of Bell's inequality unambiguously rules out locality.

I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...

I don't want to throw in another model. I'm happy with QM as it is.

QM clearly works as a recipe for making predictions. If you're happy with that, fine. But the business about the possibility of contextual hidden variables does not in any way help understanding QM. I don't see any point in such papers.
 
  • #468
stevendaryl said:
Because if you start with a more general formulation, then Bell's formulation seems to follow from the more general formulation, plus the requirement of perfect anti-correlations.
What more general formulation doesn't use Bell's factorization criterion?

I don't agree.
Well, what do you say about Fine's paper that I quoted? Do you think his proof is erroneous?

I'm not saying that. I'm saying that Bell's inequality violation implies nonlocality OR superdeterminism OR retrocausality OR something weirder like Many Worlds, OR...
Then I probably misunderstood you. I thought you rejected a common cause in the intersection of the past lightcones. If that is not the case, then I'm happy.

QM clearly works as a recipe for making predictions. If you're happy with that, fine. But the business about the possibility of contextual hidden variables does not in any way help understanding QM. I don't see any point in such papers.
I think it improves our understanding quite a bit, since it makes clearer what exactly the implications of Bell's inequality and its violation are for physics. Knowing that non-contextuality is a crucial assumption in Bell's theorem changes the way we think about the theorem. I think this fact is not widely known in the physics community and should be pointed out more clearly in presentations of Bell's theorem.
 
  • #469
rubi said:
I think it improves our understanding quite a bit, since it makes clearer what exactly the implications of Bell's inequality and its violation are for physics.

I don't agree with that, at all.
 
  • #470
stevendaryl said:
I don't agree with that, at all.
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly, and most physicists don't even realize that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.
 
  • #471
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly, and most physicists don't even realize that it is non-trivial. For me, this definitely improves my understanding of Bell's theorem and its implications a lot. Getting to know something about Bell's theorem that I previously had not known improves my ability to judge its implications. That's what science is about, so I don't understand why you don't acknowledge it.

I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
 
  • #472
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.
It leaves open the possibility that contextual models can be local and admit common causes, which I thought you had rejected initially.
 
  • #473
stevendaryl said:
I am not convinced that it improves anybody's ability to judge the implications of violations of Bell's inequality.

I don't understand the point of considering three probability distributions: ##dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2})##, ##dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3})##, ##dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})##. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters ##\lambda## that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.
 
  • #474
stevendaryl said:
I don't understand the point of considering three probability distributions: ##dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2})##, ##dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3})##, ##dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3})##. When a twin pair is generated, the settings to be chosen by Alice and Bob haven't been determined yet. The particles have whatever variables they have, independent of what is eventually done with them. So I don't understand the point of the three probability distributions. I would think that there is a set of possible parameters ##\lambda## that can be assigned to the pair, and that they are assigned according to some probability distribution. So the assumption of three different probability distributions, for the three different types of experiments that might be performed in the future, seems very weird to me.

To me, rather than talking about different probability distributions for each possible future experiment, I would think that there would be three different processes with associated probabilities (combined into a single prediction below):
  1. A twin pair is produced in some state, characterized by a parameter ##\lambda##, according to a probability distribution ##P(\lambda)##.
  2. A particle with associated parameter ##\lambda## interacts with Alice's device, which is characterized by an orientation ##\vec{a}## and perhaps other variables ##\alpha##. The probability of Alice getting +1 would be given by a probability ##P_A(\lambda, \vec{a}, \alpha)##.
  3. A particle with associated parameter ##\lambda## interacts with Bob's device, which is characterized by an orientation ##\vec{b}## and perhaps other variables ##\beta##. The probability of Bob getting +1 would be given by a probability ##P_B(\lambda, \vec{b}, \beta)##.
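Putting the three processes together (spelling out a step the list leaves implicit; this is just the standard factorized form, not a new claim), the model's joint prediction would be
$$P(+1, +1 \mid \vec{a}, \vec{b}) = \int P(\lambda)\, P_A(\lambda, \vec{a}, \alpha)\, P_B(\lambda, \vec{b}, \beta)\, d\lambda,$$
which is exactly what the thread has been calling Bell's factorization criterion.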
 
  • #475
stevendaryl said:
I don't understand the point of considering three probability distributions: dP_1(\lambda, \lambda_{\theta_1}, \lambda_{\theta_2}), dP_2(\lambda, \lambda_{\theta_1}, \lambda_{\theta_3}), dP_3(\lambda, \lambda_{\theta_2}, \lambda_{\theta_3}).
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##. Now the fact that all the ##p_i## arise from a single joint probability space is equivalent to Bell's factorization criterion, which implies Bell's inequality. Thus a violation of Bell's inequality falsifies Bell's factorization criterion, but at the same time falsifies non-contextuality. You can't falsify the factorization criterion without falsifying non-contextuality.
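To make the step from "single joint probability space" to Bell's inequality explicit (the standard one-line derivation, not quoted from anyone here): if three ##\pm 1##-valued random variables ##X_1, X_2, X_3## live on one probability space, then, using ##X_2^2 = 1##,
$$E[X_1 X_2] - E[X_1 X_3] = E[X_1 X_2 (1 - X_2 X_3)] \quad\Rightarrow\quad |E[X_1 X_2] - E[X_1 X_3]| \le 1 - E[X_2 X_3],$$
since ##|X_1 X_2| = 1## and ##1 - X_2 X_3 \ge 0##. It is this constraint on the three measured correlations that a violation of Bell's inequality falsifies.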
 
  • #476
rubi said:
Let's say there is a hidden variable ##\lambda## and 3 combinations of detector settings ##i=1,2,3##, for example Alice measures at angle ##\theta_i## and Bob measures at angle ##\theta_{i+1}## (where ##\theta_4:=\theta_1##). Then for each of these combinations, we collect probability distributions ##P_i(a_i,b_i)##. There may be a hidden variable ##\lambda## such that ##P_i(a_i,b_i) = \int_\Lambda p_i(\lambda,a_i,b_i)\mathrm d\lambda##.

But as I said, there are two different processes involved in Alice getting a measurement result: (1) the production of a twin pair with parameter ##\lambda##, and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
 
  • #477
rubi said:
Well, Fine, Khrennikov, and others have pointed out an assumption in Bell's theorem that is not usually stated clearly and most physicists don't even understand that it is non-trivial.

That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.

This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
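Fine's direction of that equivalence is easy to verify numerically. Below is a minimal sketch (an arbitrary toy model of mine, with a binary hidden variable and random response probabilities; only the construction, not the numbers, comes from Fine's paper): given any Bell-local model, build the joint distribution over pre-assigned outcomes ##(a_1, a_2, b_1, b_2)## and check that its marginals reproduce every ##P(ab \mid xy)##.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary Bell-local toy model: hidden variable lam in {0, 1},
# settings x, y in {0, 1}, outcomes a, b in {0, 1}.
q = np.array([0.3, 0.7])        # distribution of the hidden variable
pA = rng.uniform(size=(2, 2))   # pA[x, lam] = P(a = 1 | x, lam)
pB = rng.uniform(size=(2, 2))   # pB[y, lam] = P(b = 1 | y, lam)

def local_prob(a, b, x, y):
    """P(ab|xy) in Bell's factorized form, summed over the hidden variable."""
    pa = pA[x] if a == 1 else 1 - pA[x]
    pb = pB[y] if b == 1 else 1 - pB[y]
    return float(np.sum(q * pa * pb))

def joint(a0, a1, b0, b1):
    """Fine's joint distribution over outcomes pre-assigned to all settings."""
    factors = [pA[0] if a0 else 1 - pA[0], pA[1] if a1 else 1 - pA[1],
               pB[0] if b0 else 1 - pB[0], pB[1] if b1 else 1 - pB[1]]
    return float(np.sum(q * factors[0] * factors[1] * factors[2] * factors[3]))

# Every measured distribution is a marginal of the single joint distribution.
for x, y, a, b in itertools.product([0, 1], repeat=4):
    marginal = sum(joint(*v) for v in itertools.product([0, 1], repeat=4)
                   if v[x] == a and v[2 + y] == b)
    assert abs(marginal - local_prob(a, b, x, y)) < 1e-12
print("all 16 marginals match the Bell-local probabilities")
```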
 
  • #478
stevendaryl said:
But as I said, there are two different processes involved in Alice getting a measurement result: (1) The production of a twin pair with parameter \lambda, and (2) Alice measuring the polarization along some axis of her choosing. Why should either process depend on Bob's setting?
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

wle said:
That isn't an extra assumption. As far as I can tell, "joint probability space" just means that there is an underlying joint probability distribution and all the correlations can be obtained as marginals of this distribution, i.e., $$P(ab \mid xy) = \sum_{\hat{a}_{x}, \hat{b}_{y}} P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc) \,,$$ where, e.g., ##\hat{a}_{x}## means to sum over all combinations ##(a_{1}, \dotsc, a_{x-1}, a_{x+1}, \dotsc)## except the variable ##a_{x}## and similarly for ##\hat{b}_{y}##. (I don't find Khrennikov so clear but this is definitely what Fine was describing.) This construction is mathematically equivalent to the locality condition Bell arrived at. This means that if you can construct a Bell-local model for a given set of probabilities ##P(ab \mid xy)## then you can also construct an underlying joint probability distribution of the type defined just above, and vice versa.
That's right. Bell's factorization criterion is equivalent to the existence of a joint probability distribution.

This equivalence does not mean that the existence of an underlying joint probability distribution is an extra hidden assumption in Bell's theorem. That's just bad logic. Quite the opposite: it means that one of these assumptions is always redundant for the purpose of deriving Bell inequalities, since it is implied by the other anyway. You can derive exactly the same Bell inequalities from either starting assumption alone. You also don't get to choose which assumption to blame for a Bell violation. If a Bell inequality is violated, then both assumptions are contradicted.
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions). Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.
 
  • #479
rubi said:
It doesn't depend on Bob's setting. ##P_i(a_i,b_i)## are just the estimated probability distributions that have been measured. Alice and Bob can certainly perform these 3 experiments, collect the data and then meet and calculate the ##P_i## from their results.

Then I don't really understand the point. What is the point of computing these ##P_i##?

What I assumed was that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand the relevance of the ##P_i## you're describing to such a theory.
 
  • #480
stevendaryl said:
Then I don't really understand the point. What is the point of computing these ##P_i##?

What I assumed was that the phrase "contextual theory" refers to a way of computing probabilities that takes into account the measurement process, as opposed to revealing a pre-existing value. So I would think that that would mean describing the process by which a system to be measured (the particle produced in the twin pair) interacts with the measuring device to produce an outcome. So I don't understand the relevance of the ##P_i## you're describing to such a theory.
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these combinations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example, if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in a different ##P_i## distribution for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either.

So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.
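A quick numerical check with those angles (a sketch of mine, using only the textbook singlet prediction ##E(a, b) = -\cos(a - b)## and the joint-probability-space constraint sketched in an earlier post, rewritten for anti-correlated pairs):

```python
import numpy as np

# Settings: theta_1 = 0, theta_2 = 45, theta_3 = 90 degrees.
th = np.deg2rad([0.0, 45.0, 90.0])

def E(a, b):
    """Singlet correlation for measurements at angles a and b."""
    return -np.cos(a - b)

def P(s, t, a, b):
    """P_i(s, t): probability of outcomes s, t in {+1, -1} in context (a, b)."""
    return 0.25 * (1 + s * t * E(a, b))

# Each context on its own is a perfectly good probability distribution...
for a, b in [(th[0], th[1]), (th[1], th[2]), (th[0], th[2])]:
    total = sum(P(s, t, a, b) for s in (1, -1) for t in (1, -1))
    assert abs(total - 1) < 1e-12

# ...but with B_y = -A_y (perfect anti-correlation), E(a, b) = -E[X_a X_b],
# so |E[X1 X2] - E[X1 X3]| <= 1 - E[X2 X3] becomes Bell's inequality
# |E(t1,t2) - E(t1,t3)| <= 1 + E(t2,t3), which these correlations violate.
lhs = abs(E(th[0], th[1]) - E(th[0], th[2]))
rhs = 1 + E(th[1], th[2])
print(f"|E12 - E13| = {lhs:.4f}  vs  1 + E23 = {rhs:.4f}  -> violated: {lhs > rhs}")
```

Running this gives ##|E_{12} - E_{13}| \approx 0.7071## against ##1 + E_{23} \approx 0.2929##, so no joint probability distribution can reproduce all three ##P_i##.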
 
