Quantum mechanics is not weird, unless presented as such

Summary
Quantum mechanics is often perceived as "weird," a notion that some argue hinders true understanding, particularly for students. Critics of this characterization suggest that quantum mechanics can be derived from reasonable assumptions without invoking measurement devices, a requirement they regard as essential for a valid derivation. The discussion highlights the inadequacy of certain interpretations, such as the ensemble interpretation, which relies on observations that may not have existed in the early universe. Participants emphasize the need for clearer explanations of quantum mechanics that bridge the gap between complex theories and public understanding. Ultimately, while quantum mechanics may seem strange, especially to laypersons, it can be presented in a way that aligns more closely with classical mechanics.
  • #481
rubi said:
Let's assume we use the angles ##\theta_1=0^\circ##, ##\theta_2=45^\circ## and ##\theta_3=90^\circ##. We can prepare different experiments using these angles, for instance, Alice sets her detector to ##0^\circ## and Bob sets his detector to ##45^\circ##. There are 6 possible combinations, but 3 of them will suffice to establish the non-existence of a joint probability space. Each of these combinations determines an experimental situation (context). We can perform each of these experiments randomly and in the end collect all the data in the probability distributions ##P_i##. For example, if ##i=1## refers to Alice using ##\theta_1## and Bob using ##\theta_3##, then we could ask for the probability ##P_1(\text{Alice measures }\rightarrow,\text{Bob measures }\uparrow)##. Of course, for another ##i##, ##P_i(\uparrow,\rightarrow)## makes no sense, because the experiment might not even have a detector aligned in one of these directions, so we are forced to collect our data in different ##P_i## distributions for each ##i##. After all, you wouldn't collect the data of LIGO in the same probability distribution as the data of ATLAS either. So after we have collected the ##P_i##, we can ask whether all these ##P_i## arise from one joint probability distribution as marginals. And it turns out that this is exactly the case if and only if Bell's inequality holds.

The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not. Okay, I'll buy that. Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
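As an aside, the quoted construction is easy to check numerically. The sketch below is purely illustrative: it assumes the singlet correlations ##E(a,b)=-\cos(\theta_a-\theta_b)## and uses the perfect anticorrelation ##E(a,a)=-1## to set ##B_j=-A_j##, then asks a linear program whether the three pairwise correlations for ##0^\circ##, ##45^\circ## and ##90^\circ## can arise as marginals of one joint distribution. The variable names and the use of scipy are my own choices, not anything from the thread.

```python
# Minimal sketch: do the pairwise singlet correlations for angles 0, 45, 90
# degrees arise as marginals of ONE joint distribution over three +/-1
# variables A1, A2, A3?  (Bob's outcome is taken as -A_j via E(a,a) = -1.)
import itertools
import numpy as np
from scipy.optimize import linprog

angles = np.deg2rad([0.0, 45.0, 90.0])
pairs = [(0, 1), (0, 2), (1, 2)]

# Singlet prediction E(a,b) = -cos(theta_a - theta_b); with B_j = -A_j this
# requires E[A_i A_j] = cos(theta_i - theta_j) in the hypothetical joint space.
target = [np.cos(angles[i] - angles[j]) for i, j in pairs]

# Extreme points: the 8 deterministic assignments of (A1, A2, A3).
assignments = list(itertools.product([-1, 1], repeat=3))
V = np.array([[a[i] * a[j] for i, j in pairs] for a in assignments]).T  # 3 x 8

# Feasibility LP: weights p >= 0, sum(p) = 1, V p = target.
A_eq = np.vstack([V, np.ones(len(assignments))])
b_eq = np.append(target, 1.0)
res = linprog(c=np.zeros(len(assignments)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(assignments), method="highs")
print("joint distribution exists:", res.success)  # False: Bell's inequality is violated
```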
 
  • #482
When you have eliminated every possibility, you have to take what is left quite seriously. The issue as I see it is that the arguments so far seem to be all or nothing. Either the direction is determined or it isn't. What about considering that it's a bit of both? Perhaps spin is fixed in one direction but not the other two. Would this lead to the correlations we observe?
 
  • #483
stevendaryl said:
The issue is whether there is a sensible notion of "local" that violates Bell's factorizability condition. You seem to be saying that there is no proof that there is not.
That's right, although I would put it slightly differently: Locality means that whenever an event A is the cause of an event B, there must be a future-directed causal curve connecting these events. So the question is really which events are to be considered as causes or effects. In the non-contextual case, this is quite clear and leads to Bell's factorization criterion. In the contextual case, it is not that obvious. At least QM is silent on it.

Then it takes on the role of a conjecture: that every plausible local theory is factorizable in Bell's sense.
Or equivalently: "Every plausible local theory is non-contextual." We will probably disagree here, but at least I find it plausible that contextual theories can also be local, so I would tend to believe that the conjecture is wrong. However, this is only my opinion.
 
  • #484
rubi said:
If Bell's inequality is violated, we must reject Bell's factorization criterion, but at the same time we must reject non-contextuality (joint probability distributions).

This is fine.

Bell's criterion doesn't formalize what locality is supposed to mean in the case of contextual theories, because it can only be applied to non-contextual theories in the first place due to the equivalence to non-contextuality. Thus a violation of Bell's inequality says nothing about locality in the case of contextual theories.

That doesn't follow. I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality. Now it so happens that the factorisation condition Bell ends up with is mathematically equivalent to having a joint underlying probability distribution which you call noncontextuality, so noncontextuality implies the same Bell inequalities as Bell locality does. That does not mean Bell inadvertently assumes noncontextuality. What it means is that if you assume Bell locality then it makes no difference to the end result if you additionally assume or don't assume noncontextuality. Or put differently: if I give you a model for some correlations that is Bell local but it isn't obviously noncontextual and you like noncontextuality, then you will always be able to change the model so that it is noncontextual and still makes the same predictions.

Something similar happens with determinism in Bell's theorem: if you have a local stochastic model for a set of correlations then it's known that you can always turn it into a local deterministic model just by adding additional hidden variables. This similarly doesn't mean that determinism is a "hidden assumption" in Bell's theorem. It means that determinism is a redundant assumption that does not affect the end result either way.
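For what it's worth, the determinization trick described here is easy to illustrate. The sketch below is a toy example under an assumed, made-up response probability (every function name is hypothetical): the hidden variable is enlarged by an independent uniform seed on each side, and the stochastic response becomes a deterministic function of the enlarged variable with the same statistics. Since the seed is local to each wing and independent of the settings, locality is untouched.

```python
# Toy illustration of turning a local stochastic response into a local
# deterministic one by enlarging the hidden variable.  p_A below is an
# arbitrary made-up response probability, not any specific model.
import numpy as np

rng = np.random.default_rng(0)

def p_A(setting, lam):
    # P(Alice outputs +1 | setting, lam) for some local stochastic model
    return 0.5 * (1 + 0.8 * np.cos(setting - lam))

def stochastic_A(setting, lam):
    return 1 if rng.random() < p_A(setting, lam) else -1

def deterministic_A(setting, lam, u):
    # enlarged hidden variable (lam, u); the outcome is now a function of it
    return 1 if u < p_A(setting, lam) else -1

# Averaging over the uniform seed u reproduces the original statistics,
# since P(u < p) = p for u uniform on [0, 1].
setting, lam, N = 0.3, 1.1, 200_000
stoch = np.mean([stochastic_A(setting, lam) == 1 for _ in range(N)])
det = np.mean([deterministic_A(setting, lam, u) == 1 for u in rng.random(N)])
print(round(stoch, 3), round(det, 3), round(p_A(setting, lam), 3))
```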
 
  • #485
wle said:
I linked to references in another thread where Bell explains where the factorisation condition comes from and how it captures the idea of locality (or at least, the specific idea of locality that EPR and Bell were concerned with). The reasoning is quite general and has nothing to do with contextuality.
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.
 
  • #486
  • #487
Those 2 articles talk about a loophole that is supposed to have been closed already...
 
  • #488
rubi said:
That's not right. You need to assume a joint probability space in order to even perform the mathematical manipulations that are needed to justify the factorization criterion. Bell just assumes this implicitly.

Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
 
  • #489
wle said:
Huh? If you're referring to the finite statistics loophole like atyy says then this only really concerns experiments and it's known not to be a real issue. Considering theory only, quantum physics as a theory predicts joint conditional probability distributions for results (according to the Born rule) and these can be compared directly with the joint conditional probabilities that can be predicted by models respecting Bell locality.
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables. A particle cannot be both spin up and spin left. The spin observables can't be modeled on one probability space.
 
  • #490
rubi said:
I'm still reading atyy's papers, so I can't comment on them yet. I'm not referring to any loophole or experiment. I'm saying that Bell assumes that ##A_a(\lambda)## and ##B_b(\lambda)## are random variables on one probability space ##(\Lambda,\Sigma)## and thus joint probability distributions exist. QM certainly does not predict joint probability distributions for non-commuting observables.

You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.
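To make the formula concrete, here is a minimal numerical sketch, assuming the singlet state and projective spin measurements in the x-z plane standing in for the POVMs ##M_{a \mid x}## and ##N_{b \mid y}## (all names in the code are illustrative). The CHSH combination of the resulting correlations comes out at ##2\sqrt{2}##, above the bound of 2 obeyed by any Bell-local model.

```python
# Minimal sketch of P(ab|xy) = Tr[(M_{a|x} (x) N_{b|y}) rho] for the singlet
# state, with projective spin measurements in the x-z plane as the POVMs.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Singlet |psi> = (|01> - |10>)/sqrt(2), rho = |psi><psi|
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def proj(theta, outcome):
    # projector onto outcome +1/-1 of spin along cos(theta) z + sin(theta) x
    return (I2 + outcome * (np.cos(theta) * sz + np.sin(theta) * sx)) / 2

def P(a, b, tA, tB):
    # Born rule for the joint conditional probability P(ab | xy)
    return np.real(np.trace(np.kron(proj(tA, a), proj(tB, b)) @ rho))

def E(tA, tB):
    return sum(a * b * P(a, b, tA, tB) for a in (1, -1) for b in (1, -1))

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S), 2 * np.sqrt(2))  # ~2.828: the Tsirelson bound, above the local bound 2
```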
 
  • #491
wle said:
You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs ##\mathcal{M}_{x}## for different ##x## and ##\mathcal{N}_{y}## for different ##y## are incompatible.

Well, the assumption that Bell makes that I think rubi is objecting to is factorizability:

$$P(ab \mid xy) = \sum_\lambda P(\lambda)\, P(a\mid \lambda x)\, P(b\mid \lambda y)$$
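A quick sketch of what this factorized form implies, under the usual CHSH setup with two settings per side: for fixed ##\lambda## the correlation is a product of local expectation values in ##[-1,1]##, and the CHSH expression is bilinear in them, so its maximum is attained at deterministic ##\pm 1## assignments; averaging over ##\lambda## cannot exceed that maximum. The code below is illustrative only.

```python
# Sketch: the largest CHSH value attainable by the factorized form above.
# For fixed lambda the correlation is E_A(x) * E_B(y) with each factor in
# [-1, 1]; the CHSH expression is bilinear in these, so its maximum is
# reached at deterministic +/-1 assignments, and mixing over lambda cannot
# exceed that maximum.
import itertools

best = max(abs(A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1)
           for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4))
print(best)  # 2: the CHSH/Bell bound for any factorizable (Bell-local) model
```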
 
  • #492
wle said:
You've certainly misunderstood something here. The object of study in Bell's theorem is the joint probability ##P(ab \mid xy)## (according to some candidate theory) that Alice and Bob obtain results indexed by variables ##a## and ##b## given that they decide to do measurements indexed by variables ##x## and ##y##. This is not restrictive. In particular, the joint probability distribution should be given by the Born rule according to quantum mechanics, i.e., have the form $$P(ab \mid xy) = \mathrm{Tr} \bigl[ (M_{a \mid x} \otimes N_{b \mid y}) \rho_{\mathrm{AB}} \bigr]$$ where in general the variables ##x## and ##y## are associated with POVMs ##\mathcal{M}_{x} = \{M_{a \mid x}\}_{a}## and ##\mathcal{N}_{y} = \{N_{b \mid y}\}_{b}##. This is perfectly well defined even if the POVMs for different ##x## and ##y## are incompatible.
It is you who has misunderstood something. Alice's and Bob's observables commute and thus a joint distribution exists for them. However, Alice's observables ##A_a## don't commute among each other and neither do Bob's. It is completely uncontroversial that non-commuting observables can't be represented on a joint probability space. The probabilities won't add up to ##1## in general. (Also, using POVMs is completely unnecessary here.)

Edit: To put it differently: Bell assumes that ##A_a## and ##B_b## are random variables on a probability space ##(\Lambda,\Sigma,\mu)##. Then you can take random vectors like ##X=(A_1, A_2, B_1, B_2)## and get joint probability distributions ##P_X(A) =\mu(X^{-1}(A))##. The fact that the ##A_a## and ##B_b## are random variables on one space entails this already.
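A toy sketch of this point, with arbitrary made-up response functions: once ##A_1, A_2, B_1, B_2## are all functions of one ##\lambda## on a single probability space, the joint distribution of ##(A_1, A_2, B_1, B_2)## exists automatically as the pushforward of the measure on ##\Lambda##.

```python
# Toy sketch: four observables defined on ONE lambda space automatically have
# a joint distribution (the pushforward of the measure under the random vector).
# The response functions are arbitrary illustrative choices.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def A(setting, lam):   # Alice's outcome for setting 1 or 2
    return 1 if np.cos(lam + setting) > 0 else -1

def B(setting, lam):   # Bob's outcome for setting 1 or 2
    return 1 if np.sin(lam * setting) > 0 else -1

lams = rng.uniform(0, 2 * np.pi, 100_000)        # samples from the lambda measure
joint = Counter((A(1, l), A(2, l), B(1, l), B(2, l)) for l in lams)
P_X = {outcomes: n / len(lams) for outcomes, n in joint.items()}
print(len(P_X), round(sum(P_X.values()), 6))     # a normalized joint distribution
```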
 
  • #493
rubi said:
However, Alice's observables ##A_a## don't commute among each other and neither do Bob's. It is completely uncontroversial that non-commuting observables can't be represented on a joint probability space.

Bell's theorem does not depend on an assumption here that is different from quantum mechanics. Like I said, Bell's theorem only assumes a priori that it is meaningful to talk about the conditional probabilities ##P(ab \mid xy)##, according to some theory, of obtaining different results depending on different possible measurements. This in itself is not in conflict with quantum mechanics, like I said in my previous post. Bell does not assume, a priori, that there is a joint underlying probability distribution ##P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc)##. In the end, it turns out that for any model satisfying the locality constraint that Bell arrives at (which stevendaryl posted) you can always construct a joint probability distribution for all the measurement outcomes, but this is a corollary of Bell's definition, not an additional assumption.
 
  • #494
wle said:
Bell's theorem does not depend on an assumption here that is different from quantum mechanics. Like I said, Bell's theorem only assumes a priori that it is meaningful to talk about the conditional probabilities ##P(ab \mid xy)##, according to some theory, of obtaining different results depending on the choices of measurements. This is perfectly consistent with quantum mechanics, like I said in my previous post. Bell does not assume, a priori, that there is a joint underlying probability distribution ##P(a_{1}, a_{2}, \dotsc, b_{1}, b_{2}, \dotsc)##. In the end, it turns out that for any model satisfying the locality constraint that Bell arrives at (which stevendaryl posted) you can always construct a joint probability distribution for all the measurement outcomes, but this is a corollary of Bell's definition, not an additional assumption.
Repeating it doesn't make it true.
Bell clearly assumes that the variables ##A_a##, ##B_b## are random variables on one probability space (and thus joint probabilities exist). Only then can you write down Bell's factorization condition. Quantum mechanics clearly says that no joint probability distribution for all these variables exists. (QM is also not relevant for the proof of Bell's inequality.)
It feels like we're going in circles.

Do you deny that ##X_x:\Lambda\rightarrow\{-1,1\}## are random variables on a probability space ##(\Lambda,\Sigma,\rho(\lambda)\mathrm d\lambda)##? I don't see how you can seriously deny that and if you do, then I don't know what else I can say. I don't agree.
 
  • #495
rubi said:
Repeating it doesn't make it true.
Bell clearly assumes that the variables ##A_a##, ##B_b## are random variables on one probability space (and thus joint probabilities exist).

I've seen more than one version of the derivation of Bell's theorem even by Bell, and they don't simply assume the "random variables on one probability space" that you refer to. The closest I've seen to this is the functions ##A(\vec{a}, \lambda)## and ##B(\vec{b}, \lambda)## appearing in Bell's original 1964 paper and similar derivations, but even there: 1) these are deterministic mappings, not random variables, and 2) assuming locality, Bell inferred that these functions should exist, via the EPR argument, from the fact that quantum physics predicts perfectly correlated and anticorrelated results for certain measurement choices. He did not simply assume that they should exist a priori.
 
  • #496
wle said:
Repeating things you read on the internet doesn't make them true.
Your style of argumentation is really annoying. Can you please stop treating me like an idiot who just repeats things from the internet? I obtained my information from books and papers and I have worked hard to understand it. I'm not an amateur.

I've seen more than one version of the derivation of Bell's theorem even by Bell, and they don't simply assume the "random variables on one probability space" that you refer to. The closest I've seen to this is the functions ##A(\vec{a}, \lambda)## and ##B(\vec{b}, \lambda)## appearing in Bell's original 1964 paper and similar derivations, but even there: 1) these are deterministic mappings, not random variables
The maps ##\lambda\mapsto A(a,\lambda)## are clearly random variables. They map from one probability space to a measurable space. This makes them random variables by definition.

2) assuming locality, Bell inferred that these functions should exist, via the EPR argument, from the fact that quantum physics predicts perfectly correlated and anticorrelated results for certain measurement choices. He did not simply assume that they should exist a priori.
Locality is the assumption that ##A_a## does not depend on ##b## and vice versa. Locality does not entail that these variables must be random variables on the same probability space. This is an extra assumption.

I don't have any more time for this, since apparently, we don't even agree on the very basics of probability theory.
 
  • #497
rubi said:
wle said:
rubi said:
Repeating it doesn't make it true.
Repeating things you read on the internet doesn't make them true.

Your style of argumentation is really annoying. Can you please stop treating me like an idiot who just repeats things from the internet? I obtained my information from books and papers and I have worked hard to understand it. I'm not an amateur.

Has it occurred to you to maybe do me the same courtesy?

Locality does not entail that these variables must be random variables on the same probability space. This is an extra assumption.

No, like I said, it is inferred from the EPR argument and the fact that quantum physics predicts perfect correlations. And this doesn't even matter since, if you find Bell's original argument based on EPR too handwavy, Bell described much more careful formulations of his theorem in the 1970s and 1980s which clearly don't depend on this "same probability space" assumption you keep bringing up.

I don't have any more time for this, since apparently, we don't even agree on the very basics of probability theory.

No, apparently we disagree on how Bell's theorem is derived.
 
  • #498
wle said:
Has it occurred to you to maybe do me the same courtesy?
Well, you kept making one wrong statement after another, while accusing me of having a misunderstanding. Naturally, I became annoyed.

No, like I said, it is inferred from the EPR argument and the fact that quantum physics predicts perfect correlations.
You can't infer from the EPR argument that the hidden variables must be non-contextual. This is a non-trivial assumption.

And this doesn't even matter since, if you find Bell's original argument based on EPR too handwavy, Bell described much more careful formulations of his theorem in the 1970s and 1980s which clearly don't depend on this "same probability space" assumption you keep bringing up.
Sooner or later, you will have to introduce random variables if you want to calculate the correlations that appear in the inequality. These random variables are always defined on the same probability space (I keep bringing it up, because it is crucial). Nevertheless, there are of course other approaches and they need to be treated differently. Khrennikov treats them in his book, but I don't want to start another topic as long as we haven't settled on the case of Bell's inequality yet.
 
  • #499
rubi said:
You can't infer from the EPR argument that the hidden variables must be non-contextual. This is a non-trivial assumption.

Do you mean contextual, as what is normally meant when people discuss the Kochen-Specker theorem? https://en.wikipedia.org/wiki/Kochen–Specker_theorem
 
  • #500
atyy said:
Do you mean contextual, as what is normally meant when people discuss the Kochen-Specker theorem? https://en.wikipedia.org/wiki/Kochen–Specker_theorem
I use it like Khrennikov, who uses it as follows: A theory is non-contextual if all observables can be modeled as random variables on one probability space, independent of the experimental setup. Otherwise, it is contextual. Kochen and Specker define non-contextuality for theories defined in the Hilbert space framework. However, if such theories were non-contextual according to KS, then they would also be non-contextual according to Khrennikov, so Khrennikov's definition is in a sense more general, as it allows for theories that are not necessarily modeled in the Hilbert space framework. For example, if a theory exceeded the Tsirelson bound, it would have to be contextual, but it couldn't be modeled in a Hilbert space. (However, in general, theories that don't exceed the Tsirelson bound don't need to have a Hilbert space model either. At least I'm not aware of a proof.)
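To illustrate the last point, the standard example of correlations beyond the Tsirelson bound is the Popescu-Rohrlich box, sketched below: it has well-defined no-signalling marginals but a CHSH value of 4, so it admits neither a single joint probability space nor a Hilbert-space model. The code is purely illustrative.

```python
# Popescu-Rohrlich box: P(ab|xy) = 1/2 when a XOR b = x AND y (a, b, x, y in {0,1}).
def P(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

def E(x, y):
    # correlation with outcomes mapped 0 -> +1, 1 -> -1
    return sum((-1) ** (a + b) * P(a, b, x, y) for a in (0, 1) for b in (0, 1))

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)  # 4.0: above the local bound 2 and the Tsirelson bound 2*sqrt(2) ~ 2.83
```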
 
  • #501
rubi said:
I use it like Khrennikov, who uses it as follows: A theory is non-contextual if all observables can be modeled as random variables on one probability space, independent of the experimental setup. Otherwise, it is contextual. Kochen and Specker define non-contextuality for theories defined in the Hilbert space framework. However, if such theories were non-contextual according to KS, then they would also be non-contextual according to Khrennikov, so Khrennikov's definition is in a sense more general, as it allows for theories that are not necessarily modeled in the Hilbert space framework. For example, if a theory exceeded the Tsirelson bound, it would have to be contextual, but it couldn't be modeled in a Hilbert space. (However, in general, theories that don't exceed the Tsirelson bound don't need to have a Hilbert space model either. At least I'm not aware of a proof.)

OK, but it doesn't mean that contextuality can save locality. Bell's theorem shows that no local hidden variable theory, contextual or not, is consistent with quantum theory (the usual outs are retrocausation, superdeterminism, many-worlds - but contextuality is not one of them). Khrennikov's out is essentially to redefine "local hidden variable" so that it includes something weird like his suggestion of p-adic probabilities, which may be fine, but it's totally unclear how that would solve the measurement problem. It's a bit similar to consistent histories, whose claim to be local is not in contradiction to Bell's theorem, because it is not a realistic theory.
 
  • #502
atyy said:
OK, but it doesn't mean that contextuality can save locality. Bell's theorem shows that no local hidden variable theory, contextual or not, is consistent with quantum theory (the usual outs are retrocausation, superdeterminism, many-worlds - but contextuality is not one of them).
I don't agree here. One can clearly point to the place where the non-contextuality assumption is made in the proof of Bell's inequality. Bell's theorem rules out a large class of hidden variable theories. Maybe we shouldn't call contextual theories hidden variable theories (I'm not sure about that), but Bell's locality definition can only be applied to non-contextual theories. Locality has no clear probabilistic definition in the case of contextual theories.

Khrennikov's out is essentially to redefine "local hidden variable" so that it includes something weird like his suggestion of p-adic probabilities, which may be fine, but it's totally unclear how that would solve the measurement problem. It's a bit similar to consistent histories, whose claim to be local is not in contradiction to Bell's theorem, because it is not a realistic theory.
I don't find his p-adic probability theory appealing either and I'm also not advocating (contextual) hidden variables. However, he is right that there is no a priori reason why we should be able to model all observables on the same probability space, independent of the experimental setting. It is important to note that this doesn't change the class of theories that are ruled out by Bell's theorem, so we aren't talking about loopholes. I'm saying that the probabilistic definition of locality can't be applied in the contextual case, so we have no probabilistic definition of locality for contextual theories, such as QM.
 
  • #503
rubi said:
I don't agree here. One can clearly point to the place where the non-contextuality assumption is made in the proof of Bell's inequality.

But is this just true by definition? Bell assumes that probability distributions for two distant measurements must factor, once you've taken into account all the relevant information that is common to the two measurements. The definition of "non-contextual" amounts to the same thing, doesn't it? So "non-contextual" is just another word for Bell's factorizability condition. It's not that contextuality provides an explanation for violation of Bell's inequalities.
 
  • #504
stevendaryl said:
But is this just true by definition? Bell assumes that probability distributions for two distant measurements must factor, once you've taken into account all the relevant information that is common to the two measurements. The definition of "non-contextual" amounts to the same thing, doesn't it? So "non-contextual" is just another word for Bell's factorizability condition. It's not that contextuality provides an explanation for violation of Bell's inequalities.
If you state it in terms of probability, you just shift the introduction of non-contextuality a bit. You will have to introduce random variables in order to compute the correlations that appear in the inequality. You make the non-contextuality assumption the moment you say that these random variables live on the same probability space.
 
  • #505
rubi said:
I don't agree here. One can clearly point to the place where the non-contextuality assumption is made in the proof of Bell's inequality. Bell's theorem rules out a large class of hidden variable theories. Maybe we shouldn't call contextual theories hidden variable theories (I'm not sure about that), but Bell's locality definition can only be applied to non-contextual theories. Locality has no clear probabilistic definition in the case of contextual theories.

I guess what is puzzling to me about your statement is that one thinks of Bohmian mechanics as contextual and a nonlocal hidden variable theory, so it is consistent with both the requirements of the Kochen-Specker theorem and the Bell theorem.
 
  • #506
rubi said:
If you state it in terms of probability, you just shift the introduction of non-contextuality a bit. You will have to introduce random variables in order to compute the correlations that appear in the inequality. You make the non-contextuality assumption the moment you say that these random variables live on the same probability space.

Well, Bell's reasoning, or at least his reasoning as interpreted by me, goes like this:

You assume that when Alice/Bob makes a measurement, his/her result depends only on the setting of his/her detector and facts about the particle being measured. So at the time of the measurement, there is some kind of probability function for Alice ##P_A(\lambda, \vec{a}, \alpha)## that gives the probability of getting a result ##+1## given that the particle has property ##\lambda## and her detector setting is ##\vec{a}##, and ##\alpha## represents other facts about her detector above and beyond the setting. Similarly, there is a function ##P_B(\lambda, \vec{b}, \beta)## for Bob. The assumption of locality is captured by the fact that Alice's result can't depend on anything at Bob's location, and vice-versa.

At this point, where is there an assumption of non-contextuality? It seems to me that it is simply saying that Alice's result depends only on local information. Where does this business about whether random variables "live on the same probability space" come into play?
 
  • #507
atyy said:
I guess what is puzzling to me about your statement is that one thinks of Bohmian mechanics as contextual and a nonlocal hidden variable theory, so it is consistent with both the requirements of the Kochen-Specker theorem and the Bell theorem.

I'm having trouble reconciling the definition rubi is using for "contextuality" with the definition you are using. The way I understand "contextual" as applied to Bohmian mechanics is that a measurement of spin using something like a Stern-Gerlach device doesn't reveal a pre-existing property of the particle being measured. Instead, the result, spin-up or spin-down, arises from a collaboration between the particle and the measuring device. The two together determine the spin, not the particle itself. The problem with spin measurements being "emergent" in this sense is that it's hard (impossible?) to explain how Alice's results could be perfectly anti-correlated with Bob's if the results are emergent, unless there is some nonlocal interaction guaranteeing the perfect anti-correlation. That is no problem for Bohm, since it's explicitly nonlocal, but it is a problem for local hidden variables.

Rubi's definition of "contextual" is not about whether measurement results are revealing pre-existing properties of the particle being measured, but is simply a statement about probability distributions governing random variables. I don't see the connection.
 
  • #508
atyy said:
I guess what is puzzling to me about your statement is that one thinks of Bohmian mechanics as contextual and a nonlocal hidden variable theory, so it is consistent with both the requirements of the Kochen-Specker theorem and the Bell theorem.
Well, contextual theories are not necessarily local (assuming we had a definition of locality for contextual theories). However, you have encountered a nice subtlety here. The original EPR state happens to have a non-contextual model and you can't derive Bell's inequality for it. This part of QM can be defined on one probability space. However, this is not true for the Bohm state, so even in BM, spin needs to stay contextual. I'm not sure how the KS definition applies here, since we are not in the Hilbert space framework, but maybe my knowledge of BM is just too narrow.
 
  • #509
stevendaryl said:
Well, Bell's reasoning, or at least his reasoning as interpreted by me, goes like this:

You assume that when Alice/Bob makes a measurement, his/her result depends only on the setting of his/her detector and facts about the particle being measured. So at the time of the measurement, there is some kind of probability function for Alice ##P_A(\lambda, \vec{a}, \alpha)## that gives the probability of getting a result ##+1## given that the particle has property ##\lambda## and her detector setting is ##\vec{a}##, and ##\alpha## represents other facts about her detector above and beyond the setting. Similarly, there is a function ##P_B(\lambda, \vec{b}, \beta)## for Bob. The assumption of locality is captured by the fact that Alice's result can't depend on anything at Bob's location, and vice-versa.

At this point, where is there an assumption of non-contextuality? It seems to me that it is simply saying that Alice's result depends only on local information. Where does this business about whether random variables "live on the same probability space" come into play?
In order to derive Bell's inequality, you need to introduce the correlations ##C(a,b)## (because the inequality is formulated in terms of them). Correlations are always correlations between random variables. So you can't get around introducing random variables in order to arrive at Bell's inequality. And when you introduce them, you will have to decide which probability spaces they live on. A probability theory without random variables can't be related to experiment, just like a physical theory without observables has no connection to experiments.
 
  • #510
rubi said:
In order to derive Bell's inequality, you need to introduce the correlations ##C(a,b)## (because the inequality is formulated in terms of them). Correlations are always correlations between random variables. So you can't get around introducing random variables in order to arrive at Bell's inequality. And when you introduce them, you will have to decide which probability spaces they live on. A probability theory without random variables can't be related to experiment, just like a physical theory without observables has no connection to experiments.

But there is only one random variable, ##\lambda##, that is determined at the moment of pair creation. So Bell naturally only uses a single probability distribution, ##P(\lambda)##, the probability of producing hidden variable ##\lambda##. So I don't understand this business about multiple probability spaces.
 
