Problem with Bell's lambda - again

  • Thread starter: harrylin
  • Tags: Lambda
  • #1
harrylin
I have been bugged for a long time by EPR-Bell's hidden function "lambda" (Bell 1964: lambda is a hidden 'variable or a set, or even a set of functions'). Here's my latest problem with it. It was triggered by some recent discussions which brought up papers in which it is argued (and claimed to be proved) that Bell's inequality is equally applicable to non-local as to local influences; I suddenly remembered the following remarks by Bell:

BERTLMANN'S SOCKS AND THE NATURE OF REALITY

"It is important to note that to the limited degree to which determinism plays a role in the EPR argument, it is not assumed but inferred. What is held sacred is the principle of "local causality" - or "no action at a distance."

[..]

Let us suppose then that the correlations between A and B in the EPR experiment are likewise "locally explicable". That is to say we suppose that there are variables λ, which, if only we knew them, would allow decoupling of the fluctuations:

P(A,B|a,b,λ) = P1(A|a,λ) P2(B|b,λ) ... (11)

[..]

It is notable that in this argument nothing is said about the locality, or even localizability, of the variables λ. These variables could well include, for example, quantum mechanical state vectors, which have no particular localization in ordinary space time. It is assumed only that the outputs A and B, and the particular inputs a and b, are well localized."

Indeed, as Bell stated, λ itself is not necessarily local and could well include parameters of QM; and it may be similarly stochastic, as long as it predicts with certainty the outcomes for certain settings.

But then his theorem should be equally valid for NON-local "quantum" influences: the same argument can be given for a probabilistic "quantum function" λ that fully determines the QM predictions, such that P(A|a,λ) is no different from P(A|B,b,a,λ).

If not, why not??
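As a concrete illustration of what the factorization (11) does and does not allow, here is a quick numerical sketch (the response function, setting angles, and function names are my own illustrative choices, not from Bell's paper): a shared hidden angle λ with deterministic local outcomes saturates, but never exceeds, the CHSH bound of 2, while QM predicts 2√2 for the singlet.

```python
import math
import random

def chsh_local(trials=100_000, seed=0):
    """Estimate the CHSH correlator for a model obeying Bell's factorization
    (11): lambda is a shared random angle, and each wing's outcome depends
    only on its own setting and lambda."""
    rng = random.Random(seed)
    # Illustrative CHSH setting angles (a common choice, assumed here).
    a0, a1 = 0.0, math.pi / 2
    b0, b1 = math.pi / 4, -math.pi / 4

    def outcome(setting, lam):
        # Deterministic local response function: A(setting, lambda) = +/-1.
        return 1 if math.cos(setting - lam) >= 0 else -1

    def corr(a, b):
        total = 0
        for _ in range(trials):
            # rho(lambda): uniform, chosen independently of the settings.
            lam = rng.uniform(0.0, 2.0 * math.pi)
            total += outcome(a, lam) * outcome(b, lam)
        return total / trials

    return corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)

S_local = chsh_local()
S_quantum = 2 * math.sqrt(2)  # what QM predicts for the singlet state
print(S_local, S_quantum)     # local model: about 2 (up to sampling noise)
```

Any response function and any distribution ρ(λ) respecting (11) keeps |S| ≤ 2; the gap up to 2√2 is the content of the theorem.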
 
Last edited:
  • #2
[edit: B and b swapped] OK... now I came up with an answer to my own question. I was thinking about the function for B, the measurement result at one location. However, the function for b (the local temperature in his illustration, but the detector setting in experiments) is not included in the prediction, and is supposed to be independent due to the free will of the experimenter (which is a rather foggy issue). Thus factoring out b for a non-local model, by including it in λ, would imply super-determinism, which is not assumed.
 
Last edited:
  • #3
harrylin said:
as long as it predicts with certainty the outcomes for certain settings.

Actually that isn't necessary. You don't need to assume the probability distributions [itex]P_{1}(A \mid a, \lambda)[/itex] and [itex]P_{2}(B \mid b, \lambda)[/itex] are deterministic in order to derive Bell inequalities. The same Bell inequalities hold whether you assume that or not.
But then his theorem should be equally valid for NON-local "quantum" influences: the same argument can be given for a probabilistic "quantum function" λ that fully determines the QM predictions, such that P(A|a,λ) is no different from P(A|B,b,a,λ).

When you say that [itex]P(A \mid a, \lambda)[/itex] is not different from [itex]P(A \mid B, b, a, \lambda)[/itex], you are saying that, given [itex]\lambda[/itex], Bob's local measurement setting [itex]b[/itex] and local measurement outcome [itex]B[/itex] are redundant for making predictions about Alice's local outcome [itex]A[/itex]. Or in other words, [itex]b[/itex] and [itex]B[/itex] don't causally influence [itex]A[/itex]. This is how Bell defined locality.
 
  • #4
wle said:
Actually that isn't necessary. You don't need to assume the probability distributions [itex]P_{1}(A \mid a, \lambda)[/itex] and [itex]P_{2}(B \mid b, \lambda)[/itex] are deterministic in order to derive Bell inequalities. The same Bell inequalities hold whether you assume that or not.
Right. I meant the outcome for certain setting pairs - such as the assumed perfect anti-correlation.

When you say that [itex]P(A \mid a, \lambda)[/itex] is not different from [itex]P(A \mid B, b, a, \lambda)[/itex], you are saying that, given [itex]\lambda[/itex], Bob's local measurement setting [itex]b[/itex] and local measurement outcome [itex]B[/itex] are redundant for making predictions about Alice's local outcome [itex]A[/itex]. Or in other words, [itex]b[/itex] and [itex]B[/itex] don't causally influence [itex]A[/itex]. [..]
Thanks for the clarification! As some have stressed, the requirement is subtly stronger than that: b and B should not influence the probability PA, given the specification function λ. And of course, what I overlooked is that b is assumed not to be included in λ, even with a non-local λ for a non-local model. I suppose that you had not seen my further reflection in post #2.
 
Last edited:
  • #5
harrylin said:
Right. I meant the outcome for certain setting pairs - such as the assumed perfect anti-correlation.

Huh? You never need to assume perfect anti-correlation to derive most Bell inequalities. It's a special case. And even where you might be in that special case, it's something you could confirm directly by looking at your record of measurement outcomes, so even there it's not an assumption.
Thanks for the clarification! As some have stressed, the requirement is subtly stronger than that: b and B should not influence the probability PA, given the specification function λ.

Isn't that the same thing? If [itex]b[/itex] and [itex]B[/itex] can causally influence [itex]A[/itex], that means that they can influence the probability that you will get a particular outcome for [itex]A[/itex].
And of course, what I overlooked is that b is assumed not to be included in λ, even with a non-local λ for a non-local model. I suppose that you had not seen my further reflection in post #2.

More generally, derivations of Bell inequalities assume that the variable [itex]\lambda[/itex] is drawn from a probability distribution [itex]\rho(\lambda)[/itex] and is uncorrelated with the choice of settings [itex]a[/itex] and [itex]b[/itex]. This assumption could be unjustified in the case of so-called "superdeterminism" (some variable [itex]\lambda'[/itex] further back in time causally influences all of [itex]a[/itex], [itex]b[/itex], and [itex]\lambda[/itex] in such a way that they end up correlated) or retrocausality ([itex]a[/itex] and [itex]b[/itex] causally influence [itex]\lambda[/itex]).
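The role of that independence assumption can be made vivid with a deliberately contrived sketch (entirely my own construction, not from any of the papers discussed): if λ is allowed to carry a copy of the settings, a "local" model trivially reaches the algebraic maximum S = 4.

```python
import random

def chsh_superdeterministic(trials=50_000, seed=1):
    """A contrived 'superdeterministic' model: lambda is correlated with the
    settings (here it simply contains them), so the measurement-independence
    assumption P(lambda, a, b) = rho(lambda) p1(a) p2(b) fails."""
    rng = random.Random(seed)
    sums = {(x, y): [0, 0] for x in (0, 1) for y in (0, 1)}  # [sum A*B, count]
    for _ in range(trials):
        x, y = rng.randint(0, 1), rng.randint(0, 1)  # the settings...
        lam = (x, y)                                 # ...leak into lambda
        # 'Local' responses that consult lambda's copy of the settings:
        A = 1
        B = -1 if lam == (1, 1) else 1  # flip the sign only for (a1, b1)
        rec = sums[(x, y)]
        rec[0] += A * B
        rec[1] += 1
    E = {k: s / n for k, (s, n) in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

S_sd = chsh_superdeterministic()
print(S_sd)  # reaches the algebraic maximum of 4
```

This is why superdeterminism and retrocausality are explicit loopholes: once λ may depend on a and b, no Bell inequality can be derived at all.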
 
  • #6
wle said:
Huh? You never need to assume perfect anti-correlation to derive most Bell inequalities. It's a special case. [..]
Oops, indeed, he used the certainty of some outcomes for certain settings the first time, but not this time.
And even where you might be in that special case, it's something you could confirm directly by looking at your record of measurement outcomes, so even there it's not an assumption.
Ehm no, his derivation concerns the theory of QM and not measurements.
Isn't that the same thing? If [itex]b[/itex] and [itex]B[/itex] can causally influence [itex]A[/itex], that means that they can influence the probability that you will get a particular outcome for [itex]A[/itex].
In general it's only true one way, as in [statement A] => [statement B]. A physical influence is not in general a necessary condition for influencing a probability; Bell even refers to that fact in his introduction. However, it looks to me that Bell implied that, by means of the added λ, a physical influence is the only possibility that is left.
Jaynes jumped on Bell's argument because it looks as if Bell mistakenly thought that those are the same thing; but maybe Jaynes misunderstood Bell's reasoning. There was a long discussion about that on this forum.
[..]
More generally, derivations of Bell inequalities assume that the variable [itex]\lambda[/itex] is drawn from a probability distribution [itex]\rho(\lambda)[/itex] and is uncorrelated with the choice of settings [itex]a[/itex] and [itex]b[/itex]. [..].
Yes. And there has been quite some debate on the required probability distribution: according to Bell it may be anything ("whatever"), but not everyone agrees with that.
 
Last edited:
  • #7
harrylin said:
Ehm no, his derivation concerns the theory of QM and not measurements.

No, Bell's theorem is an operational (i.e. experimentally testable) definition of locality. QM is simply an example of a theory that makes predictions that can be shown to be incompatible with Bell locality. If you perform a Bell experiment and you measure a Bell inequality violation, then you have evidence of nonlocality whether or not you accept QM.
In general it's only true one way, as in [statement A] => [statement B]. A physical influence is not in general a necessary condition for influencing a probability; Bell even refers to that fact in his introduction. However, it looks to me that Bell implied that by means of the added λ, that is the only possibility that is left.

In other words, a physical influence is a necessary condition for influencing the probability [itex]P_{1}(A \mid a, \lambda)[/itex] given a complete specification of the initial conditions [itex]\lambda[/itex], which is the scenario we're talking about here.
Yes. And there has been quite some debate on the required probability distribution: according to Bell it may be anything ("whatever"), but not everyone agrees with that.

Huh? Bell's theorem certainly holds whatever probability distribution [itex]\rho(\lambda)[/itex] you use. The only requirement is that [itex]\lambda[/itex] is uncorrelated with the settings [itex]a[/itex] and [itex]b[/itex], i.e. if you attribute a joint probability distribution [itex]P[/itex] to [itex]\lambda[/itex], [itex]a[/itex], and [itex]b[/itex], then [itex]P(\lambda, a, b) = \rho(\lambda) p_{1}(a) p_{2}(b)[/itex].
 
  • #8
wle said:
No, Bell's theorem is an operational (i.e. experimentally testable) definition of locality. [..]
The way he formulates it in that paper does make it sound like a "definition"; that would be a smart way out (abandoning the professed purpose of adhering to Einstein's definition), but it is based on an unproven assumption. A more elaborate explanation is in the thread that I mentioned before, maybe most useful from here:

https://www.physicsforums.com/showthread.php?p=3795278

I agree with Peterdonis. Although Bell formulates it in that paper as a definition, it is in fact an assumption about a property of locality.
In other words, a physical influence is a necessary condition for influencing the probability [itex]P_{1}(A \mid a, \lambda)[/itex] given a complete specification of the initial conditions [itex]\lambda[/itex], which is the scenario we're talking about here.
Huh?? No, it just sounds very plausible. A claim or assumption about something is not a fact about that something - it needs to be formally proved in order to be a fact.
Huh? Bell's theorem certainly holds whatever probability distribution [itex]\rho(\lambda)[/itex] you use. The only requirement is that [itex]\lambda[/itex] is uncorrelated with the settings [itex]a[/itex] and [itex]b[/itex], i.e. if you attribute a joint probability distribution [itex]P[/itex] to [itex]\lambda[/itex], [itex]a[/itex], and [itex]b[/itex], then [itex]P(\lambda, a, b) = \rho(\lambda) p_{1}(a) p_{2}(b)[/itex].
According to a number of papers that I have read, his derivation is not valid for all possible probability distributions; and hopefully someone will be so helpful to elaborate this matter to us in this thread. It happens to be just one of the things that I would like to understand. If I'm not mistaken, some of those papers were linked in the other thread in which you have been participating. [edit: not sure now in which papers; if needed I will try to find at least one of them back]
 
Last edited:
  • #9
harrylin said:
The way he formulates it in that paper does make it sound like a "definition"

I don't see why you are putting "definition" in quotes. It is a definition. One that you can in principle agree or disagree with. This is why I sometimes refer specifically to "Bell locality", to be clear I'm not talking about some other possible definition or concept of locality.

For example, my understanding is that researchers with more of a quantum field theory background use "locality" to refer to what researchers in the quantum information community would more commonly call "no signalling", or in reference to a particular theory's structure (e.g. a quantum field theory with only local coupling terms in its Lagrangian). These are not the same thing as Bell locality. Quantum physics is non-signalling but it is not Bell local, for instance.
that would be a smart way out (abandoning the pretended purpose of adhering to Einstein's definition)

Hardly. Bell locality is a definition of locality with a motivating argument behind it. It was originally specifically intended as a formalisation of the sort of locality that was alluded to (but not formally defined) in the EPR article for instance. Most people seem to think Bell's definition captures that idea very well -- that's why Bell's theorem is so widely accepted.
Huh?? No, it just sounds very plausible. A claim or assumption about something is not a fact about that something - it needs to be formally proved in order to be a fact.

I don't see a meaningful distinction. If the probability of getting an outcome A in one place changes depending on something happening a great distance away, then for all intents and purposes that's what I'd call an "influence".
According to a number of papers that I have read, his derivation is not valid for all possible probability distributions; and hopefully someone will be so helpful to elaborate this matter to us in this thread. It happens to be just one of the things that I would like to understand. If I'm not mistaken, some of those papers were linked in the other thread in which you have been participating. [edit: not sure now in which papers; if needed I will try to find at least one of them back]

Well it's clear to me that Bell's theorem holds just fine regardless of the [itex]\rho(\lambda)[/itex] you put in.

The only subtlety I can think of (that I have seen referred to once or twice while I've been on this forum) is that actual Bell experiments estimate the Bell correlator from the results of measurements made on a very large number of entangled particle pairs. This leaves open the possibility that, for instance, the variable [itex]\lambda[/itex] might vary from one round of the experiment to the next and might even depend explicitly on previous measurement settings and outcomes. But even that can be accounted for.
 
  • #10
wle said:
I don't see why you are putting "definition" in quotes. [..] I sometimes refer specifically to "Bell locality" [..] Bell locality is a definition of locality with a motivating argument behind it. [..]
I did the same in the other thread, to emphasize that Bell's opinion about what locality implies is not necessarily agreed upon by everyone. In particular, not all specialists in statistics agreed with his motivation.

I don't see a meaningful distinction. If the probability of getting an outcome A in one place changes depending on something happening a great distance away, then for all intents and purposes that's what I'd call an "influence".
Once more, assuming that you mean physical influence: that is fundamentally erroneous - even according to Bell.
This was also discussed in the section of the thread that I linked for you...

Well it's clear to me that Bell's theorem holds just fine regardless of the [itex]\rho(\lambda)[/itex] you put in. [..]
I'll try to find those articles again, then. One is, I think, the De Raedt paper that has a thread on it (Boole-Bell); another is, I think, a paper by Accardi that was linked in the "hope for locality" thread.
 
  • #11
wle said:
For example, my understanding is that researchers with more of a quantum field theory background use "locality" to refer to what researchers in the quantum information community would more commonly call "no signalling", or in reference to a particular theory's structure (e.g. a quantum field theory with only local coupling terms in its Lagrangian). These are not the same thing as Bell locality. Quantum physics is non signalling but it is not Bell local, for instance.

a gross oversimplification... and a rough and imprecise statement...
Contextuality and Non-Locality
https://www.physicsforums.com/showthread.php?t=619905

https://www.physicsforums.com/showpost.php?p=3993579&postcount=5
audioloop said:
mutually agree.
contextuality is broader, subsumes nonlocality.
same thing in quantum information.
(nonlocality is a generic feature of non-signaling).

Existence of two spin-1/2 states that are non-local yet non-contextual:
http://arxiv.org/pdf/1207.1952v1.pdf
 
  • #12
harrylin said:
The way he formulates it in that paper does make it sound like a "definition"; that would be a smart way out (abandoning the pretended purpose of adhering to Einstein's definition), but it is based on an unproven assumption. More elaborate explanation in the thread that I mentioned before, maybe most useful from here:

https://www.physicsforums.com/showthread.php?p=3795278

I followed that link to the article written by Jaynes, here:
http://bayes.wustl.edu/etj/articles/cmystery.pdf

I feel that Jaynes is misunderstanding Bell's reasoning. Bell certainly was not saying that the failure of the joint probability distribution for two events to factor means that there is a causal influence between the two events. He certainly was not saying that conditional probability implies causal influence. So Jaynes counterargument (the urns example) is attacking a straw man. He's not addressing Bell, at all.

The way that I understand Bell is that he is talking about the propagation of information. If Bob at event [itex]B[/itex] gets information about the results of an experiment performed by Alice at event [itex]A[/itex], then that information must have propagated from the common causal past of events [itex]A[/itex] and [itex]B[/itex]. Jaynes' urn example is not a counter-example.

I'm not sure if it is possible to prove that Bell's notion captures the intuitive notion of "locality". But it seems to me that that's beside the point. The real issue is whether the predictions of quantum mechanics are compatible with a particular classical notion of locally realistic theories. Roughly speaking, that the universe consists of fields and particles that are only influenced by local conditions.
 
  • #13
stevendaryl said:
I followed that link to the article written by Jaynes, here:
http://bayes.wustl.edu/etj/articles/cmystery.pdf

I feel that Jaynes is misunderstanding Bell's reasoning. Bell certainly was not saying that the failure of the joint probability distribution for two events to factor means that there is a causal influence between the two events. He certainly was not saying that conditional probability implies causal influence. So Jaynes counterargument (the urns example) is attacking a straw man. He's not addressing Bell, at all.

The way that I understand Bell is that he is talking about the propagation of information. If Bob at event [itex]B[/itex] gets information about the results of an experiment performed by Alice at event [itex]A[/itex], then that information must have propagated from the common causal past of events [itex]A[/itex] and [itex]B[/itex]. Jaynes' urn example is not a counter-example.
Your understanding of what Bell meant is somewhat different from mine; but we do agree that Jaynes seems to misunderstand Bell, who even gave an example somewhat similar to Jaynes' urn example in the introduction of the Bertlmann's socks paper.

I'm not sure if it is possible to prove that Bell's notion captures the intuitive notion of "locality". But it seems to me that that's beside the point. The real issue is whether the predictions of quantum mechanics are compatible with a particular classical notion of locally realistic theories. Roughly speaking, that the universe consists of fields and particles that are only influenced by local conditions.
That's right; however the usefulness of Bell's theorem depends on that starting assumption. If it's not proven that it perfectly catches the problem that he wanted to describe, then Bell's theorem falls flat; and we're back to the start (with however improved understanding of the issues).
 
  • #14
harrylin said:
Your understanding of what Bell meant is somewhat different from mine; but we do agree that Jaynes seems to misunderstand Bell, who even gave an example somewhat similar to Jaynes' urn example in the introduction of the Bertlmann's socks paper.


That's right; however the usefulness of Bell's theorem depends on that starting assumption. If it's not proven that it perfectly catches the problem that he wanted to describe, then Bell's theorem falls flat; and we're back to the start (with however improved understanding of the issues).

Einstein never explicitly stated what sort of realistic theories he would have accepted, but assuming that they are like the ones that physicists looked for prior to the discovery of quantum mechanics, they have these features:

  1. At each point [itex]P[/itex] in spacetime, there are a number of variables [itex]V_j(P)[/itex] characterizing local conditions at that point. These variables might include: the values of fields, the presence or absence of various particles, physical properties of the particles (such as charge, momentum, angular momentum, etc.)
  2. These variables evolve in a local way (e.g., by local differential equations, where the evolution of [itex]V_j(P)[/itex] depends only on [itex]V_j(P')[/itex] for nearby values of [itex]P'[/itex])
  3. A measurement is just a macroscopic, coarse-grained, record of the variables [itex]V_j(P)[/itex].

Bell's factorizability condition would hold for any theory of this sort.
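A small numerical sketch of how a theory of this sort, even a stochastic one, satisfies Bell's condition (the two-state λ and the cosine response probabilities are my own toy choices): without λ, Bob's outcome B is informative about Alice's A, but once λ is fixed, B becomes redundant.

```python
import math
import random
from collections import Counter

def simulate(trials=200_000, seed=2):
    """Stochastic but local toy model: given lambda, each wing's outcome is
    drawn independently from its own local probability."""
    rng = random.Random(seed)
    a = b = math.pi / 3                       # fixed settings (arbitrary choice)
    counts = Counter()                        # keys: (lambda, A, B)
    for _ in range(trials):
        lam = rng.choice([0.0, math.pi])      # two equally likely hidden states
        pA = 0.5 * (1 + math.cos(a - lam))    # local response probabilities
        pB = 0.5 * (1 + math.cos(b - lam))
        A = 1 if rng.random() < pA else -1
        B = 1 if rng.random() < pB else -1
        counts[(lam, A, B)] += 1
    return counts

counts = simulate()

def prob(cond, event):
    """Empirical P(event | cond) over the recorded (lambda, A, B) triples."""
    sel = sum(c for k, c in counts.items() if cond(k))
    hit = sum(c for k, c in counts.items() if cond(k) and event(k))
    return hit / sel

# Without lambda, conditioning on B shifts P(A = +1):
pA = prob(lambda k: True, lambda k: k[1] == 1)
pA_given_B = prob(lambda k: k[2] == 1, lambda k: k[1] == 1)
# With lambda fixed, conditioning on B changes nothing (Bell locality):
pA_lam = prob(lambda k: k[0] == 0.0, lambda k: k[1] == 1)
pA_lam_B = prob(lambda k: k[0] == 0.0 and k[2] == 1, lambda k: k[1] == 1)
print(pA, pA_given_B)    # about 0.50 vs 0.625: B is informative
print(pA_lam, pA_lam_B)  # both about 0.75: B is redundant given lambda
```

This is the same distinction as in Jaynes' urn example: the marginal correlation comes purely from shared ignorance of λ, not from any influence between the wings.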
 
  • #15
stevendaryl said:
Einstein never explicitly stated what sort of realistic theories he would have accepted, but assuming that they are like the ones that physicists looked for prior to the discovery of quantum mechanics, they have these features:

  1. At each point [itex]P[/itex] in spacetime, there are a number of variables [itex]V_j(P)[/itex] characterizing local conditions at that point. These variables might include: the values of fields, the presence or absence of various particles, physical properties of the particles (such as charge, momentum, angular momentum, etc.)
  2. These variables evolve in a local way (e.g., by local differential equations, where the evolution of [itex]V_j(P)[/itex] depends only on [itex]V_j(P')[/itex] for nearby values of [itex]P'[/itex])
  3. A measurement is just a macroscopic, coarse-grained, record of the variables [itex]V_j(P)[/itex].
Bell's factorizability condition would hold for any theory of this sort.
It appears to me that point 3 may be in direct conflict with QM... I doubt that Einstein would have insisted on such a pre-QM requirement. The discovery of QM effects required non-classical features, somewhat different from before QM.
 
  • #16
harrylin said:
It appears to me that point 3 may be in direct conflict with QM... I doubt that Einstein would have insisted on such a pre-QM requirement. The discovery of QM effects required non-classical features, somewhat different from before QM.

You doubt it, but to me that's exactly what Einstein believed. That's what he's talking about when he talks about hidden variables---they are revealing pre-existing information about a hidden state of the particle.
 
  • #17
harrylin said:
It appears to me that point 3 may be in direct conflict with QM... I doubt that Einstein would have insisted on such a pre-QM requirement. The discovery of QM effects required non-classical features, somewhat different from before QM.

I don't think point 3 is in conflict with QM. When you measure that an electron has spin-up, the state of the measuring device + recording device + observer + particle + whatever is DIFFERENT than if you had measured spin-down. Measurements change the state of the measuring device/observer. Knowing the microscopic state of the measuring device/observer is sufficient (in principle, anyway) to know what the measurement result was.
 
  • #18
stevendaryl said:
You doubt it, but to me that's exactly what Einstein believed. That's what he's talking about when he talks about hidden variables---they are revealing pre-existing information about a hidden state of the particle.
That doesn't exclude a very different state upon interaction with a detector than without detection...
stevendaryl said:
I don't think point 3 is in conflict with QM. When you measure that an electron has spin-up, the state of the measuring device + recording device + observer + particle + whatever is DIFFERENT than if you had measured spin-down. Measurements change the state of the measuring device/observer. Knowing the microscopic state of the measuring device/observer is sufficient (in principle, anyway) to know what the measurement result was.
That was exactly my point: the state upon measurement doesn't have to be pre-existing according to common "local reality" concepts.
 
  • #19
harrylin said:
That doesn't exclude a very different state upon interaction with a detector than without detection...

I'm not sure that there's a real disagreement here. After one has made a measurement, that measurement reflects local information about the microscopic state. It's not clear what it shows about the local microscopic state before the measurement was made (although if you have a complete enough theory of how local microscopic states evolve, then you should be able to get at least a probability distribution on what the microscopic state could have been prior to the measurement).

That was exactly my point: the state upon measurement doesn't have to be pre-existing according to common "local reality" concepts.

I understand the distinction you're talking about, but I don't think it makes any difference to the main point, which is whether all such "local" theories obey Bell's factorizability assumption. I think they do.
 
  • #20
stevendaryl said:
I don't think point 3 is in conflict with QM.

I don't see how point 3 is necessary. The "outcomes" in a Bell experiment are simply macroscopic events and can be anything: whether a particle went up or down in a Stern-Gerlach apparatus, whether photon detector 1 or photon detector 2 clicked, or whether a red light or a green light turned on, for instance. You neither need to know nor care that these events are being generated by "measurements" on "photons" or "spin states" or whatever. If you have a local theory about photons, for instance, then it is the responsibility of that theory to explain how and why a photon being in a particular initial state ends up influencing the probability of a particular detector clicking some time later.
 
  • #21
wle said:
[..] Huh? Bell's theorem certainly holds whatever probability distribution [itex]\rho(\lambda)[/itex] you use. The only requirement is that [itex]\lambda[/itex] is uncorrelated with the settings [itex]a[/itex] and [itex]b[/itex], i.e. if you attribute a joint probability distribution [itex]P[/itex] to [itex]\lambda[/itex], [itex]a[/itex], and [itex]b[/itex], then [itex]P(\lambda, a, b) = \rho(\lambda) p_{1}(a) p_{2}(b)[/itex].
Regretfully, I understand neither Bell's argument about the validity of his assumption of a joint probability distribution P, nor (or more likely: and therefore also not) the objections and/or discussion of it by other authors. I have now found a few of those papers again:

"Violation of this inequality can be explained in a natural
manner by the fact that the concept of the joint probability
P (a, b, a',b' ) is invalid in accordance with of the principle of
complementarity."
- http://iopscience.iop.org/1063-7869/39/1/A06/

"Bell inequality [..] defined on the same probability space
[..]
assumptions: [..]
(ii) that the random variables are defined on the same probability space
[..] Let us consider assumption (ii). This is equivalent to the claim that the
three probability measures [..], representing the distributions of the pairs
(S(1)a, S(2)b),(S(1)c, S(2)b), (S(1)a, S(2)c) respectively,
can be obtained by restriction from a single probability measure [..].
This is indeed a strong assumption because, due to the incompatibility of
the spin variables along non parallel directions, the three correlations [..]
can only be estimated in different, in fact mutually incompatible, series
of experiments."
- http://arxiv.org/pdf/quant-ph/0007005v2.pdf

"In Kolmogorov’s final form of probability theory one deals
in a logical fashion with the more general elementary events
as well as random variables (that can assume more than two
values) and constructs a sample space and probability space.
The question of the truth content of a proposition is thus re-
duced to the question of the truth of the axioms of the proba-
bility framework that is used.
[..]
violation of such inequalities implies that functions corresponding
to S1, S2, S3 can not be defined on one probability space i.e.
are not Kolmogorov random variables."
- http://arxiv.org/abs/0901.2546

Intuitively it seems to me that all such papers point the finger to the same issue.
Can anyone explain this in plain English? In other words, what exactly does Bell's assumption of a single existing probability space of the hidden variables imply, and what objections can be brought in against that assumption?
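As a minimal worked example of the kind of constraint those papers are pointing at (my own construction, in the spirit of the Boole argument): for any three ±1 variables defined on ONE probability space, the pairwise correlations must satisfy E12 + E13 + E23 ≥ -1, simply because the corresponding product sum can never fall below -1 on any elementary outcome.

```python
from itertools import product

# Enumerate every elementary outcome of three +/-1 variables on one space.
values = [s1 * s2 + s1 * s3 + s2 * s3
          for s1, s2, s3 in product((-1, 1), repeat=3)]
print(sorted(set(values)))  # only -1 and 3 ever occur

# Since the quantity is >= -1 pointwise, E12 + E13 + E23 >= -1 holds for
# ANY joint distribution. Three separately measured correlations of, say,
# -0.9 each would sum to -2.7 < -1, so no single Kolmogorov probability
# space could host all three pairs at once.
```

That is exactly assumption (ii) above: the three correlations are estimated in mutually incompatible experiments, so nothing forces them to fit on one space.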
 
  • #22
harrylin said:
Regretfully I understand neither Bell's argument about the validity of his assumption of a joint probability distribution P, nor (or more likely: and therefore also not) the objections and/or discussion of it by other authors.

I don't find Bell's reasoning mysterious, at all.

Here's a characterization of a locally realistic theory.

Pick a particular moment in time (we have to pick a rest frame for this to make sense, so let's assume that we've chosen one, and all times are relative to this frame). Divide up the universe into little regions (the exact dimensions aren't important, but for definiteness, assume that we divide it up into little cubes of with edges of length 1 centimeter). We assume that the complete state of the universe is determined by what's happening in each tiny region, together with how the regions fit together. The evolution (whether deterministic or probabilistic) of the local states only depend on the local states of nearby regions.

Note: entangled states in quantum mechanics are specifically NOT states of this form. However, the same thing is true in classical probabilistic theories, but in the latter case, it is assumed that the "nonlocalizability" of state information is due to our ignorance of the true local state.

For example, if I put a $1 bill and a $10 bill into identical envelopes and give one to Alice, and one to Bob, then we could describe Alice's state as "With probability 1/2, she has $1, and with probability 1/2, she has $10". We could describe Bob's state as "With probability 1/2, he has $1, and with probability 1/2, he has $10". But knowing these two local states does not completely describe the global state, because the two local descriptions don't rule out the possibility that both people have $1, or that both have $10. So the total state is nonlocal, or nonfactorable. But we assume in this case that this is due to ignorance. If we knew the contents of the envelopes, then we could say definitely "Alice has $1" or "Alice has $10", and we could say definitely "Bob has $1" or "Bob has $10". Then the complete state would be just the product of Alice's state and Bob's state.

The intuition behind "local realistic hidden variables theories", or at least as I understand it, is that maybe quantum entanglement, which is a nonlocal type of state, could similarly factor into a product of local states, if we only knew the details of the local states.
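The envelope example can be sketched directly (the labels and function names are mine):

```python
import random

def envelopes(trials=100_000, seed=3):
    """The envelope example: the hidden variable is which envelope got the
    $10 bill; each person's content is fully determined by it."""
    rng = random.Random(seed)
    records = []
    for _ in range(trials):
        lam = rng.choice(["alice_has_10", "bob_has_10"])  # the hidden variable
        alice = 10 if lam == "alice_has_10" else 1
        bob = 1 if lam == "alice_has_10" else 10
        records.append((lam, alice, bob))
    return records

records = envelopes()
# Marginals look random, yet the joint is perfectly anti-correlated:
p_alice_10 = sum(1 for _, a, _ in records if a == 10) / len(records)
anti = all(a + b == 11 for _, a, b in records)
# But given lambda the state factorizes: each side's content is certain.
certain = all(a == 10 for lam, a, _ in records if lam == "alice_has_10")
print(p_alice_10, anti, certain)
```

The "nonlocal" correlation here is pure ignorance of λ, which is exactly the factorization Bell's condition demands of a local model.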
 
Last edited:
  • #23
stevendaryl said:
  1. At each point [itex]P[/itex] in spacetime, there are a number of variables [itex]V_j(P)[/itex] characterizing local conditions at that point. These variables might include: the values of fields, the presence or absence of various particles, physical properties of the particles (such as charge, momentum, angular momentum, etc.)
  2. These variables evolve in a local way (e.g., by local differential equations, where the evolution of [itex]V_j(P)[/itex] depends only on [itex]V_j(P')[/itex] for nearby values of [itex]P'[/itex])
  3. A measurement is just a macroscopic, coarse-grained record of the variables [itex]V_j(P)[/itex].
From Bell 1964: 'Any such [hidden variable] theory which reproduces exactly the quantum mechanical predictions must have a grossly non local structure.'
More recent formulations of this locality condition make no assumption of pre-determined outcomes for the EPRB correlations.
So [itex]A_1(a_1, \lambda) = \pm 1[/itex] and [itex]A_2(a_2, \lambda) = \pm 1[/itex], and these outcomes can evolve during the measurement process, where [itex]P_{a_1 a_2}(A_1 A_2|\lambda) = P_{a_1}(A_1|\lambda) \, P_{a_2}(A_2|\lambda)[/itex] with spacelike separation. The distribution of λ does not depend on [itex](a_1, a_2)[/itex] - no joint probability distribution. The only dependence relations come from the λ data fixed by the entanglement at the source; after separation there is no influence between the particles.
The problem with λ and Bell 1964 is that when the inequalities are violated, locality is the first assumption to go. Is it possible that a complete understanding of λ, including the physical quantities in the above quote, would let the details of the local state be understood - before measurement, during the interaction with the detector, and at the outcome? Understood not only as pre-existing information, but also as a continuous local influence by λ, that is, from the source and from the evolution of the particle state during detection. If so, is a local (complete) hidden variable theory that agrees with the QM predictions possible?
 
Last edited:
  • #24
morrobay said:
The problem with λ and Bell 1964 is that when the inequalities are violated, locality is the first assumption to go. Is it possible that a complete understanding of λ, including the physical quantities in the above quote, would let the details of the local state be understood - before measurement, during the interaction with the detector, and at the outcome? Understood not only as pre-existing information, but also as a continuous local influence by λ, that is, from the source and from the evolution of the particle state during detection. If so, is a local (complete) hidden variable theory that agrees with the QM predictions possible?

You are suggesting that the outcomes of experiments aren't determined ahead of time by the hidden variable λ, but are instead just influenced by it in a continuous way. I don't think that would change anything - such a model would still satisfy Bell's inequality, and so it could not explain the quantum results.

You could imagine a more sophisticated local variables description of the twin-pair EPR experiment that goes like this:

  1. A twin-pair is created in a state labeled by a hidden variable [itex]\lambda[/itex]. There is a probability distribution [itex]F_1(\lambda)[/itex] governing the possible values of [itex]\lambda[/itex]. The two particles have states [itex]\sigma_A(\lambda), \sigma_B(\lambda)[/itex].
  2. One particle travels to Alice's detector. Along the way, its state changes from [itex]\sigma_A(\lambda)[/itex] to [itex]\sigma_A'[/itex], according to another probability distribution [itex]F_2(\sigma_A', \sigma_A(\lambda))[/itex] that depends on the initial state of the particle, [itex]\sigma_A(\lambda)[/itex], and the final state, [itex]\sigma_A'[/itex], but specifically does not depend on what Alice or Bob do at their detectors.
  3. Meanwhile, Alice picks a detector setting [itex]\alpha[/itex], and her detector makes a possibly nondeterministic transition to a state [itex]\sigma_{AD}[/itex] according to a probability distribution [itex]F_3(\sigma_{AD}, \alpha)[/itex] that depends on the final state of the detector and the setting chosen by Alice (but not on Bob's settings, and not on the state of the particles).
  4. When the particle is finally close enough to interact with Alice's detector, the measurement produces some outcome [itex]O_A[/itex] that depends on the state of Alice's detector and on the state of the particle at the time of interaction, according to yet another probability distribution [itex]F_4(O_A, \sigma_{AD}, \sigma_A')[/itex].
  5. So the conditional probability of Alice getting outcome [itex]O_A[/itex] given that she picks setting [itex]\alpha[/itex] is:
    [itex]P(O_A|\alpha) = \sum_\lambda \sum_{\sigma_A'} \sum_{\sigma_{AD}} F_1(\lambda) F_2(\sigma_A', \sigma_A(\lambda)) F_3(\sigma_{AD}, \alpha) F_4(O_A, \sigma_{AD}, \sigma_A')[/itex]
  6. We can write this in the form [itex]P(O_A|\alpha) = \sum_\lambda F_1(\lambda) F_A(O_A, \alpha, \lambda)[/itex], where [itex]F_A[/itex] is formed from [itex]F_2, F_3, F_4[/itex] by summing over the states [itex]\sigma_A', \sigma_{AD}[/itex].
  7. Similarly, for Bob, the probability of his outcome [itex]O_B[/itex] given his setting [itex]\beta[/itex] can be written as [itex]P(O_B|\beta) = \sum_\lambda F_1(\lambda) F_B(O_B, \beta, \lambda)[/itex].

So, you get the same form for joint probabilities that Bell assumed.

The only assumption that Bell is making is that whatever stochastic process is going on behind the scenes, probabilities of transitions depend only on local states.
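The factorized form in steps 6 and 7 can be checked numerically: whatever distributions one invents for [itex]F_1, F_A, F_B[/itex], the resulting correlations obey the CHSH bound |S| ≤ 2. Below is a minimal sketch with arbitrary random distributions (all names and numbers are made up for illustration):

```python
import random

random.seed(0)
N_LAM = 20  # number of hidden-variable values (arbitrary)

# A random distribution F1 over lambda
w = [random.random() for _ in range(N_LAM)]
F1 = [x / sum(w) for x in w]

# For each (setting, lambda) pair, a random probability of the outcome +1
def make_response(n_settings=2):
    return {(s, lam): random.random()
            for s in range(n_settings) for lam in range(N_LAM)}

pA, pB = make_response(), make_response()

def E(a, b):
    """Correlation E(a,b) = sum_lam F1(lam) * <A>_lam * <B>_lam."""
    total = 0.0
    for lam in range(N_LAM):
        mean_A = 2 * pA[(a, lam)] - 1  # expectation of a +/-1 outcome
        mean_B = 2 * pB[(b, lam)] - 1
        total += F1[lam] * mean_A * mean_B
    return total

# CHSH combination: any factorized ("local") model keeps |S| <= 2,
# while quantum mechanics reaches 2*sqrt(2) for suitable settings.
S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(abs(S))
```

The bound holds for any choice of the random numbers here, since for each λ the four products of per-λ expectations combine to at most 2 in magnitude.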
 
  • #25
stevendaryl said:
I don't find Bell's reasoning mysterious, at all.

Here's a characterization of a locally realistic theory. [..]
That characterization doesn't show how it connects to a specific probability function. However, you make that clearer in your next post:

stevendaryl said:
[..] You could imagine a more sophisticated local variables description of the twin-pair EPR experiment that goes like this:

  1. A twin-pair is created in a state labeled by a hidden variable [itex]\lambda[/itex]. There is a probability distribution [itex]F_1(\lambda)[/itex] governing the possible values of [itex]\lambda[/itex]. The two particles have states [itex]\sigma_A(\lambda), \sigma_B(\lambda)[/itex].
  2. One particle travels to Alice's detector. Along the way, its state changes from [itex]\sigma_A(\lambda)[/itex] to [itex]\sigma_A'[/itex], according to another probability distribution [itex]F_2(\sigma_A', \sigma_A(\lambda))[/itex] that depends on the initial state of the particle, [itex]\sigma_A(\lambda)[/itex], and the final state, [itex]\sigma_A'[/itex], but specifically does not depend on what Alice or Bob do at their detectors.
  3. Meanwhile, Alice picks a detector setting [itex]\alpha[/itex], and her detector makes a possibly nondeterministic transition to a state [itex]\sigma_{AD}[/itex] according to a probability distribution [itex]F_3(\sigma_{AD}, \alpha)[/itex] that depends on the final state of the detector and the setting chosen by Alice (but not on Bob's settings, and not on the state of the particles).
  4. When the particle is finally close enough to interact with Alice's detector, the measurement produces some outcome [itex]O_A[/itex] that depends on the state of Alice's detector and on the state of the particle at the time of interaction, according to yet another probability distribution [itex]F_4(O_A, \sigma_{AD}, \sigma_A')[/itex].
  5. So the conditional probability of Alice getting outcome [itex]O_A[/itex] given that she picks setting [itex]\alpha[/itex] is:
    [itex]P(O_A|\alpha) = \sum_\lambda \sum_{\sigma_A'} \sum_{\sigma_{AD}} F_1(\lambda) F_2(\sigma_A', \sigma_A(\lambda)) F_3(\sigma_{AD}, \alpha) F_4(O_A, \sigma_{AD}, \sigma_A')[/itex]
  6. We can write this in the form [itex]P(O_A|\alpha) = \sum_\lambda F_1(\lambda) F_A(O_A, \alpha, \lambda)[/itex], where [itex]F_A[/itex] is formed from [itex]F_2, F_3, F_4[/itex] by summing over the states [itex]\sigma_A', \sigma_{AD}[/itex].
  7. Similarly, for Bob, the probability of his outcome [itex]O_B[/itex] given his setting [itex]\beta[/itex] can be written as [itex]P(O_B|\beta) = \sum_\lambda F_1(\lambda) F_B(O_B, \beta, \lambda)[/itex].

So, you get the same form for joint probabilities that Bell assumed.

The only assumption that Bell is making is that whatever stochastic process is going on behind the scenes, probabilities of transitions depend only on local states.
Ok, thanks, that makes it clearer to me! Apparently what I thought to be two different issues may actually be the same issue, or at least connected. That brings us back to Bell's factorization, which was based on intuition.

Yesterday evening I read some chapters in Jaynes' book in which he explains by means of some examples why in such complex cases one should never rely on intuition, but instead form one's intuition by means of strictly applying the rules.

I'll come back to that issue after studying some more - not only do I need to refresh my memory of probability theory, I'll also have to dig deeper than before.
 
  • #26
harrylin said:
Yesterday evening I read some chapters in Jaynes' book in which he explains by means of some examples why in such complex cases one should never rely on intuition, but instead form one's intuition by means of strictly applying the rules.

I think that there are two different, although closely related, uses of probability. One is in terms of probabilistic inference. In a probabilistic inference, you're given some collection of facts, and then you're asked to compute the probability of another fact being true. The original collection of facts can be pretty much arbitrary. You can ask questions such as: "What's the probability that Hillary Clinton will become President, given that Assad's government in Syria falls by 2014?"

Another use of probability is in the theory of stochastic processes. An initial state is chosen randomly from some set of possible initial states, then the state evolves according to a probabilistic evolution equation. It's in the context of such a stochastic process that (I believe) Bell's assumptions about factorizability apply. Not in the general case of probabilistic inference. In the case of stochastic processes, all probabilities are some combination of ignorance about what the current state of the system is, and "intrinsic" probability inherent in the stochastic evolution equations. The condition for locality for such a theory is pretty straight-forward: The probabilistic evolution equation for the "local state" at point [itex]P[/itex] can only depend on the "local state" for nearby points. I don't think that any such local stochastic process can violate Bell's factorizability condition.

Jaynes, and others, I think are talking about more general types of probabilistic inference. It's very difficult (or at least, I don't know how to do it) to say under what circumstances a certain collection of facts and conditional probabilities implies something nonlocal going on, in this general case.
 
  • #27
stevendaryl said:
I think that there are two different, although closely related, uses of probability. One is in terms of probabilistic inference. [..] You can ask questions such as: "What's the probability that Hillary Clinton will become President, given that Assad's government in Syria falls by 2014?"

Another use of probability is in the theory of stochastic processes. An initial state is chosen randomly from some set of possible initial states, then the state evolves according to a probabilistic evolution equation. It's in the context of such a stochastic process that (I believe) Bell's assumptions about factorizability apply. Not in the general case of probabilistic inference.[..] I don't think that any such local stochastic process can violate Bell's factorizability condition.

Jaynes, and others, I think are talking about more general types of probabilistic inference. It's very difficult (or at least, I don't know how to do it) to say under what circumstances a certain collection of facts and conditional probabilities implies something nonlocal going on, in this general case.
I was "out" of this for one week, so I haven't progressed with figuring out how the different criticisms may relate to the same issue, and with putting my finger on that issue, so to say. But Jaynes explains in his book that the general use includes stochastic processes - it really covers everything. For example, one can ask what the chance is of throwing a 6, given that it is a fair dice with 6 faces and the dice is well thrown. The general use of probabilistic inference gives the answer that if the starting assumptions are correct, then over time all six faces should come up about equally often.
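The die example can be put to a quick Monte Carlo check (purely illustrative; the 60,000-roll count and seed are arbitrary choices):

```python
import random
from collections import Counter

random.seed(1)
N = 60_000

# Simulate a fair, well-thrown six-sided die and tally the faces
rolls = Counter(random.randint(1, 6) for _ in range(N))
freqs = {face: count / N for face, count in rolls.items()}

print(freqs)  # each frequency comes out near 1/6
```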
 
  • #28
harrylin said:
I was "out" of this for one week, so I haven't progressed with figuring out how the different criticisms may relate to the same issue, and with putting my finger on that issue, so to say. But Jaynes explains in his book that the general use includes stochastic processes - it really covers everything. For example, one can ask what the chance is of throwing a 6, given that it is a fair dice with 6 faces and the dice is well thrown. The general use of probabilistic inference gives the answer that if the starting assumptions are correct, then over time all six faces should come up about equally often.

That's what I was calling "probabilistic inference", which I agree is more general than stochastic processes. My point is that it's not simple at all to figure out what implications general probabilistic inference has for locality. Your example: "given that it is a fair dice with 6 faces" illustrates the problem. How did you come to know that it is a fair dice?

I think for the special case of stochastic processes, we can say what it means for the probabilities to be generated through local, realistic dynamics, but for the more general case of probabilistic inference, I don't think we can easily say.
 
  • #29
stevendaryl said:
[..] I think for the special case of stochastic processes, we can say what it means for the probabilities to be generated through local, realistic dynamics, but for the more general case of probabilistic inference, I don't think we can easily say.
Yes, I agree of course. However, we likely don't need to go there; we may just consider imaginable models. That appears to suffice to follow the debates in the current literature. And concerning that, I just had a personal breakthrough: for the first time I came to understand part of the debate of Hess et al. versus Gill et al. This is thanks to an arXiv paper by Myrvold:
http://arxiv.org/abs/quant-ph/0205032

In that paper some arguments by Hess et al are explained in simplified form, complete with a probability space example. Surprisingly, some arguments by Myrvold look fundamentally wrong to me. And next I found a reaction by Hess et al which clarifies what I already understood from the paper by Myrvold. :smile:

Now, since all that is merely a discussion taking place on arXiv and not in the peer-reviewed literature, it's not suited for re-discussion on this forum (which means that I'll abstain from further comments); but I think that Myrvold's paper will help me (and perhaps also some others) to understand more of the peer-reviewed criticisms and counter-criticisms of Bell's assumptions.
 

1. What is Bell's lambda?

In Bell's theorem, λ stands for the hidden variables: any additional parameters - a single variable, a set of variables, or even a set of functions - that a locally causal theory might use to determine or influence the outcomes of measurements on an entangled pair.

2. What is the problem with Bell's lambda?

λ is never specified; Bell only assumes that it exists and that, given λ, the joint outcome probabilities factorize into local parts: P(A,B|a,b,λ) = P1(A|a,λ) P2(B|b,λ). Whether this factorization really captures every local model is exactly what is debated in this thread.

3. Why is Bell's lambda important?

Because the derivation of Bell's inequality rests on it. If experiments violate the inequality, then no theory of this factorized, local form can reproduce the quantum mechanical predictions.

4. How does Bell's lambda enter the calculation?

The observable predictions are obtained by averaging over λ with a distribution ρ(λ) that is independent of the detector settings a and b: P(A,B|a,b) = ∫ dλ ρ(λ) P1(A|a,λ) P2(B|b,λ).

5. Can violations of Bell's inequality rule out all hidden variable theories?

No. They rule out only theories satisfying the locality (factorization) condition; non-local hidden variable theories, such as Bohmian mechanics, remain compatible with the quantum predictions.
