Bell's derivation; socks and Jaynes

  • Thread starter: harrylin
  • Tags: Derivation
  • #51
Delta Kilo said:
It does it if you don't know λ: P(A|Bab) ≠ P(A|ab)
But for a given a,b,λ it doesn't. P(A|Babλ) = P(A|aλ) = { 1: A=A(a,λ), else 0 }

Well of course, if you have a completely deterministic theory (which Bell's "local realistic" theory is), and you have complete knowledge of initial conditions, then you have complete knowledge of outcomes. I was talking about the case (which is the only case of real interest if we're trying to compare a "local realistic" theory in Bell's sense with QM) where we don't know λ, since that's the case Bell and Jaynes are discussing.

(And of course the actual QM probabilities do *not* factorize as above; that is, there is *no* "local realistic", in Bell's sense, set of hidden variables λ that allows perfect prediction of outcomes.)

I'm not disputing that "λ makes Bell's factorization possible", and I don't think Jaynes was either. As I've said in previous posts, I think Jaynes was saying that requiring there to be some such set of hidden variables λ might not be the correct definition of "local realism".
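To make the contrast concrete, here is a small numerical sketch (my own toy model, not anything from Bell or Jaynes): with λ unknown, learning B shifts the odds for A, while for any fixed λ the outcome at A is a definite function of (a, λ) alone, so conditioning on B changes nothing.

```python
import random

random.seed(0)

# toy deterministic "local realistic" model: both outcomes fixed by lambda
def A(a, lam):
    return 1 if (a + lam) % 2 == 0 else -1

def B(b, lam):
    return 1 if (b + lam) % 2 == 0 else -1

a, b = 0, 0
runs = [(lam, A(a, lam), B(b, lam))
        for lam in (random.randint(0, 1) for _ in range(100_000))]

# lambda unknown: P(A|B,a,b) differs from P(A|a,b)
p_A = sum(x == 1 for _, x, _ in runs) / len(runs)
p_A_given_B = (sum(x == 1 for _, x, y in runs if y == 1)
               / sum(y == 1 for _, _, y in runs))
print(round(p_A, 2), p_A_given_B)   # about 0.5 vs exactly 1.0

# lambda known: P(A|B,a,b,lambda) = P(A|a,lambda) is already 0 or 1
for lam in (0, 1):
    sub = [x for l, x, _ in runs if l == lam]
    p_cond = sum(x == 1 for x in sub) / len(sub)
    assert p_cond in (0.0, 1.0)
```

This is only an illustration of the conditional-probability point in the posts above, not a model of the actual quantum correlations.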
 
  • #52
This may just be because I don't grasp Jaynes' argument, but it seems to me that there is no need to go deep into the weeds concerning the mathematics of conditional probabilities. As far as I know, proofs of Bell's theorem (except Bell's original) generally do not even depend on the notion of conditional probability. What is Jaynes' fundamental explanation for the experimental fact that there seem to be nonlocal correlations between measurements of entangled particles, of a kind different from the correlations that could arise just from the local sharing of hidden variables between the two particles? Phrased in this way, all the thorny issues of Bayesian probability inference and the like go out the window.
 
  • #53
lugita15 said:
This may just be because I don't grasp Jaynes' argument, but it seems to me that there is no need to go deep in the weeds concerning the mathematics of conditional probabilities. As far as I know, proofs of Bell's theorem (except Bell's original) generally do not even depend on the notion of conditional probability.

I believe there are Bell-type results that do not involve probabilities. For example, suppose there were a scenario in which "local realism" would require, not just that probabilities obey certain inequalities, but that certain measurement results simply could *not* happen at all, whereas QM would predict that they could. I seem to remember reading about one such scenario constructed by Roger Penrose using spin-3/2 particles in The Emperor's New Mind, but I don't have my copy handy to check. As far as I can see, Jaynes' arguments wouldn't apply at all to such a scenario.
 
  • #54
PeterDonis said:
Well of course, if you have a completely deterministic theory (which Bell's "local realistic" theory is), and you have complete knowledge of initial conditions, then you have complete knowledge of outcomes. I was talking about the case (which is the only case of real interest if we're trying to compare a "local realistic" theory in Bell's sense with QM) where we don't know λ, since that's the case Bell and Jaynes are discussing.

Well, the theory doesn't have to be deterministic. λ can include any number of random variables (in fact it is expected to include some). Since the factorization works for any given value of λ, we don't need to know it; we just need to make sure λ exists. Specifically, we assume there exists a probability distribution ρ(λ) independent of a and b: ρ(λ|ab) = ρ(λ) (of course A(a,λ) and B(b,λ) must exist as well; they only make sense together).

It is hard to tell where the breakdown occurs, but we can guess. According to Bell, λ can be thought of as all relevant laws of physics and all relevant initial conditions with the exception of the values a and b. If we allow A to depend on b, A=A(a,b,λ), then everything clicks into place and we have a working model (QM). That suggests the problem is not with the general setup but specifically with factorizing a and b out of λ, that is, with the local realism assumption.
 
  • #55
PeterDonis said:
[..] We don't even understand why quantum measurements work the way they do for spin measurements on *single* particles. I take a stream of electrons all of which have come from the "up" beam of a Stern-Gerlach measuring device. I put them all through a second Stern-Gerlach device oriented left-right. As far as I can tell, all the electrons in the beam are the same going into the second device, yet they split into two beams coming out. Why? What is it that makes half the "up" electrons go left and half go right? Nobody knows.[..]
Yes, and that's why Bell didn't use electrons for his argument. But he dropped Bertlmann's socks and instead gave an illustration with Boole's Lille and Lyon. However, some of us briefly discussed a Lille-Lyon counterexample in a thread that I started a long time ago, but none of us appreciated it much; perhaps Lille-Lyon doesn't capture the detector-setting aspect well. It would be more interesting to try an adapted variant of Bertlmann's socks.
So, here's the intro of an example that I had in mind. It's a shot in the dark, as I don't know the outcome concerning Bell vs. Jaynes (likely it will support Bell, which would "weaken" Jaynes, but I can imagine that it could by chance "invalidate" Bell):

A group of QM students attends classes given by Prof. Bertlmann. It's an intensive course with Morning class, Afternoon class and Evening class. The students wonder if Bell's story could actually be true and Bertlmann really wears different socks. However, Bertlmann happens to wear long trousers, and when he goes to sit behind his desk, his socks are out of sight.

Never mind: one student knows a little electronics and makes two devices with LEDs to illuminate the socks and light detectors to determine if each sock is light or dark. He hides them on both sides under the desk, aiming at where Bertlmann's socks should appear. With a wireless control he can secretly do a measurement at the press of a button, and the result is then indicated by two LEDs that are visible to the students but out of sight for Bertlmann. The next morning he fiddles a bit with the settings and then they wait for Bertlmann [to be continued]

Would such a scenario correspond to post #32 of IsometricPion? I intend to let Morning, Afternoon and Evening be selected by the students, as a and b.
 
  • #56
harrylin said:
Yes, and that's why Bell didn't use electrons for his argument.

I'm not sure what you mean by this. He certainly used electrons to derive the *quantum* probabilities, which are what I was talking about in the passage you quoted. Bertlmann's socks, and the heart attacks in Lille and Lyon, are stipulated to be classical objects; there is nothing in their behavior corresponding to the behavior of electrons that undergo successive spin measurements in different directions. That's the point.
 
  • #57
PeterDonis said:
I'm not sure what you mean by this. He certainly used electrons to derive the *quantum* probabilities, which are what I was talking about in the passage you quoted. Bertlmann's socks, and the heart attacks in Lille and Lyon, are stipulated to be classical objects; there is nothing in their behavior corresponding to the behavior of electrons that undergo successive spin measurements in different directions. That's the point.
That's my (and I think also your) point: it didn't make much sense for Bell to use electrons as an example to defend the validity of his separation of terms; he had to use an example that we can understand - and he chose Lille-Lyon for that.
Now, his socks example is too simple, and none of us appreciated his Lille-Lyon example much when De Raedt presented a variant of it as a counterexample. And I think that we all agree that Jaynes' example is also insufficient. Thus, it may be more instructive to improve Bertlmann's socks example into something like Lille-Lyon. My example keeps the physical separation and adds complexity as well as a certain "weirdness" of observed correlations at varying detector parameters. Only, I was extremely busy until today, so I have not yet worked out the probabilities. o:) It's just a shot in the dark.:-p
 
  • #58
IsometricPion said:
[..] So, given this interpretation of local realism (which seems to be consistent with that expressed in Bell's paper) P(AB|a,b,λ)=P(A|B,a,b,λ)P(B|a,b,λ)=P(B|A,a,b,λ)P(A|a,b,λ)=P(A|a,λ)P(B|b,λ).

I am now starting to study the outcomes of my little thought experiment in a spreadsheet, and it immediately gets interesting as I can now attach much more meaning to the symbols and how they are used. Do you agree that the bold term should also apply to my example?

But then I encounter trouble! For what Bell does next (in his socks paper; it's immediate in his first paper) is to multiply that term with dλ ρ(λ) [eq. 11+12]. It looks to me that for every increment dλ there is a single λ, which appears to be a fixed set of variables because of Bell's "probability distribution" ρ(λ). That sounds pretty much fixed to me for the total experiment of many runs. If not, can someone please explain what the "probability distribution" ρ(λ) exactly means?
 
  • #59
harrylin said:
Do you agree that the bold term should also apply on my example?
Yes, assuming a local realistic theory for predicting the color of Bertlmann's socks (which I would say is the only intuitive kind in such ordinary situations).
harrylin said:
It looks to me that for every increment dλ there is a single λ, which appears to be a fixed set of variables because of Bell's "probability distribution" ρ(λ). That sounds pretty much fixed to me for the total experiment of many runs. If not, can someone please explain what the "probability distribution" ρ(λ) exactly means?
I think you're correct. In Bell's eq. 11 it is assumed that one knows the values of the variables that make up λ. His eq. 12 incorporates the fact that in actual experiments λ is not known, so he multiplies the joint outcome probability by ρ(λ), the probability density for λ, and integrates with respect to λ to remove it from the equations. There is nothing that intrinsically prevents ρ(λ) from varying between runs in "real life". However, if one is numerically simulating the ensemble distribution of results of an experiment (which is what I assume you are doing), it should not be allowed to vary between runs.
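A discrete toy version of that step may help (all probabilities and the two-valued λ here are made up purely for illustration): ρ(λ) weights the λ-conditional factorized probabilities, and summing over λ removes λ from the equations, leaving a proper joint distribution P(AB|a,b).

```python
# hypothetical two-valued hidden variable with distribution rho(lambda)
rho = {0: 0.25, 1: 0.75}

def P_A(A, a, lam):               # P(A|a,lambda), outcomes A in {+1,-1}
    p = 0.9 if (a + lam) % 2 == 0 else 0.2
    return p if A == 1 else 1 - p

def P_B(B, b, lam):               # P(B|b,lambda)
    p = 0.7 if (b + lam) % 2 == 0 else 0.4
    return p if B == 1 else 1 - p

def P_AB(A, B, a, b):
    # discrete analogue of Bell's eq. (12): weight the factorized
    # probability by rho(lambda) and sum lambda away
    return sum(rho[lam] * P_A(A, a, lam) * P_B(B, b, lam) for lam in rho)

# total probability over the four outcome pairs is 1, lambda is gone
total = sum(P_AB(A, B, 0, 1) for A in (1, -1) for B in (1, -1))
print(round(total, 10))           # 1.0
```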
 
  • #60
IsometricPion said:
Yes, assuming a local realistic theory for predicting the color of Bertlmann's socks (which I would say is the only intuitive kind in such ordinary situations).
Yes, the measurements do not affect each other ("no action at a distance"). BTW, he is in reality wearing ordinary socks. :smile:
I think you're correct. In Bell's eq. 11 it is assumed that one knows the values of the variables that make up λ. His eq. 12 incorporates the fact that in actual experiments λ is not known so he multiplies the joint outcome probability by ρ(λ), the probability density for λ, and integrates with respect to λ to removing it from the equations. There is nothing that intrinsically prevents ρ(λ) from varying between runs in "real life". However, if one is numerically simulating the ensemble distribution of results of an experiment (which is what I assume you are doing), it should not be allowed to vary between runs.
I'm about to start doing that (I need to add more columns and add a function etc.). So, I'm puzzled by your last remark; why should "real life" not be allowed in a Bell type calculation of reality? :confused:

PS. I guess that he wants to calculate the outcome for any (a, b) combination for all possible "real life" λ (thus all possible x), taking into account their frequency of occurrence. It seems plausible that λ (thus (x1,x2)) is different from one set of pair measurements to the next, and now it looks to me that Bell does account for that possibility (but can one treat anything as just a number?). And I suppose that according to Bell the total function of λ (thus X) cannot vary from one total experiment to the next, as the results are reproducible. Is that what you mean?
 
  • #61
harrylin said:
So, I'm puzzled by your last remark; why should "real life" not be allowed in a Bell type calculation of reality?

PS. I guess that he wants to calculate the outcome for any (a, b) combination for all possible "real life" λ (thus all possible x), taking into account their frequency of occurrence. It seems plausible that λ (thus (x1,x2)) is different from one set of pair measurements to the next, and now it looks to me that Bell does account for that possibility (but can one treat anything as just a number?). And I suppose that according to Bell the total function of λ (thus X) cannot vary from one total experiment to the next, as the results are reproducible. Is that what you mean?
I suppose "real life" was a bad choice of words. At the time I was thinking of systematic effects that changed ρ(λ) from run to run but left P(AB|a,b) the same. Now that I have thought about it some more, I think a better way to put it is that if one were doing a time or space average (as would be necessary when simulating actual experimental runs, since they do not occur at the same points in space-time), ρ(λ) could vary from run to run and experiment to experiment (as long as P(AB|a,b) stays the same, these would describe setting up indistinguishable experiments/runs). When producing different outcomes to obtain an ensemble distribution for a single run, ρ(λ) is fixed, since it is part of the initial/boundary conditions of the run.

It is essentially the difference between a time-average and an ensemble average.

There is nothing preventing one from asserting from the start that ρ(λ) is the same for all experiments and experimental runs; it is merely a (reasonable) restriction on the set of hidden variable theories under consideration (which is almost certain to be necessary in order to make the analysis tractable).
 
  • #62
IsometricPion said:
[..] ρ(λ) could vary from run to run and experiment to experiment (as long as P(AB|a,b) stays the same these would describe setting up indistinguishable experiments/runs). When producing different outcomes to obtain an ensemble distribution for a single run, ρ(λ) is fixed since it is part of the initial/boundry conditions of the run. [..]
I'm not sure that I want to go there (at least, not yet); my problem is much more basic. It looks to me that for such an integration to be possibly valid, p(λ) - I mean P(xA,xB) - should be the same for different combinations of a and b. Isn't that a requirement?
 
  • #63
harrylin said:
It looks to me that for such an integration to be possibly valid, p(λ) - I mean P(xA,xB) - should be the same for different combinations of a and b. Isn't that a requirement?
I am not entirely sure what you are asking (what do the x's stand for?). If you are asking about the dependence of ρ(λ) on the settings of the detectors a,b then the answer is yes, it should remain the same. This is because ρ(λ) can only depend on λ, which is required to be independent of a,b.
 
  • #64
Well, ρ(λ) should not change from one run to another, otherwise you won't get repeatable results (I mean repeatable statistics for long runs of course, not repeatable single outcomes). If ρ(λ) does vary, it just means some random factor ζ has not been accounted for, it needs to be lumped into λ'={λ,ζ}, then ρ becomes joint distribution ρ(λ') = ρ(λ,ζ).
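The "lump ζ into λ" move can be made concrete with a small sketch (the response probabilities here are toy numbers of my own invention): any stochastic response given λ can be rewritten as a deterministic function of an extended hidden variable λ' = (λ, ζ), with the private coin ζ absorbed into the hidden state, and the observable statistics are unchanged.

```python
import random

random.seed(1)
N = 100_000

def p(a, lam):                    # hypothetical success probability P(A=+1|a,lambda)
    return 0.8 if (a + lam) % 2 == 0 else 0.3

def A_stochastic(a, lam):         # model with leftover randomness
    return 1 if random.random() < p(a, lam) else -1

def A_deterministic(a, lam_ext):  # same model, coin absorbed: lambda' = (lambda, zeta)
    lam, zeta = lam_ext
    return 1 if zeta < p(a, lam) else -1

a = 0
f_stoch = sum(A_stochastic(a, random.randint(0, 1)) == 1
              for _ in range(N)) / N
f_det = sum(A_deterministic(a, (random.randint(0, 1), random.random())) == 1
            for _ in range(N)) / N
print(round(f_stoch, 2), round(f_det, 2))  # both about (0.8 + 0.3)/2 = 0.55
```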
 
  • #65
@ Delta Kilo : Yes, that sounds reasonable, but I have a problem already with one full statistical experiment.
IsometricPion said:
[..] If you are asking about the dependence of ρ(λ) on the settings of the detectors a,b then the answer is yes, it should remain the same. This is because ρ(λ) can only depend on λ, which is required to be independent of a,b.
I'm not sure that your suggestion can actually be applied in reality to all types of λ. Even if each λ=(xA,xB) is independent of a and b, it seems to me that the probability distribution of the λ that play a role in the measurements could be affected by choices of a and b. Perhaps I'm just seeing problems that don't exist, or perhaps I'm arriving at the point that Jaynes and others actually were getting at, but didn't explain well enough.*

And certainly Bell didn't properly defend that his integration is compatible with all possible types of λ. He simply writes in his socks paper: "We have to consider then some probability distribution ρ(λ)", but he doesn't prove the validity of that claim.

So, it may be best that I now give my example together with a small selection of results (later today I hope), and then try to work it out, perhaps with the help of some of you.

*PS: I'm now re-reading Jaynes and it does look as if his eq.15 exactly points at the problem that I now encounter.
 
  • #66
lugita15 said:
Speaking of this paper, does anyone know what Jaynes is talking about at the end of page 14 and going on to page 15, concerning "time-alternation theories"? He seems to be endorsing a local realist model which makes predictions contrary to QM, and he claims that experiments performed by "H. Walther and coworkers on single atom masers are already showing some resemblance to the technology that would be required" to test such a theory. Does anyone know whether such a test has been performed in the decades since he wrote his paper?
I don't know what test he is talking about, but it appears to refer to slightly different predictions - and perhaps that's another way that this paradox could be solved. Take for example special relativity: would you say that general relativity is "contrary" to it? Moreover, many Bell-type experiments do not exactly reproduce the simplified QM predictions as often portrayed, which completely neglect correlation in time, selection of entangled pairs, etc.
Anyway, I'm here still at the start of Bell's derivation, which corresponds to Jaynes' point 1. :-p
 
  • #67
harrylin said:
I'm not sure that your suggestion can actually be applied in reality to all types of λ. Even if each λ=(xA,xB) is independent of a and b, it seems to me that the probability distribution of the λ that play a role in the measurements could be affected by choices of a and b.
Well, that's too bad, that was the whole point of the exercise :) Say, you have 2 photons flying from the source in opposite directions. The source generates λ and each photon carries this λ (or part of it, or some function of it, doesn't matter) with it. Once they fly apart, each photon is on its own as there is no way for any 'local realistic' (≤c) influence to reach one photon from another. Parameters a and b are chosen by experimenters and programmed into detectors while the photons are in mid-flight, again there is no way for the influence from parameter a to affect λ carried by photon B before it hits detector (and vice versa). When the photon hits detector, the outcome is determined by the λ carried by this photon and local parameter a or b.

BTW what are xA and xB exactly?

harrylin said:
And certainly Bell didn't sufficiently defend properly that his integration is compatible with all possible types of λ. He simply writes in his socks paper: "We have to consider then some probability distribution ρ(λ)", but he doesn't prove the validity of that claim.
Well, if someone does come up with a working Bell-type local realistic theory, they'd better have a proper probability distribution ρ(λ) (and being proper includes ρ(λ)≥0, ∫ρ(λ)dλ=1), otherwise how are they going to calculate probabilities of the outcomes?
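For what it's worth, once a local model is cast in this form - A(a,λ), B(b,λ) and a proper ρ(λ) - its correlations obey the CHSH bound. Here is a Monte Carlo sketch with one made-up deterministic model (λ a shared random angle on the circle); its CHSH combination sits at 2, while QM reaches 2√2 ≈ 2.83 at these same angles.

```python
import math
import random

random.seed(2)

def A(a, lam):                    # detector outcome: sign of cos(lam - a)
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):                    # perfectly anticorrelated partner
    return -A(b, lam)

# lambda: shared hidden angle, uniform on [0, 2*pi) -- a "proper" rho(lambda)
lams = [random.uniform(0, 2 * math.pi) for _ in range(100_000)]

def E(x, y):                      # correlation estimate over the shared lambdas
    return sum(A(x, lam) * B(y, lam) for lam in lams) / len(lams)

# CHSH combination at the angles where QM would give 2*sqrt(2)
a1, a2 = 0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = abs(E(a1, b1) + E(a2, b1) + E(a2, b2) - E(a1, b2))
print(round(S, 6))                # 2.0: the bound is saturated, never exceeded
```

Because all four correlators are estimated on the same λ samples, the pointwise identity |A1B1 + A2B1 + A2B2 - A1B2| ≤ 2 guarantees S ≤ 2 here, whatever local model is plugged in.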
 
  • #68
Delta Kilo said:
Well, that's too bad, that was the whole point of the exercise :) Say, you have 2 photons flying from the source in opposite directions. The source generates λ and each photon carries this λ (or part of it, or some function of it, doesn't matter) with it. Once they fly apart, each photon is on its own as there is no way for any 'local realistic' (≤c) influence to reach one photon from another. Parameters a and b are chosen by experimenters and programmed into detectors while the photons are in mid-flight, again there is no way for the influence from parameter a to affect λ carried by photon B before it hits detector (and vice versa). When the photon hits detector, the outcome is determined by the λ carried by this photon and local parameter a or b.
That is a typical example of the kind of non-spooky model that Bell already knew would not work. He claimed to be completely general in his derivation, in order to prove that no non-spooky model that reproduces QM is possible - including the ones that he couldn't think of. Otherwise the exercise would have been of little use. :rolleyes:
BTW what are xA and xB exactly?
x is just my notation of the value of λ at an event (here at event A, and at event B), which I introduced early in this discussion for a clearer distinction with the total unknown X during the whole experiment.
Well, if someone does come up with working Bell-type local realistic theory, they'd better have proper probability distribution ρ(λ), (and being proper includes ρ(λ)≥0, ∫ρ(λ)dλ=1), otherwise how are they going to calculate probabilities of the outcomes?
That's not what I meant; I think that the probability distribution of the λ that correspond to a choice of A,B,a,b for the data analysis (and which thus are unwittingly selected along with that choice) could depend on that choice of data. It appears to me that Bell's integration doesn't allow that.
 
  • #69
harrylin said:
That's not what I meant; I think that the probability distribution of the λ that correspond to a choice of A,B,a,b for the data analysis (and which thus are unwittingly selected along with that choice) could depend on that choice of data.

If you mean probability distribution ρ(λ) depends on the experimental setup, including functions A(a,λ) and B(b,λ), along with their domain, that is, the set of all possible values of a and b, then yes. A(a,λ), B(b,λ), ρ(λ) come together as a package. If you mean ρ(λ) depends on the specific values chosen for a and b in a given run, then certainly no: a and b do not exist yet when ρ(λ) is used to generate a new λ for the run, that's the point.
 
  • #70
Delta Kilo said:
[..] a and b do not exist yet when ρ(λ) is used to generate new λ for the run, that's the point.
That has nothing to do with the unknowingly selected λ for analysis - and anyway, Bell's point is that λ is not restricted, else his derivation would be of little interest. He stresses in the appendix:
nothing is said about the locality, or even localizability, of the variables λ. These variables could well include, for example, quantum mechanical state vectors, which have no particular localization in ordinary space time. It is assumed only that the outputs A and B, and the particular inputs a and b, are well localized.
 
  • #71
Bell does not say that explicitly but it follows from
Bell said:
The vital assumption [2] is that the result B for particle 2 does not depend on setting a, of the magnet of particle 1, nor A on b
Since A is a function of lambda, if lambda is allowed to depend on b then A would also depend on b.
 
  • #72
Delta Kilo said:
Bell does not say that explicitly but it follows from Since A is a function of lambda, if lambda is allowed to depend on b then A would also depend on b.
Not necessarily: P(λ) is not λ, and I suspect that P(λ) at B could depend on b without any effect on A. Anyway, I've now made progress with my example and it doesn't look too bad, so I'll start presenting it now.

PS. Oops, there was still something wrong with it... maybe later!
 
  • #73
harrylin said:
Not necessarily: P(λ) is not λ, and I suspect that P(λ) at B could depend on b without any effect on A.
There is no such thing as "P(λ) at B", there is only one λ for each run, randomly chosen using P(λ). P(λ) is the same as "P(λ) at A" same as "P(λ) at B" and therefore it cannot depend on either a or b due to locality constraints.
 
  • #74
Now I'm messing a bit with the inequality as used by billschnieder last year; apparently it was based on:

1 + <bc> >= |<ab> - <ac>|

- https://www.physicsforums.com/showthread.php?t=499002&page=6

However, that looks a little different from the CHSH inequality that Bell presents in his socks paper... and [update:] I have now found that it corresponds to Bell's original inequality. As it appears to be the simplest, I will use that.

@ Delta Kilo: I even cited for you how Bell explained that there is no such locality constraint on λ. Indeed, that would not be reasonable. Once more:
"only [..] the outputs A and B, and the particular inputs a and b, are well localized."
 
  • #75
So, here's the example that I had in mind. It was a shot in the dark and I'm still not sure about the outcome concerning Bell vs. Jaynes. However, with minor fiddling I already obtained a result that looks interesting. I may very well have made an error; if anyone notices errors in the data analysis, I'll be grateful to hear it.

A group of QM students attends classes given by Prof. Bertlmann. It's an intensive course with morning class, afternoon class and evening class. The students wonder if Bell's story could actually be true, and Bertlmann really wears different socks. However, the professor happens to wear long trousers, and when he goes to sit behind his desk, his socks are out of sight.

But they are creative, and one student, let's call him Carlos, looks for simple electronics designs on the web and makes two devices, each with an LED to illuminate the socks and a light detector to determine if the sock is white or black. Carlos finishes soldering in the late evening and hurries off to the classroom, where he hides the devices on both sides under the desk, aiming at where Bertlmann's socks should appear. With a wireless control he can secretly do a measurement with the press of a button, and the result is then indicated by two indicator LEDs that are visible to the students, but out of sight for Bertlmann.

The next morning Bertlmann comes in, talks a while and then sits down while they do a QM exercise. Now Carlos presses the button and both LEDs light up. He interprets that to mean that both socks are white and writes down "1,1". A bit of an anti-climax, really. Never mind; after Bertlmann leaves, he resets the detectors and decides to leave them in place.

During afternoon class Carlos hits the button once more, and what a surprise: both lights stay out - this time it's "0,0". That is very puzzling as nobody had seen Prof. Bertlmann change his socks and he had been eating with his students - in fact he was in sight all the time.

Very Spooky! :eek:

The students discuss what to do next. They decide to do measurements over 10 days and then analyse them with Bell's method. One day corresponds to the measurement of one pair of socks, and the time of day plays the role of detector angle. They found that for identical settings the left and right LEDs gave the same signal. Thus for simplicity only the data of one side is given here, with a, b, c for morning, afternoon, evening:

a b c
0 0 0
1 1 0
1 1 0
1 1 0
0 0 0
1 1 0
0 0 0
1 0 1
0 1 0
1 1 1

0.6 0.6 0.2 (averages)
0.47 (total average)*


After replacing all the 0 by -1, they use the original Bell inequality:

1 + <bc> >= |<ab> - <ac>|

Taking <bc> of day 1, <ab> of day 2, <ac> of day 3 and so on they obtain:
0.67 >= 1.33

Alternatively (but not clear if that is allowed), by simply using all the data: 0.33 >= 1.33

The first impression that the results are "spooky" is therewith supported.

However, that could be just a coincidence or a calculation error. They should ask their teachers if it's OK like this and collect more data during the rest of the semester. :smile:

PS: can someone tell me please if there are no obvious mistakes, before I simulate more data.

*Note: the averages come out at about 50%, but that is pure luck and can be tuned with the detector sensitivity setting.
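Since the post asks for a check of the arithmetic, here is a direct computation over the full data table as a sketch (the day-wise subsampling scheme of the post is not reproduced here, only the "simply using all the data" variant):

```python
# the 10 days of (a, b, c) readings from the table above
rows = [(0, 0, 0), (1, 1, 0), (1, 1, 0), (1, 1, 0), (0, 0, 0),
        (1, 1, 0), (0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1)]

pm = [[2 * v - 1 for v in r] for r in rows]   # map 0 -> -1, 1 -> +1

def corr(i, j):
    return sum(r[i] * r[j] for r in pm) / len(pm)

ab, ac, bc = corr(0, 1), corr(0, 2), corr(1, 2)
print(ab, ac, bc)                  # 0.6 0.2 -0.2 on the full data
print(1 + bc >= abs(ab - ac))      # True: 0.8 >= 0.4, the inequality holds
```

If these numbers disagree with the figures quoted above, that is exactly the kind of slip the post asks readers to look for.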
 
  • #76
I copied this from another thread, as the subject matter overlaps in some respects and it might be of interest to readers here:

gill1109 said:
DrChinese referred to Jaynes. Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).

Thanks so much for taking time to share this story. For those interested, here is the direct link to your paper:

http://arxiv.org/abs/quant-ph/0301059

I like your example of Luigi and the computers. I would recommend this paper to anyone who is interested in understanding the pros AND cons of various local realistic positions - and this is a pretty strong roundup!
 
  • #77
Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).
 
  • #78
gill1109 said:
Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).
Thanks for the comment! :smile:

Actually, Jaynes did understand that Bell was giving a physical reason for it, because he cited Bell on that. Thus he thought that Bell thought that a logical dependence must be caused by a physical dependence. According to Jaynes, "Bell took it for granted that a conditional probability P(X|Y) expresses a physical causal influence, exerted by Y on X."

Now, it's still not entirely clear to me what to think of this, except for one thing: "reasonable to argue" is by far insufficient to deserve the name "theorem"...

Moreover, it appears that the locality condition as Bell formulated it is insufficient to warrant his derivation. What other conditions are required for a valid factorisation inside the integral?

PS. that's an interesting paper, and while I have only read the introduction now, I'm quite happy to see the idea of a fifth possibility which seems a bit similar to what I have been thinking: just as the PoR may be interpreted as a "loophole" principle ("it can't be done"), also the Bell theorem/paradox could relate to such a principle. But that's food for another discussion. :smile:
 
  • #79
I prefer an alternative derivation to Bell's. The essence of all local hidden variables theories is that they allow the existence (in the theory), alongside the outcomes of the actually performed measurements, also of the outcomes of the measurements which were not performed. These "counterfactual" outcomes are assigned to the same region of space-time as the factual outcomes, and locality is assumed in the sense that the outcomes in one wing of the experiment do not depend on the setting used in the other. This means that a local hidden variables theory allows us to define four random variables X1, X2, Y1 and Y2, standing for the outcomes of each of the two possible measurements ("measurement 1", "measurement 2") in each wing of the experiment (X and Y respectively). They take the values +/-1. It's easy to check that X1Y1 cannot be less than X1Y2+Y2X2+X2Y1-2. Therefore E(X1Y1) cannot be less than E(X1Y2)+E(Y2X2)+E(X2Y1)-2. Each of these expectation values is estimated in the CHSH experiment by the corresponding average of products of measurement outcomes belonging to the corresponding pair of settings.
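The pointwise inequality can be checked by brute force (a quick Python sketch, my addition, not part of the original post): for every +/-1 assignment of the four counterfactual outcomes, X1Y2 + X2Y2 + X2Y1 - X1Y1 is at most 2, and averaging over runs then gives the corresponding CHSH bound on the expectation values.

```python
from itertools import product

# Enumerate all 16 sign assignments of the four counterfactual outcomes
# X1, X2, Y1, Y2 in {-1, +1} and find the largest value of the CHSH combination.
max_s = max(x1*y2 + x2*y2 + x2*y1 - x1*y1
            for x1, x2, y1, y2 in product((-1, 1), repeat=4))
print(max_s)  # 2: the pointwise bound, hence the bound on expectations
```

By symmetry the same bound holds with the minus sign placed on any one of the four products.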
 
  • #80
PeterDonis said:
I got it from Jaynes' paper, equation (14). (I was lazy and didn't use LaTeX, so I wrote "x" instead of lambda. I'll quit doing that here.) I did say in my post that I still wanted to check to see what the corresponding equations in Bell's paper looked like. Now that you have linked to Bell's paper, let's play "spot the correspondence".

You are right that the equation I gave, equation (14) in Jaynes' paper, doesn't really have a corresponding equation in Bell's paper. But Jaynes' equation (14) is not the only equation in his paper that bears on the "factorization" issue. In fact, Jaynes' (14) is really just a "sub-expression" from his equation (12), which looks like this:

P(AB|ab) = \int{P(A|a, \lambda) P(B|b, \lambda) p(\lambda) d\lambda}

This equation is basically the same as the equation you gave from Bell's paper. Bell's equation is for the expectation value of a given pair of results that are determined by a given pair of measurement settings; Jaynes' equation is for the joint probability of a given pair of results conditional on a given pair of measurement settings. They basically say the same thing.

Jaynes' point is that to arrive at his equation in the first place, Bell has to make an assumption: he has to *assume* that the integrand can be expressed in the factored form given above. In other words, the integrand Bell writes down is not the most general one possible for the given expectation value: that would be (using Bell's notation)

P(a, b) = \int{A(B, a, b, \lambda) B(a, b, \lambda) p(\lambda) d\lambda}

The question then is whether one accepts Bell's implicit reasoning (he doesn't really go into it much; he seems to think it's obvious) to justify streamlining the integrand as he does. Jaynes does not accept that reasoning, and he gives the urn scenario as an example of why not. I agree that there is one key difference in the urn scenario: the two "measurement events" are not spacelike separated. Jaynes doesn't talk about that at all.

Edit: Bell's notation is actually a bit obscure. He says that A, B stand for "results", but he actually writes them as *functions* of the measurement settings a, b and the hidden variables \lambda. He doesn't seem to have a notation for the actual *outcomes* (the values of the functions given specific values for the variables). I've used A, B above to denote the outcomes as well as the functions, since Bell's notation doesn't give any other way to do it. In Jaynes' notation things are clearer; the equivalent to the above would be:

P(AB|ab) = \int{P(A|B, a, b, \lambda) P(B|a, b, \lambda) p(\lambda) d\lambda}

Edit #2: Corrected the equations above (previously I had A in the second factor in each integrand, which is incorrect). Also, Jaynes notes that there are two possible factorizations; the full way to write the equation just above would be:

P(AB|ab) = \int{P(A|B, a, b, \lambda) P(B|a, b, \lambda) p(\lambda) d\lambda} = \int{P(B|A, a, b, \lambda) P(A|a, b, \lambda) p(\lambda) d\lambda}

This is basically Jaynes' equation (15) with \lambda integrated out.

What would the evaluation of this integral (the area) look like on a plot? I understand that the total area is equal to one.
Is it correct to say that the y-axis denotes correlations and the x-axis the detector settings, and that the function includes cos² or cos⁴?
And what are the units of this area?
 
  • #81
"lambda" is everything which causes statistical dependence of the outcomes at the two locations. "Integral ... p(lambda) d lambda" can be read as "the average over lambda, of the expectation value of the product of the two outcomes given lambda". There is no assumption that lambda is a real number, or two real numbers ... it can be as complicated as you like.

The point is that the measurement results are seen as functions of the measurement settings and of a heap of variables describing the quantum system and the two measurement systems.
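To make that concrete, here is a toy deterministic local model (my own example, not from Bell or Jaynes): the outcomes are fixed functions of the local setting and lambda, so they are trivially independent given lambda, yet marginally correlated because lambda is shared.

```python
import numpy as np

# Toy local model: hidden variable lam uniform on [0, 2*pi);
# each outcome is a deterministic function of its own setting and lam.
rng = np.random.default_rng(0)
lam = rng.uniform(0.0, 2.0*np.pi, 400_000)

def outcome(setting, lam):
    return np.where(np.cos(lam - setting) >= 0, 1, -1)

a, b = 0.0, np.pi/3
A, B = outcome(a, lam), outcome(b, lam)

# Given lam the outcomes are fixed, so P(AB|ab,lam) factorizes trivially.
# Marginally (lam averaged out) they are correlated:
p_same = np.mean(A == B)                       # true value 1 - |a-b|/pi = 2/3
p_if_independent = (np.mean(A == 1)*np.mean(B == 1)
                    + np.mean(A == -1)*np.mean(B == -1))   # ~0.5
print(p_same, p_if_independent)
```

Of course, no choice of such outcome functions can reproduce the singlet correlation -cos(a-b) at all angle pairs; that is exactly what the CHSH bound forbids.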
 
  • #82
gill1109 said:
"the average over lambda, of the expectation value of the product of the two outcomes given lambda".
To be exact, the integral gives P(AB|ab), the joint probability of the outcomes A and B given detector settings a and b. The expectation value of the product can be obtained from it in the usual way:
E(a,b) = \sum_{i,j} A_i B_j P(A_iB_j|ab) = P(-1,-1|ab) + P(1,1|ab) -P(1,-1|ab) - P(-1,1|ab)

Also note, in Bell's original paper all randomness is encapsulated in λ, so the values of P(A|aλ) and P(B|bλ) are strictly 0 or 1. Bell's A(a,λ) and B(b,λ) are connected with P(A|aλ) and P(B|bλ):
P(A_i|aλ) = \{ 1 if A(a,λ) = A_i, else 0 \}
A(a,λ) = \{ 1 if P(1|aλ) = 1, else -1 \}

In the "Lyons and Lille" example from the Socks paper there is an extra bit of "residual randomness" left over once all influences of the common factors λ and the local parameters a and b are factored out. That is why there are probability distributions instead of functions. This "residual randomness" is local and independent of a, b, and λ. It does not change anything, and the usual way to deal with it is to assimilate all such "residual randomness" into λ, as was done in Bell's EPR paper.
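As a concrete instance of that four-term sum, one can plug in the textbook QM singlet probabilities (a Python sketch added for illustration; the probabilities are standard values, not taken from this thread):

```python
import math

# Singlet-state joint probabilities at analyzer settings a, b (spin-1/2):
#   P(+1,+1|ab) = P(-1,-1|ab) = (1 - cos(a-b))/4
#   P(+1,-1|ab) = P(-1,+1|ab) = (1 + cos(a-b))/4
# Plugging these into E(a,b) = sum_{i,j} A_i*B_j*P(A_i,B_j|ab) gives -cos(a-b).
def E(a, b):
    p_same = (1.0 - math.cos(a - b)) / 4.0   # each of (+1,+1) and (-1,-1)
    p_diff = (1.0 + math.cos(a - b)) / 4.0   # each of (+1,-1) and (-1,+1)
    return 2.0 * p_same - 2.0 * p_diff

print(E(0.0, 0.0))            # -1.0: perfect anti-correlation at equal settings
print(E(0.0, math.pi / 2))    # ~0 (up to rounding)
```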
 
  • #83
gill1109 said:
I prefer an alternative derivation to Bell's. The essence of all local hidden variables theories is that they allow the existence (in the theory), alongside the outcomes of the actually performed measurements, also of the outcomes of the measurements which were not performed. These "counterfactual" outcomes are assigned to the same region of space-time as the factual outcomes, and locality is assumed in the sense that the outcomes in one wing of the experiment do not depend on the setting used in the other. This means that a local hidden variables theory allows us to define four random variables X1, X2, Y1 and Y2, standing for the outcomes of each of the two possible measurements ("measurement 1", "measurement 2") in each wing of the experiment (X and Y respectively). They take the values +/-1. It's easy to check that X1Y1 cannot be less than X1Y2+Y2X2+X2Y1-2. Therefore E(X1Y1) cannot be less than E(X1Y2)+E(Y2X2)+E(X2Y1)-2. Each of these expectation values is estimated in the CHSH experiment by the corresponding average of products of measurement outcomes belonging to the corresponding pair of settings.
IsometricPion agreed with me in post #59 that Bell's derivation should apply to my thought experiment, which I fully developed in post #75. It somewhat combines Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could really be tested in the living room. And I guess that the secret elements (which I put inside for the simulation) may be called "counterfactual", because the outcomes are defined by those elements. And the outcomes on one side are not affected by what happens on the other side (however, their probabilities do of course depend on each other in the sense of Jaynes: they are correlated). Which implies, if I understand you right, that according to you the results cannot break Bell's original inequality. Correct? Or is there another unspoken requirement for heart attacks in Lille-Lyon, Bertlmann's socks and entangled electrons?
 
  • #84
harrylin said:
IsometricPion agreed with me in post #59 that Bell's derivation should apply to my thought experiment, which I fully developed in post #75. It somewhat combines Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could really be tested in the living room. And I guess that the secret elements (which I put inside for the simulation) may be called "counterfactual", because the outcomes are defined by those elements. And the outcomes on one side are not affected by what happens on the other side (however, their probabilities do of course depend on each other in the sense of Jaynes: they are correlated). Which implies, if I understand you right, that according to you the results cannot break Bell's original inequality. Correct? Or is there another unspoken requirement for heart attacks in Lille-Lyon, Bertlmann's socks and entangled electrons?

What he is saying is that it can break the inequality with a smaller sample, but not by much. In fact, he says you should expect it sometimes. But in a larger randomized trial, such as Gill's Luigi's computers example, it is clear you cannot have such results: you will deviate fairly far from the CHSH boundary rather quickly. 30 SD with N=15000 might be typical.

The Lille-Lyon demonstration is kind of a joke to me, because it exploits the fair sampling assumption. As I am fond of saying, you could use the same logic to assert that the true speed of light is 1 meter per second rather than c. The missing ingredient is always an explanation of WHY the true value is one thing and the observed value is something else. I don't see how you are supposed to ignore your recorded results in favor of something which is pulled out of the air.
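For what it's worth, the order of magnitude of that "30 SD" figure can be sanity-checked with back-of-the-envelope arithmetic (my own estimate, assuming the usual i.i.d. variance model and an even split of the N pairs over the four setting combinations):

```python
import math

# QM singlet at the optimal CHSH angles gives S = 2*sqrt(2); the LHV bound is 2.
# With N pairs split evenly over the 4 setting combinations, each +-1 product
# has variance 1 - E^2 = 1 - 1/2 at |E| = 1/sqrt(2), so roughly
#   var(S_hat) ~ 4 * (1/2) / (N/4) = 8/N.
N = 15000
violation = 2.0 * math.sqrt(2.0) - 2.0        # ~0.828
sigma = math.sqrt(8.0 / N)                    # ~0.023
print(violation / sigma)                      # roughly 36 standard deviations
```

So a few tens of standard deviations at N = 15000 is indeed the right ballpark.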
 
  • #85
harrylin said:
The first impression that the results are "spooky" is therewith supported.

However, that could be just a coincidence or a calculation error. They should ask their teachers if it's OK like this and collect more data during the rest of the semester. :smile:

PS: can someone tell me please if there are no obvious mistakes, before I simulate more data.
If it isn't too much trouble, I would like to see the code that generated your results.

Edit: If it is too long to post here, perhaps we could get in touch by e-mail.
 
  • #86
Regarding Jaynes’ view that Bell incorrectly factored a joint probability: it may be informative to analyze the data set presented by N. David Mermin in his article “Is the moon there when nobody looks? Reality and the quantum theory.” The following is a summary of the data.

A = Same Switch; A’ = Different Switch; B = Same Color; B’ = Different Color

P(A) = 14/45; P(B) = 24/45
P(B/A) = 14/14
P(A’) = 31/45
P(B/A’) = 10/31

We can now calculate the probability of the lights flashing the same color. This should be done two ways for the purpose of resolving which argument is correct. Bell or Jaynes.

General Multiplication Rule (Dependent Events)

1. P(A and B) = P(A)*P(B/A) = (14/45)*(14/14) = .311
2. P(A’ and B) = P(A’)*P(B/A’) = (31/45)*(10/31) = .222

P(Same color) = .311 + .222 = .533

Specific Multiplication Rule (Independent Events)

3. P(A and B) = P(A)*P(B) = (14/45)*(24/45) = .166
4. P(A’ and B) = P(A’)*P(B) = (31/45)*(24/45) = .367

P(Same Color) = .166 + .367 = .533

Wow! Both methods give the same prediction of .533. This was unexpected and there may be an underlying reason for this. Mermin’s theoretical prediction for the lights flashing the same color is 1/3*1 + 2/3*1/4 = .500. The 45 runs closely match the theoretical. However, only the general multiplication rule aligns with the theoretical calculation term for term which tends to support Jaynes’ view. Assuming the above is correct with no mistakes, what do the above findings say about Bell’s derivation using the factored form of the joint probability and ultimately about Bell’s theorem?
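A side note on the "Wow": the agreement is exact, and there is indeed an underlying reason. Since P(A) + P(A') = 1, the "independent" sum collapses to P(B), while the "dependent" sum equals P(B) by the law of total probability, so the two computations must agree for this particular quantity regardless of whether A and B are dependent. A quick check with exact fractions (Python, my addition):

```python
from fractions import Fraction as F

# Mermin's 45-run counts as stated above.
pA, pA2 = F(14, 45), F(31, 45)          # P(A), P(A')
pB = F(24, 45)
pB_given_A, pB_given_A2 = F(14, 14), F(10, 31)

dependent = pA*pB_given_A + pA2*pB_given_A2   # law of total probability
independent = pA*pB + pA2*pB                  # = pB*(pA + pA2) = pB
print(dependent, independent)                 # both 8/15 = .533...
```

This suggests the .533 coincidence by itself cannot discriminate between the two multiplication rules.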
 
  • #87
rlduncan said:
Regarding Jaynes’ view that Bell incorrectly factored a joint probability: it may be informative to analyze the data set presented by N. David Mermin in his article “Is the moon there when nobody looks? Reality and the quantum theory.” The following is a summary of the data.

A = Same Switch; A’ = Different Switch; B = Same Color; B’ = Different Color

P(A) = 14/45; P(B) = 24/45
P(B/A) = 14/14
P(A’) = 31/45
P(B/A’) = 10/31

We can now calculate the probability of the lights flashing the same color. This should be done two ways for the purpose of resolving which argument is correct. Bell or Jaynes.

General Multiplication Rule (Dependent Events)

1. P(A and B) = P(A)*P(B/A) = (14/45)*(14/14) = .311
2. P(A’ and B) = P(A’)*P(B/A’) = (31/45)*(10/31) = .222

P(Same color) = .311 + .222 = .533

Specific Multiplication Rule (Independent Events)

3. P(A and B) = P(A)*P(B) = (14/45)*(24/45) = .166
4. P(A’ and B) = P(A’)*P(B) = (31/45)*(24/45) = .367

P(Same Color) = .166 + .367 = .533

Wow! Both methods give the same prediction of .533. This was unexpected and there may be an underlying reason for this. Mermin’s theoretical prediction for the lights flashing the same color is 1/3*1 + 2/3*1/4 = .500. The 45 runs closely match the theoretical. However, only the general multiplication rule aligns with the theoretical calculation term for term which tends to support Jaynes’ view. Assuming the above is correct with no mistakes, what do the above findings say about Bell’s derivation using the factored form of the joint probability and ultimately about Bell’s theorem?

So let me see if I have this straight. If you apply the probability analysis (either dependent or independent in your example), you would predict .533 (actually a minimum). The quantum prediction is .5, which agrees with actual experiments.

Well, I would say Bell's point works nicely. Focusing on his factorization is a mistake. Once you know of Bell, I think it is easier to simply require that counterfactual cases must have a probability >=0. Which is the requirement of realism, going back to EPR and the famous "elements of reality".
 
  • #88
IsometricPion said:
If it isn't too much trouble, I would like to see the code that generated your results.
Edit: If it is too long to post here, perhaps we could get in touch by e-mail.
I will post my code here if my shot in the dark completely missed the mark - but I haven't yet automated the data treatment, so I don't know yet (though I do see now that it's not clear-cut). For the moment it's simply a useful exercise for me, one that helps me to better understand possible issues so that I find the right questions to ask. :-p
 
  • #89
DrChinese said:
Originally Posted by harrylin
It's somewhat combining Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could be really tested in the living room.[..]
The Lille-Lyon demonstration is kind of a joke to me, because it exploits the fair sampling assumption. As I am fond of saying, you could use the same logic to assert that the true speed of light is 1 meter per second rather than c. The missing ingredient is always an explanation of WHY the true value is one thing and the observed value is something else. I don't see how you are supposed to ignore your recorded results in favor of something which is pulled out of the air.
Sorry you lost me here; Bell presented that example to defend his separation of terms. What is your issue with it?
 
  • #90
harrylin said:
Sorry you lost me here; Bell presented that example to defend his separation of terms. What is your issue with it?

I thought you were using it to demonstrate that classical data can violate a Bell Inequality. If you weren't intending that, then my apologies. But if you were, then I will say it is not a suitable analogy. A suitable analogy would be one like particle spin or polarization.
 
  • #91
DrChinese said:
I thought you were using it to demonstrate that classical data can violate a Bell Inequality. If you weren't intending that, then my apologies. But if you were, then I will say it is not a suitable analogy. A suitable analogy would be one like particle spin or polarization.
Bell was using it to make it plausible that classical data must obey his method of probability analysis. In post #55 I mentioned why I find both Lille-Lyon and particle spin problematic for illustrating such things. For me Lille-Lyon is too difficult to analyse, and it doesn't include the detection aspects well. What do you find unsuited about Lille-Lyon?
 
  • #92
rlduncan said:
Regarding Jaynes’ view that Bell incorrectly factored a joint probability: it may be informative to analyze the data set presented by N. David Mermin in his article “Is the moon there when nobody looks? Reality and the quantum theory.” [..]
Now that you bring it up, I was going to bring up Mermin as a separate topic but perhaps the answer on my question is very simple: can anyone tell me how his equality of 0.5 follows from (or, as he presents it, is) Bell's inequality?
 
  • #93
DrChinese said:
So let me see if I have this straight. If you apply the probability analysis (either dependent or independent in your example), you would predict .533 (actually a minimum). The quantum prediction is .5, which agrees with actual experiments.

Well, I would say Bell's point works nicely. Focusing on his factorization is a mistake. Once you know of Bell, I think it is easier to simply require that counterfactual cases must have a probability >=0. Which is the requirement of realism, going back to EPR and the famous "elements of reality".

Thanks for the comments.

The data shows that the events A and B are dependent, not independent (an assumption made by Bell): P(A)*P(B/A) ≠ P(A)*P(B). Can you explain how Bell got it right using an invalid assumption?
 
  • #94
harrylin said:
Now that you bring it up, I was going to bring up Mermin as a separate topic but perhaps the answer on my question is very simple: can anyone tell me how his equality of 0.5 follows from (or, as he presents it, is) Bell's inequality?
Rather than examining Mermin, you might want to look at Nick Herbert's exposition "quantumtantra.com/bell2.html". It's written in a style like Mermin's, but the example used is even simpler. This example was the one Bell used in talks to popular audiences, as he said it was the simplest known Bell inequality.
 
  • #95
rlduncan said:
[..] The data shows that the events A and B are dependent, not independent (an assumption made by Bell): P(A)*P(B/A) ≠ P(A)*P(B). Can you explain how Bell got it right using an invalid assumption?
It appears that you don't have lambda in your analysis. That is however necessary to test his assumption (see the discussion on the first page of this thread).
 
  • #96
lugita15 said:
Rather than examining Mermin, you might want to look at Nick Herbert's exposition "quantumtantra.com/bell2.html". It's written in a style like Mermin's, but the example used is even simpler. This example was the one Bell used in talks to popular audiences, as he said it was the simplest known Bell inequality.
Thanks, that may very well provide the answer on my Mermin question and it looks very interesting. :smile:

PS: I think that Herbert's proof deserves to be a separate topic - it looks really good and no need for a lambda!
 
  • #97
harrylin said:
Thanks, that may very well provide the answer on my Mermin question and it looks very interesting. :smile:

PS: I think that Herbert's proof deserves to be a separate topic - it looks really good and no need for a lambda!
Yes, it would be nice to have a thread on Herbert's proof.
 
  • #98
harrylin said:
[..] I think that Herbert's proof deserves to be a separate topic - it looks really good and no need for a lambda!
lugita15 said:
Yes, it would be nice to have a thread on Herbert's proof.

So, I started that topic here:
https://www.physicsforums.com/showthread.php?t=589134
 