Bell's derivation; socks and Jaynes

  • Thread starter: harrylin
  • Tags: Derivation

Summary:
The discussion revolves around Bell's theorem, specifically using Bertlmann's socks as an illustrative example to explore the implications of local hidden variables and probability calculations. Participants express confusion over the notation and the role of the variable "lambda" in determining probabilities, particularly in relation to Jaynes' criticism of Bell's equation 11. They debate the implications of including unknown variables in probability assessments, ultimately questioning whether Bell's model accurately captures the complexities of the problem. The conversation highlights the need for clarity in definitions and notation while acknowledging the limitations of metaphors in explaining quantum mechanics. Overall, the thread emphasizes the ongoing examination of Bell's theorem and its interpretations within the context of local realism.
  • #61
harrylin said:
So, I'm puzzled by your last remark; why should "real life" not be allowed in a Bell type calculation of reality?

PS. I guess that he wants to calculate the outcome for any (a, b) combination for all possible "real life" λ (thus all possible x), taking into account their frequency of occurrence. It seems plausible that λ (thus (x1,x2)) is different from one set of pair measurements to the next, and now it looks to me that Bell does account for that possibility (but can one treat anything as just a number?). And I suppose that according to Bell the total function of λ (thus X) cannot vary from one total experiment to the next, as the results are reproducible. Is that what you mean?
I suppose "real life" was a bad choice of words. At the time I was thinking of systematic effects that changed ρ(λ) from run to run but left P(AB|a,b) the same. Now that I have thought about it some more, I think a better way to put it is that if one were doing a time or space average (as would be necessary when simulating actual experimental runs, since they do not occur at the same points in space-time), ρ(λ) could vary from run to run and experiment to experiment (as long as P(AB|a,b) stays the same these would describe setting up indistinguishable experiments/runs). When producing different outcomes to obtain an ensemble distribution for a single run, ρ(λ) is fixed since it is part of the initial/boundary conditions of the run.

It is essentially the difference between a time-average and an ensemble average.

There is nothing preventing one from asserting from the start that ρ(λ) is the same for all experiments and experimental runs; it is merely a (reasonable) restriction on the set of hidden-variable theories under consideration (which is almost certainly necessary in order to make the analysis tractable).
 
  • #62
IsometricPion said:
[..] ρ(λ) could vary from run to run and experiment to experiment (as long as P(AB|a,b) stays the same these would describe setting up indistinguishable experiments/runs). When producing different outcomes to obtain an ensemble distribution for a single run, ρ(λ) is fixed since it is part of the initial/boundary conditions of the run. [..]
I'm not sure that I want to go there (at least, not yet); my problem is much more basic. It looks to me that for such an integration to be possibly valid, p(λ) - I mean P(xA,xB) - should be the same for different combinations of a and b. Isn't that a requirement?
 
  • #63
harrylin said:
It looks to me that for such an integration to be possibly valid, p(λ) - I mean P(xA,xB) - should be the same for different combinations of a and b. Isn't that a requirement?
I am not entirely sure what you are asking (what do the x's stand for?). If you are asking about the dependence of ρ(λ) on the settings of the detectors a,b then the answer is yes, it should remain the same. This is because ρ(λ) can only depend on λ, which is required to be independent of a,b.
 
  • #64
Well, ρ(λ) should not change from one run to another, otherwise you won't get repeatable results (I mean repeatable statistics for long runs of course, not repeatable single outcomes). If ρ(λ) does vary, it just means some random factor ζ has not been accounted for; it needs to be lumped into λ'={λ,ζ}, and ρ then becomes the joint distribution ρ(λ') = ρ(λ,ζ).
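A minimal sketch of this bookkeeping (the Gaussian choices for λ and ζ below are purely illustrative assumptions of mine, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_lambda_given_zeta(zeta, n):
    # Within one run, rho(lambda) depends on an unaccounted-for factor zeta
    # (e.g. a drifting source parameter): illustratively, lambda ~ N(zeta, 1).
    return rng.normal(zeta, 1.0, n)

# Run by run, the apparent rho(lambda) differs because zeta differs...
run1 = draw_lambda_given_zeta(zeta=0.0, n=1000)
run2 = draw_lambda_given_zeta(zeta=2.0, n=1000)
print(run1.mean(), run2.mean())

# ...but once zeta is drawn from its own fixed distribution, the enlarged
# variable lambda' = (lambda, zeta) has a single fixed joint density
# rho(lambda') = rho(zeta) * rho(lambda | zeta), and repeatable long-run
# statistics come from averaging over lambda'.
def draw_lambda_prime(n):
    zeta = rng.normal(0.0, 1.0, n)   # fixed distribution for zeta
    lam = rng.normal(zeta, 1.0)      # lambda drawn given zeta
    return np.column_stack([lam, zeta])

samples = draw_lambda_prime(1000)
```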
 
  • #65
@ Delta Kilo : Yes, that sounds reasonable, but I already have a problem with a single full statistical experiment.
IsometricPion said:
[..] If you are asking about the dependence of ρ(λ) on the settings of the detectors a,b then the answer is yes, it should remain the same. This is because ρ(λ) can only depend on λ, which is required to be independent of a,b.
I'm not sure that your suggestion can actually be applied in reality to all types of λ. Even if each λ=(xA,xB) is independent of a and b, it seems to me that the probability distribution of the λ that play a role in the measurements could be affected by choices of a and b. Perhaps I'm just seeing problems that don't exist, or perhaps I'm arriving at the point that Jaynes and others actually were getting at, but didn't explain well enough.*

And certainly Bell didn't properly defend the claim that his integration is compatible with all possible types of λ. He simply writes in his socks paper: "We have to consider then some probability distribution ρ(λ)", but he doesn't prove the validity of that claim.

So, it may be best that I now give my example together with a small selection of results (later today I hope), and then try to work it out, perhaps with the help of some of you.

*PS: I'm now re-reading Jaynes and it does look as if his eq.15 exactly points at the problem that I now encounter.
 
  • #66
lugita15 said:
Speaking of this paper, does anyone know what Jaynes is talking about at the end of page 14, going on to page 15, concerning "time-alternation theories"? He seems to be endorsing a local realist model which makes predictions contrary to QM, and he claims that experiments performed by "H. Walther and coworkers on single atom masers are already showing some resemblance to the technology that would be required" to test such a theory. Does anyone know whether such a test has been performed in the decades since he wrote his paper?
I don't know what test he is talking about, but it appears to refer to slightly different predictions - and perhaps that's another way this paradox could be solved. Take special relativity, for example: would you say that general relativity is "contrary" to it? Moreover, many Bell-type experiments do not exactly reproduce the simplified QM predictions as they are often portrayed, which completely neglect correlation in time, selection of entangled pairs, etc.
Anyway, I'm still here at the start of Bell's derivation, which corresponds to Jaynes' point 1. :-p
 
  • #67
harrylin said:
I'm not sure that your suggestion can actually be applied in reality to all types of λ. Even if each λ=(xA,xB) is independent of a and b, it seems to me that the probability distribution of the λ that play a role in the measurements could be affected by choices of a and b.
Well, that's too bad, that was the whole point of the exercise :) Say you have 2 photons flying from the source in opposite directions. The source generates λ and each photon carries this λ (or part of it, or some function of it, doesn't matter) with it. Once they fly apart, each photon is on its own, as there is no way for any 'local realistic' (≤c) influence to reach one photon from the other. Parameters a and b are chosen by the experimenters and programmed into the detectors while the photons are in mid-flight; again there is no way for the influence of parameter a to affect the λ carried by photon B before it hits the detector (and vice versa). When a photon hits its detector, the outcome is determined by the λ carried by that photon and the local parameter a or b.
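For what it's worth, a minimal sketch of a model with exactly this structure (the particular outcome functions A(a,λ), B(b,λ) and the uniform ρ(λ) are my own illustrative choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(42)

def A(a, lam):
    # Outcome at detector 1: a function of its own setting a and the carried lambda only.
    return 1 if np.cos(lam - a) >= 0 else -1

def B(b, lam):
    # Outcome at detector 2: a function of its own setting b and the carried lambda only.
    return -1 if np.cos(lam - b) >= 0 else 1

def correlation(a, b, n_pairs=100_000):
    # The source generates one lambda per pair; rho(lambda) is fixed and
    # independent of the settings, which enter only at detection time.
    lams = rng.uniform(0.0, 2.0 * np.pi, n_pairs)
    return float(np.mean([A(a, l) * B(b, l) for l in lams]))

print(correlation(0.0, np.pi / 4))
```

Any model of this form is precisely what Bell's integral over A(a,λ)B(b,λ)ρ(λ) describes, which is why it is bound by the inequalities.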

BTW what are xA and xB exactly?

harrylin said:
And certainly Bell didn't properly defend the claim that his integration is compatible with all possible types of λ. He simply writes in his socks paper: "We have to consider then some probability distribution ρ(λ)", but he doesn't prove the validity of that claim.
Well, if someone does come up with working Bell-type local realistic theory, they'd better have proper probability distribution ρ(λ), (and being proper includes ρ(λ)≥0, ∫ρ(λ)dλ=1), otherwise how are they going to calculate probabilities of the outcomes?
 
  • #68
Delta Kilo said:
Well, that's too bad, that was the whole point of the exercise :) Say you have 2 photons flying from the source in opposite directions. The source generates λ and each photon carries this λ (or part of it, or some function of it, doesn't matter) with it. Once they fly apart, each photon is on its own, as there is no way for any 'local realistic' (≤c) influence to reach one photon from the other. Parameters a and b are chosen by the experimenters and programmed into the detectors while the photons are in mid-flight; again there is no way for the influence of parameter a to affect the λ carried by photon B before it hits the detector (and vice versa). When a photon hits its detector, the outcome is determined by the λ carried by that photon and the local parameter a or b.
That is a typical example of the kind of non-spooky models that Bell already knew do not work. He claimed to be completely general in his derivation, in order to prove that no non-spooky model that reproduces QM is possible - that is, including the ones that he couldn't think of. Otherwise the exercise would have been of little use. :rolleyes:
BTW what are xA and xB exactly?
x is just my notation for the value of λ at an event (here at event A, and at event B), which I introduced earlier in this discussion to distinguish it more clearly from the total unknown X during the whole experiment.
Well, if someone does come up with working Bell-type local realistic theory, they'd better have proper probability distribution ρ(λ), (and being proper includes ρ(λ)≥0, ∫ρ(λ)dλ=1), otherwise how are they going to calculate probabilities of the outcomes?
That's not what I meant; I think that the probability distribution of the λ that correspond to a choice of A,B,a,b for the data analysis (and which thus are unwittingly selected along with that choice) could depend on that choice of data. It appears to me that Bell's integration doesn't allow that.
 
  • #69
harrylin said:
That's not what I meant; I think that the probability distribution of the λ that correspond to a choice of A,B,a,b for the data analysis (and which thus are unwittingly selected along with that choice) could depend on that choice of data.

If you mean that the probability distribution ρ(λ) depends on the experimental setup, including the functions A(a,λ) and B(b,λ) along with their domain, that is, the set of all possible values of a and b, then yes. A(a,λ), B(b,λ), ρ(λ) come together as a package. If you mean that ρ(λ) depends on the specific values chosen for a and b in a given run, then certainly not: a and b do not exist yet when ρ(λ) is used to generate new λ for the run, that's the point.
 
  • #70
Delta Kilo said:
[..] a and b do not exist yet when ρ(λ) is used to generate new λ for the run, that's the point.
That has nothing to do with the unknowingly selected λ for analysis - and anyway, Bell's point is that λ is not restricted, else his derivation would be of little interest. He stresses in the appendix:
nothing is said about the locality, or even localizability, of the variables λ. These variables could well include, for example, quantum mechanical state vectors, which have no particular localization in ordinary space time. It is assumed only that the outputs A and B, and the particular inputs a and b, are well localized.
 
  • #71
Bell does not say that explicitly but it follows from
Bell said:
The vital assumption [2] is that the result B for particle 2 does not depend on setting a, of the magnet of particle 1, nor A on b
Since A is a function of lambda, if lambda is allowed to depend on b then A would also depend on b.
 
  • #72
Delta Kilo said:
Bell does not say that explicitly but it follows from [..] Since A is a function of lambda, if lambda is allowed to depend on b then A would also depend on b.
Not necessarily: P(λ) is not λ, and I suspect that P(λ) at B could depend on b without any effect on A. Anyway, I've now progressed with my example and it looks not too bad, so I'll start presenting it now.

PS. Oops, there was still something wrong with it... maybe later!
 
  • #73
harrylin said:
Not necessarily: P(λ) is not λ, and I suspect that P(λ) at B could depend on b without any effect on A.
There is no such thing as "P(λ) at B"; there is only one λ for each run, randomly chosen using P(λ). P(λ) is the same as "P(λ) at A", which is the same as "P(λ) at B", and therefore it cannot depend on either a or b, due to locality constraints.
 
  • #74
Now I'm messing a bit with the inequality as was used by billschnieder last year, apparently it was based on:

1 + <bc> >= |<ab> - <ac>|

- https://www.physicsforums.com/showthread.php?t=499002&page=6

However, that looks a little different from the CHSH inequality that Bell presents in his socks paper... and [update:] I have now found that it corresponds to Bell's original inequality. As it appears to be the simplest, I will use that.

@ Delta Kilo: I even cited for you how Bell explained that there is no such locality constraint on λ. Indeed, that would not be reasonable. Once more:
"only [..] the outputs A and B, and the particular inputs a and b, are well localized."
 
  • #75
So, here's the example that I had in mind. It was a shot in the dark and I'm still not sure about the outcome concerning Bell vs. Jaynes. However, with minor fiddling I already obtained a result that looks interesting. I may very well have made an error; if anyone notices errors in the data analysis, I'll be grateful to hear it.

A group of QM students takes classes from Prof. Bertlmann. It's an intensive course with a morning class, an afternoon class and an evening class. The students wonder if Bell's story could actually be true and Bertlmann really does wear different socks. However, the professor happens to wear long trousers, and when he goes to sit behind his desk his socks are out of sight.

But they are creative, and one student, let's call him Carlos, looks for simple electronics designs on the web and makes two devices, each with an LED to illuminate the socks and a light detector to determine if the sock is white or black. Carlos finishes soldering in the late evening and hurries off to the classroom, where he hides the devices on both sides under the desk, aiming at where Bertlmann's socks should appear. With a wireless control he can secretly do a measurement at the press of a button, and the result is then indicated by two indicator LEDs that are visible to the students but out of sight for Bertlmann.

The next morning Bertlmann comes in, talks a while and then sits down while they do a QM exercise. Now Carlos presses the button and both LEDs light up. He interprets that to mean that both socks are white and writes down "1,1". A bit of an anti-climax, really. Never mind; after Bertlmann leaves he resets the detectors and decides to leave them in place.

During afternoon class Carlos hits the button once more, and what a surprise: both lights stay out - this time it's "0,0". That is very puzzling as nobody had seen Prof. Bertlmann change his socks and he had been eating with his students - in fact he was in sight all the time.

Very Spooky! :eek:

The students discuss what to do next. They decide to do measurements over 10 days and then analyse them with Bell's method. One day corresponds to the measurement of one pair of socks, and the time of day plays the role of detector angle. They found that for identical settings the left and right LEDs gave the same signal. Thus for simplicity only the data of one side is given here, with a, b, c for morning, afternoon, evening:

a b c
0 0 0
1 1 0
1 1 0
1 1 0
0 0 0
1 1 0
0 0 0
1 0 1
0 1 0
1 1 1

0.6 0.6 0.2 (averages)
0.47 (total average)*


After replacing all the 0 by -1, they use the original Bell inequality:

1 + <bc> >= |<ab> - <ac>|

Taking <bc> of day 1, <ab> of day 2, <ac> of day 3 and so on they obtain:
0.67 >= 1.33

Alternatively (but not clear if that is allowed), by simply using all the data: 0.33 >= 1.33
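For concreteness, a minimal sketch of how such correlations could be computed from the table above. The cyclic day-assignment below is my reading of "<bc> of day 1, <ab> of day 2, <ac> of day 3 and so on"; the grouping behind the quoted numbers may well differ:

```python
import numpy as np

# Table from the post: columns a, b, c (morning, afternoon, evening), with 0 -> -1.
data = 2 * np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
]) - 1
cols = {"a": data[:, 0], "b": data[:, 1], "c": data[:, 2]}

# Version 1: simply use all the data for every correlation.
ab = np.mean(cols["a"] * cols["b"])
ac = np.mean(cols["a"] * cols["c"])
bc = np.mean(cols["b"] * cols["c"])
print("all data: 1 + <bc> =", 1 + bc, " |<ab> - <ac>| =", abs(ab - ac))

# Version 2: cyclic assignment, day 1 -> <bc>, day 2 -> <ab>, day 3 -> <ac>, repeat.
order = ["bc", "ab", "ac"]
products = {"ab": [], "ac": [], "bc": []}
for day in range(len(data)):
    key = order[day % 3]
    products[key].append(cols[key[0]][day] * cols[key[1]][day])
ab2, ac2, bc2 = (np.mean(products[k]) for k in ("ab", "ac", "bc"))
print("cyclic:   1 + <bc> =", 1 + bc2, " |<ab> - <ac>| =", abs(ab2 - ac2))
```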

The first impression that the results are "spooky" is thus supported.

However, that could be just a coincidence or a calculation error. They should ask their teachers if it's OK like this and collect more data during the rest of the semester. :smile:

PS: can someone please tell me if there are any obvious mistakes, before I simulate more data?

*Note: the averages come out at about 50%, but that is pure luck and can be tuned with the detector sensitivity setting.
 
  • #76
I copied this from another thread, as the subject matter overlaps in some respects and it might be of interest to readers here:

gill1109 said:
DrChinese referred to Jaynes. Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).

Thanks so much for taking time to share this story. For those interested, here is the direct link to your paper:

http://arxiv.org/abs/quant-ph/0301059

I like your example of Luigi and the computers. I would recommend this paper to anyone who is interested in understanding the pros AND cons of various local realistic positions - and this is a pretty strong roundup!
 
  • #77
Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).
 
  • #78
gill1109 said:
Jaynes (1989) thought that Bell was incorrectly performing a routine factorization of joint probabilities into marginal and conditional. Apparently Jaynes did not understand that Bell was giving physical reasons (locality, realism) why it was reasonable to argue that two random variables should be conditionally *independent* given a third. When Jaynes presented his resolution of the Bell paradox at a conference, he was stunned when someone else gave a neat little proof using Fourier analysis that the singlet correlations could not be reproduced using a network of classical computers, whose communication possibilities "copy" those of the traditional Bell-CHSH experiments. I have written about this in quant-ph/0301059. Jaynes is reputed to have said "I am going to have to think about this, but I think it is going to take 30 years before we understand Stephen Gull's results, just as it has taken 20 years before we understood Bell's" (the decisive understanding having been contributed by E.T. Jaynes).
Thanks for the comment! :smile:

Actually, Jaynes did understand that Bell was giving a physical reason for it, because he cited Bell on that point. Rather, he thought that Bell believed a logical dependence must be caused by a physical dependence. According to Jaynes, "Bell took it for granted that a conditional probability P(X|Y) expresses a physical causal influence, exerted by Y on X."

Now, it's still not entirely clear to me what to think of this, except for one thing: "reasonable to argue" is by far insufficient to deserve the name "theorem"...

Moreover, it appears that the locality condition as Bell formulated it is insufficient to warrant his derivation. What other conditions are required for a valid factorisation inside the integral?

PS. That's an interesting paper, and while I have only read the introduction so far, I'm quite happy to see the idea of a fifth possibility, which seems a bit similar to what I have been thinking: just as the PoR may be interpreted as a "loophole" principle ("it can't be done"), the Bell theorem/paradox could also relate to such a principle. But that's food for another discussion. :smile:
 
  • #79
I prefer an alternative derivation to Bell's. The essence of all local hidden variables theories is that they allow the existence (in the theory), alongside the outcomes of the actually performed measurements, also of the outcomes of the measurements which were not performed. These "counterfactual" outcomes are assigned to the same region of space-time as the factual outcomes, and locality is assumed in the sense that the outcomes in one wing of the experiment do not depend on the setting used in the other. This means that a local hidden variables theory allows us to define four random variables X1, X2, Y1 and Y2, standing for the outcomes of each of the two possible measurements ("measurement 1", "measurement 2") in each wing of the experiment (X and Y respectively). They take the values +/-1. It's easy to check that X1Y1 cannot be less than X1Y2+Y2X2+X2Y1-2. Therefore E(X1Y1) cannot be less than E(X1Y2)+E(Y2X2)+E(X2Y1)-2. Each of these expectation values is estimated in the CHSH experiment by the corresponding average of products of measurement outcomes belonging to the corresponding pair of settings.
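For completeness, a sketch of the algebraic step (standard bookkeeping, using only that each variable takes the values ±1):

X_1 Y_2 + X_2 Y_2 + X_2 Y_1 - X_1 Y_1 = X_1 (Y_2 - Y_1) + X_2 (Y_2 + Y_1)

Since Y_1, Y_2 = ±1, one of (Y_2 - Y_1) and (Y_2 + Y_1) is zero and the other is ±2, so the left-hand side equals ±2 for every joint assignment. Hence X_1 Y_1 \geq X_1 Y_2 + X_2 Y_2 + X_2 Y_1 - 2 pointwise, and averaging over any distribution of the hidden variables gives the CHSH bound E(X_1 Y_2) + E(X_2 Y_2) + E(X_2 Y_1) - E(X_1 Y_1) \leq 2.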
 
  • #80
PeterDonis said:
I got it from Jaynes' paper, equation (14). (I was lazy and didn't use LaTeX, so I wrote "x" instead of lambda. I'll quit doing that here.) I did say in my post that I still wanted to check to see what the corresponding equations in Bell's paper looked like. Now that you have linked to Bell's paper, let's play "spot the correspondence".

You are right that the equation I gave, equation (14) in Jaynes' paper, doesn't really have a corresponding equation in Bell's paper. But Jaynes' equation (14) is not the only equation in his paper that bears on the "factorization" issue. In fact, Jaynes' (14) is really just a "sub-expression" from his equation (12), which looks like this:

P(AB|ab) = \int{P(A|a, \lambda) P(B|b, \lambda) p(\lambda) d\lambda}

This equation is basically the same as the equation you gave from Bell's paper. Bell's equation is for the expectation value of a given pair of results that are determined by a given pair of measurement settings; Jaynes' equation is for the joint probability of a given pair of results conditional on a given pair of measurement settings. They basically say the same thing.

Jaynes' point is that to arrive at his equation in the first place, Bell has to make an assumption: he has to *assume* that the integrand can be expressed in the factored form given above. In other words, the integrand Bell writes down is not the most general one possible for the given expectation value: that would be (using Bell's notation)

P(a, b) = \int{A(B, a, b, \lambda) B(a, b, \lambda) p(\lambda) d\lambda}

The question then is whether one accepts Bell's implicit reasoning (he doesn't really go into it much; he seems to think it's obvious) to justify streamlining the integrand as he does. Jaynes does not accept that reasoning, and he gives the urn scenario as an example of why not. I agree that there is one key difference in the urn scenario: the two "measurement events" are not spacelike separated. Jaynes doesn't talk about that at all.

Edit: Bell's notation is actually a bit obscure. He says that A, B stand for "results", but he actually writes them as *functions* of the measurement settings a, b and the hidden variables \lambda. He doesn't seem to have a notation for the actual *outcomes* (the values of the functions given specific values for the variables). I've used A, B above to denote the outcomes as well as the functions, since Bell's notation doesn't give any other way to do it. In Jaynes' notation things are clearer; the equivalent to the above would be:

P(AB|ab) = \int{P(A|B, a, b, \lambda) P(B|a, b, \lambda) p(\lambda) d\lambda}

Edit #2: Corrected the equations above (previously I had A in the second factor in each integrand, which is incorrect). Also, Jaynes notes that there are two possible factorizations; the full way to write the equation just above would be:

P(AB|ab) = \int{P(A|B, a, b, \lambda) P(B|a, b, \lambda) p(\lambda) d\lambda} = \int{P(B|A, a, b, \lambda) P(A|a, b, \lambda) p(\lambda) d\lambda}

This is basically Jaynes' equation (15) with \lambda integrated out.

What would the evaluation of this integral, the area, look like on a plot? I understand that the total area is equal to one. Is it correct to say that the y-axis denotes correlations and the x-axis the detector settings, and that the function includes cos2 or cos4? And what are the units of this area?
 
  • #81
"lambda" is everything which causes statistical dependence of the outcomes at the two locations. "Integral ... p(lambda) d lambda" can be read as "the average over lambda, of the expectation value of the product of the two outcomes given lambda". There is no assumption that lambda is a real number, or two real numbers ... it can be as complicated as you like.

The point is that the measurement results are seen as functions of the measurement settings and of a heap of variables describing the quantum system and the two measurement systems.
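In symbols (a sketch using the notation already in this thread; the second equality is the deterministic special case in which the outcomes are functions A(a,λ) and B(b,λ) of the local setting and λ alone):

E(a,b) = \int{E(AB|a,b,\lambda) \rho(\lambda) d\lambda} = \int{A(a,\lambda) B(b,\lambda) \rho(\lambda) d\lambda}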
 
  • #82
gill1109 said:
"the average over lambda, of the expectation value of the product of the two outcomes given lambda".
To be exact, the integral P(AB|ab) is the joint probability of the outcomes A and B given detector settings a and b. The expectation value of the product can be obtained from it in the usual way:
E(a,b) = \sum_{i,j} A_i B_j P(A_iB_j|ab) = P(-1,-1|ab) + P(1,1|ab) - P(1,-1|ab) - P(-1,1|ab)

Also note that in Bell's original paper all randomness is encapsulated in λ, so the values of P(A|aλ) and P(B|bλ) are strictly 0 or 1. Bell's A(a,λ) and B(b,λ) are connected with P(A|aλ) and P(B|bλ):
P(A_i|aλ) = \{ 1 if A(a,λ) = A_i, else 0 \}
A(a,λ) = \{ 1 if P(1|aλ) = 1, else -1 \}

In the "Lille and Lyon" example from the socks paper there is an extra bit of "residual randomness" left over once all influences of the common factors λ and the local parameters a and b are factored out. That's why there are probability distributions instead of functions. This "residual randomness" is local and independent of a, b, and λ. It does not change anything, and the usual way to deal with it is to assimilate all such "residual randomness" into λ, as was done in Bell's EPR paper.
 
  • #83
gill1109 said:
I prefer an alternative derivation to Bell's. The essence of all local hidden variables theories is that they allow the existence (in the theory), alongside the outcomes of the actually performed measurements, also of the outcomes of the measurements which were not performed. These "counterfactual" outcomes are assigned to the same region of space-time as the factual outcomes, and locality is assumed in the sense that the outcomes in one wing of the experiment do not depend on the setting used in the other. This means that a local hidden variables theory allows us to define four random variables X1, X2, Y1 and Y2, standing for the outcomes of each of the two possible measurements ("measurement 1", "measurement 2") in each wing of the experiment (X and Y respectively). They take the values +/-1. It's easy to check that X1Y1 cannot be less than X1Y2+Y2X2+X2Y1-2. Therefore E(X1Y1) cannot be less than E(X1Y2)+E(Y2X2)+E(X2Y1)-2. Each of these expectation values is estimated in the CHSH experiment by the corresponding average of products of measurement outcomes belonging to the corresponding pair of settings.
Isometricpion agreed with me in post #59 that Bell's derivation should apply to my thought experiment which I fully developed in post #75. It's somewhat combining Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could be really tested in the living room. And I guess that the secret elements (which I put inside for the simulation) may be called "counterfactual", because the outcomes are defined by those elements. And the outcomes on one side are not affected by what happens on the other side (however their probabilities do of course depend on each other in the sense of Jaynes: they are correlated). Which implies, if I understand you right, that according to you the results cannot break Bell's original inequality. Correct? Or is there another unspoken requirement for heart attacks in Lille-Lyon, Bertlmann's socks and entangled electrons?
 
  • #84
harrylin said:
Isometricpion agreed with me in post #59 that Bell's derivation should apply to my thought experiment which I fully developed in post #75. It's somewhat combining Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could be really tested in the living room. And I guess that the secret elements (which I put inside for the simulation) may be called "counterfactual", because the outcomes are defined by those elements. And the outcomes on one side are not affected by what happens on the other side (however their probabilities do of course depend on each other in the sense of Jaynes: they are correlated). Which implies, if I understand you right, that according to you the results cannot break Bell's original inequality. Correct? Or is there another unspoken requirement for heart attacks in Lille-Lyon, Bertlmann's socks and entangled electrons?

What he is saying is that it can break the inequality with a smaller sample, but not by much. In fact, he says you should expect it sometimes. But in a larger randomized trial, such as Gill's Luigi's computers example, it is clear you cannot have such results. You will deviate fairly far from the CHSH boundary rather quickly. 30 SD with N=15000 might be typical.

The Lille-Lyon demonstration is kind of a joke to me, because it exploits the fair sampling assumption. As I am fond of saying, you could use the same logic to assert that the true speed of light is 1 meter per second rather than c. The missing ingredient is always an explanation of WHY the true value is one thing and the observed value is something else. I don't see how you are supposed to ignore your recorded results in favor of something which is pulled out of the air.
 
  • #85
harrylin said:
The first impression that the results are "spooky" is thus supported.

However, that could be just a coincidence or a calculation error. They should ask their teachers if it's OK like this and collect more data during the rest of the semester. :smile:

PS: can someone please tell me if there are any obvious mistakes, before I simulate more data?
If it isn't too much trouble, I would like to see the code that generated your results.

Edit: If it is too long to post here, perhaps we could get in touch by e-mail.
 
  • #86
In regards to Jaynes’ view: Bell incorrectly factored a joint probability; it may be informative to analyze the data set presented by N. David Mermin in his article: “Is the moon there when nobody looks? Reality and the quantum theory.” The following represents the summary of the data.

A = Same Switch; A’ = Different Switch; B = Same Color; B’ = Different Color

P(A) = 14/45; P(B) = 24/45
P(B/A) =14/14
P(A’) = 31/45
P(B/A’) = 10/31

We can now calculate the probability of the lights flashing the same color. This should be done two ways for the purpose of resolving which argument is correct. Bell or Jaynes.

General Multiplication Rule (Dependent Events)

1. P( A and B) = P(A)*P(B/A) = (14/45)*(14/14) = .311
2. P(A’ and B) = P(A’)*P(B/A’) = (31/45)*(10/31) = .222

P(Same color) = .311 + .222 = .533

Specific Multiplication Rule (Independent Events)

3. P(A and B) = P(A)*P(B) = (14/45)*(24/45) = .166
4. P(A’ and B) = P(A’)*P(B) = (31/45)*(24/45) = .367

P(Same Color) = .166 + .367 = .533

Wow! Both methods give the same prediction of .533. This was unexpected and there may be an underlying reason for this. Mermin’s theoretical prediction for the lights flashing the same color is 1/3*1 + 2/3*1/4 = .500. The 45 runs closely match the theoretical. However, only the general multiplication rule aligns with the theoretical calculation term for term which tends to support Jaynes’ view. Assuming the above is correct with no mistakes, what do the above findings say about Bell’s derivation using the factored form of the joint probability and ultimately about Bell’s theorem?
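A quick arithmetic check of the two calculations above, using only the counts quoted from Mermin's data:

```python
from fractions import Fraction as F

P_A, P_B = F(14, 45), F(24, 45)              # same switch; same color
P_Ap = F(31, 45)                             # different switch
P_B_given_A, P_B_given_Ap = F(14, 14), F(10, 31)

# General multiplication rule (dependent events)
dependent = P_A * P_B_given_A + P_Ap * P_B_given_Ap

# Specific multiplication rule (independent events)
independent = P_A * P_B + P_Ap * P_B

# Both sums reduce to P(B) = 24/45 = 0.533..., since summing over A and A'
# recovers the marginal P(B) either way.
print(float(dependent), float(independent))
```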
 
  • #87
rlduncan said:
In regards to Jaynes’ view: Bell incorrectly factored a joint probability; it may be informative to analyze the data set presented by N. David Mermin in his article: “Is the moon there when nobody looks? Reality and the quantum theory.” The following represents the summary of the data.

A = Same Switch; A’ = Different Switch; B = Same Color; B’ = Different Color

P(A) = 14/45; P(B) = 24/45
P(B/A) =14/14
P(A’) = 31/45
P(B/A’) = 10/31

We can now calculate the probability of the lights flashing the same color. This should be done two ways for the purpose of resolving which argument is correct. Bell or Jaynes.

General Multiplication Rule (Dependent Events)

1. P( A and B) = P(A)*P(B/A) = (14/45)*(14/14) = .311
2. P(A’ and B) = P(A’)*P(B/A’) = (31/45)*(10/31) = .222

P(Same color) = .311 + .222 = .533

Specific Multiplication Rule (Independent Events)

3. P(A and B) = P(A)*P(B) = (14/45)*(24/45) = .166
4. P(A’ and B) = P(A’)*P(B) = (31/45)*(24/45) = .367

P(Same Color) = .166 + .367 = .533

Wow! Both methods give the same prediction of .533. This was unexpected and there may be an underlying reason for this. Mermin’s theoretical prediction for the lights flashing the same color is 1/3*1 + 2/3*1/4 = .500. The 45 runs closely match the theoretical. However, only the general multiplication rule aligns with the theoretical calculation term for term which tends to support Jaynes’ view. Assuming the above is correct with no mistakes, what do the above findings say about Bell’s derivation using the factored form of the joint probability and ultimately about Bell’s theorem?

So let me see if I have this straight. If you apply the probability analysis (either dependent or independent in your example), you would predict .5333 (actually a minimum). The quantum prediction is .5, which agrees with actual experiments.

Well, I would say Bell's point works nicely. Focusing on his factorization is a mistake. Once you know of Bell, I think it is easier to simply require that counterfactual cases must have a probability >=0. That is the requirement of realism, going back to EPR and the famous "elements of reality".
 
  • #88
IsometricPion said:
If it isn't too much trouble, I would like to see the code that generated your results.
Edit: If it is too long to post here, perhaps we could get in touch by e-mail.
I will post my code here if my shot in the dark has completely missed - but I haven't yet automated the data treatment, so I don't know yet (though I do see now that it's not clear-cut). For the moment it's simply a useful exercise for me that helps me to better understand the possible issues, so that I can find the right questions to ask. :-p
 
  • #89
DrChinese said:
Originally Posted by harrylin
It's somewhat combining Bell's sock illustration with his Lille-Lyon illustration, but in a way that in principle could be really tested in the living room.[..]
The Lille-Lyon demonstration is kind of a joke to me, because it exploits the fair sampling assumption. As I am fond of saying, you could use the same logic to assert that the true speed of light is 1 meter per second rather than c. The missing ingredient is always an explanation of WHY the true value is one thing and the observed value is something else. I don't see how you are supposed to ignore your recorded results in favor of something which is pulled out of the air.
Sorry you lost me here; Bell presented that example to defend his separation of terms. What is your issue with it?
 
  • #90
harrylin said:
Sorry you lost me here; Bell presented that example to defend his separation of terms. What is your issue with it?

I thought you were using it to demonstrate that classical data can violate a Bell Inequality. If you weren't intending that, then my apologies. But if you were, then I will say it is not a suitable analogy. A suitable analogy would be one like particle spin or polarization.
 
