PeterDonis said:
Jaynes' point is that to arrive at his equation in the first place, Bell has to make an assumption: he has to *assume* that the integrand can be expressed in the factored form given above.
Yes, this is exactly the assumption of local realism: the probability P(A|a,λ) does not depend on the choice of the parameter b, and vice versa. Let's see how Bell explains it:
Bell said:
It seems reasonable to expect that if sufficiently many such causal factors can be identified and held fixed, the residual fluctuations will be independent, i.e.,
P(N,M|a,b,λ)=P1(M|a,λ)P2(N|b,λ) (10)
(eq (10) is the same as Bell's eq (11) and Jaynes' eq (14), except M and N are renamed to A and B)
The factorization in eq. (10) means P1(...) and P2(...) are independent. Now the reason they are independent is that Bell chose it to be that way, by encapsulating all common factors in the parameter λ. The underlying assumption here is that the values M and N are affected by some common factors (represented by λ), local factors a and b, and some residual randomness independent of either global or local influences. This residual randomness is the reason we have probabilities P1, P2 at all; without it we would have deterministic functions M(a,λ) and N(b,λ).
Eq (10) is valid for any given a, b, λ. Say, we discovered that the number of cases in both Lyons and Lille is influenced by the day of the week, so we only compare the results on a given day (say on Friday). Then we discovered a correlation with the stock market, so we only compare the results from those Fridays when the stock market was bearish, etc. And we keep doing that until the residual randomness is independent.
I repeat, P1 and P2 are independent by design. If they turn out not to be independent, it just means we didn't do a good enough job with λ and overlooked some common factor. There is no limit on what λ can contain, except for the local factors a and b, in accordance with the physical model of local realism.
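To make this concrete, here is a minimal numerical sketch (my own toy model, not from either paper; the names and numbers are made up): a common factor λ shifts both counts, the residual noise at each station is independent by construction, and conditioning on λ makes the correlation vanish, exactly as eq (10) requires.
[CODE=python]
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a common factor lam (think "day of week / market mood")
# shifts both counts; the residual fluctuations are independent by design.
n = 200_000
lam = rng.integers(0, 2, size=n)         # common factor, 0 or 1
M = lam + rng.integers(0, 2, size=n)     # Lyons count = common part + local noise
N = lam + rng.integers(0, 2, size=n)     # Lille count = common part + independent noise

# Unconditionally, M and N are correlated through lam...
print("corr(M, N)           =", np.corrcoef(M, N)[0, 1])

# ...but with lam held fixed the residual randomness is independent,
# so P(M,N|lam) factorizes as P1(M|lam) * P2(N|lam), as in eq (10).
for v in (0, 1):
    sel = lam == v
    print(f"corr(M, N | lam={v}) =", np.corrcoef(M[sel], N[sel])[0, 1])
[/CODE]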
Now, there is an easy way to get rid of the residual randomness: by lumping it into λ. We can introduce random variables χ and η representing the residual randomness of M and N. In the case of Lyons and Lille they would represent the health of the population and their susceptibility to heart attack, including random fluctuations. χ(a,λ) might be a random function which tells whether a person in Lyons is going to have a heart attack given local and global factors a and λ. M and N then become deterministic functions M=M(χ,a,λ), N=N(η,b,λ), the probabilities P1(M|χ,a,λ) and P2(N|η,b,λ) take only the values 0 and 1, and eq (10) is automatically satisfied. Then we just redefine λ to include χ and η: λ' = {λ,χ,η}. This does expand the meaning of λ, which now means not just common global factors but any factors at all, whether local or global, excluding a and b. This is basically what was done from the outset in eq (2) of Bell's EPR paper.
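A toy illustration of this lumping step (my own construction; the particular functions M and N are arbitrary): once χ and η are made explicit and absorbed into λ' = {λ,χ,η}, the outcomes are deterministic and the factorization holds term by term.
[CODE=python]
import itertools

# Deterministic outcome functions once the residual randomness chi, eta
# is made explicit (the functions themselves are arbitrary toy choices).
def M(chi, a, lam):
    return (chi + a + lam) % 2

def N(eta, b, lam):
    return (eta + b + lam) % 2

a, b = 0, 1
for lam, chi, eta in itertools.product((0, 1), repeat=3):
    lam_prime = (lam, chi, eta)   # the redefined lam' = {lam, chi, eta}
    m, n = M(chi, a, lam), N(eta, b, lam)
    # Given lam', P1(M=m|a,lam') = P2(N=n|b,lam') = 1 and every other
    # value gets probability 0, so eq (10) is satisfied term by term.
    print("lam' =", lam_prime, "->", (m, n))
[/CODE]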
Jaynes says that the fundamentally correct equation is
P(AB|abλ) = P(A|Babλ) P(B|abλ) (15)
Well, where did that come from? It's just the product rule of conditional probability, P(AB) = P(A|B) P(B), with abλ tucked in. It is of course trivially true, but the locality assumptions and the special role of λ have been thrown out with the bathwater. Basically, while (14) is a physical model of a particular EPR setup with the added local realism assumption, (15) is a tautology of the form 2*2*x = 4*x which tells us absolutely nothing.
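A quick numerical check of this point (a sketch with an arbitrary made-up joint distribution): the product rule holds for any P(AB) whatsoever, which is exactly why eq (15) constrains nothing.
[CODE=python]
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary joint distribution over two binary outcomes A, B
# (conditioning on a, b, lam is implicit: this works for every slice).
P = rng.random((2, 2))
P /= P.sum()

P_B = P.sum(axis=0)        # marginal P(B)
P_A_given_B = P / P_B      # P(A|B): each column normalized by P(B)

# Eq (15) is the product rule P(AB) = P(A|B) P(B); it holds identically
# for *any* joint distribution, hence carries no physical content.
assert np.allclose(P_A_given_B * P_B, P)
print("product rule holds for an arbitrary P(A,B); no constraint imposed")
[/CODE]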
Now, let's talk about the first of the two objections:
Jaynes said:
(1) As his words above show, Bell took it for granted that a conditional probability P(X|Y ) expresses a physical causal influence, exerted by Y on X.
I assume Jaynes refers here to the following quote:
It would be very remarkable if b proved to be a causal factor for A, or a for B; i.e., if P(A|aλ) depended on b or P(B|bλ) depended on a.
Note the subtle difference: Jaynes talks about causal dependence of one outcome random variable on another random variable, while Bell talks about dependence of a random variable on a free parameter. The difference is that two random variables may be dependent without you being able to say whether X causes Y, Y causes X, or both X and Y are caused by some third factor. In Bell's case of a random variable and a free parameter, the dependency is clearly one way: the outcome depends on the parameter but not the other way around. The parameter is a given; it does not depend on anything else. This is actually an assumption which is implied rather than stated directly. Violation of this assumption is the superdeterminism loophole, which is currently being discussed in another [STRIKE]ward[/STRIKE]thread.
As an illustration of his point, Jaynes gives the Bernoulli urn example. Let's start with eq (16):
P(R1|I)=M/N
I'd say I was introduced here to mimic Bell's λ. But what is the meaning of I exactly?
Jaynes said:
I = "Our urn contains N balls, identical in every respect except that M of them are red, the remaining N-M white. We have no information about the location of particular balls in the urn. They are drawn out blindfolded without replacement."
So I is not a random variable, nor a parameter. It does not have a set of values you can integrate over; it never changes. Basically it does absolutely nothing. Also note the conspicuous absence of the local parameter a or its equivalent. And without a, the whole thing misses the point.
Now if we are to re-introduce a and λ according to Bell's recipe, we would define a as a free local parameter which applies to the first measurement only. Say, a is the location of the ball to be picked during the first draw, and correspondingly b is the location of the ball to be picked on the second draw. λ is a random variable which by definition includes everything else which might possibly affect the outcomes; in this case λ would be the exact arrangement of the balls in the urn. Clearly λ and a together completely determine which ball is drawn first: R1 = R1(a,λ). The state of the urn after the first draw is γ=γ(a,λ), and the second ball is R2=R2(b,γ)=R2(a,b,λ). Note that the expression for R2 violates Bell's locality assumption, so the whole setup is clearly different from Bell's. Anyway, R1 and R2 are fully determined by a, b, and λ and therefore do not depend on anything else:
P(R1|R2abλ)=P(R1|aλ)={1: R1=R1(a,λ), 0: otherwise}
P(R2|R1abλ)=P(R2|abλ)={1: R2=R2(a,b,λ), 0: otherwise}.
It is easy to see that the factorization P(R1 R2|a,b,λ) = P(R1|aλ) P(R2|abλ) is in fact correct.
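Here is a sketch of that setup (my own encoding of the urn, with the toy numbers N=3 balls, M=2 red): λ is an arrangement, a and b are draw positions, everything comes out deterministic, and the degenerate 0/1 probabilities factor trivially.
[CODE=python]
import itertools

balls = ('R', 'R', 'W')                 # N=3 balls, M=2 red (toy numbers)

def R1(a, lam):                         # first draw: the ball at position a
    return lam[a]

def R2(a, b, lam):                      # second draw, from the urn after the
    rest = lam[:a] + lam[a + 1:]        # first ball was removed; note the
    return rest[b]                      # dependence on a (non-local here)

for lam in sorted(set(itertools.permutations(balls))):
    for a in range(3):
        for b in range(2):
            # Given a, b, lam both P(R1|a,lam) and P(R2|a,b,lam) are 0 or 1,
            # so P(R1,R2|a,b,lam) = P(R1|a,lam) * P(R2|a,b,lam) term by term.
            print(lam, a, b, "->", R1(a, lam), R2(a, b, lam))
[/CODE]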
R1=R1(a,λ) and R2=R2(a,b,λ) above are deterministic functions, like in the EPR paper. We could add some local residual randomness to them to get an equation similar to eq. (10) from the Bertlmann's Socks paper. For example, a and b would select the x-coordinate of the ball to be drawn and the y-coordinate would be picked at random. As long as the random functions R1 and R2 are independent, the factorization will be valid. Again, this randomness can always be moved from R1 and R2 into λ.
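A minimal sketch of this x/y variant (the 2x2 grid and numbers are mine, and removal of the first ball is ignored for simplicity): the settings pick a column, independent local noise picks the row, and the joint frequency matches the product of the marginals up to sampling error.
[CODE=python]
import numpy as np

rng = np.random.default_rng(2)

# lam: a fixed 2x2 arrangement of balls; a, b select a column, the row
# is chosen by independent local noise (the residual randomness chi, eta).
lam = np.array([['R', 'W'],
                ['W', 'R']])
a, b = 0, 1                              # the two local settings

n = 100_000
R1 = lam[rng.integers(0, 2, size=n), a]  # independent row noise chi
R2 = lam[rng.integers(0, 2, size=n), b]  # independent row noise eta

# With independent residual randomness the joint frequency matches the
# product of the marginals, i.e. the factorization of eq (10) holds.
p_joint = np.mean((R1 == 'R') & (R2 == 'R'))
p_prod = np.mean(R1 == 'R') * np.mean(R2 == 'R')
print(p_joint, "~", p_prod)              # equal up to sampling error
[/CODE]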
So what is missing in Jaynes' paper? Well, the elephant in the room of course, I mean the λ. λ is a key feature of Bell's paper and it is completely absent from Jaynes' example. λ by definition encapsulates all randomness and all parameters in the system, except a and b. Once particular values of λ, a, b are fixed, everything else is predetermined. Without λ, the best a posteriori estimate of the conditional probability P(R1|...) would necessarily include a dependency on R2 and vice versa. Once we nail down λ, a, b, all other dependencies disappear.
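To see the dependence that appears once λ is integrated out, here is a small count over the same toy urn (again my own sketch): marginally P(R2|R1) differs from P(R2), while for any fixed λ both draws are fully determined.
[CODE=python]
from itertools import permutations
from collections import Counter

# Uniform prior over lam = arrangement of 2 red + 1 white ball; draw
# position a first, then position b from what remains (toy numbers mine).
arrangements = sorted(set(permutations(('R', 'R', 'W'))))
a, b = 0, 0

joint = Counter()
for lam in arrangements:
    r1 = lam[a]
    r2 = (lam[:a] + lam[a + 1:])[b]
    joint[(r1, r2)] += 1

# Marginally (lam integrated out) R2 depends on R1...
p_r2 = sum(v for (r1, r2), v in joint.items() if r2 == 'R') / len(arrangements)
p_r2_given_r1 = joint[('R', 'R')] / (joint[('R', 'R')] + joint[('R', 'W')])
print("P(R2=R)        =", p_r2)            # 2/3
print("P(R2=R | R1=R) =", p_r2_given_r1)   # 1/2
# ...but for any fixed lam both draws are fully determined, and the
# dependence on the other outcome disappears, as argued above.
[/CODE]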