billschnieder said:
JesseM:
Since brevity is a virtue, I will not attempt responding to every line of your responses, which is very tempting, as there is almost always something to challenge in each.
In a scientific/mathematical discussion, precision is more of a virtue than brevity. In fact, one of the common problems in discussions with cranks who are on a crusade to debunk some mainstream scientific theory is that they typically throw out short and rather broad (and vague) arguments which may sound plausible on the surface, but which require a lot of detailed explanation to show what is wrong with them. This problem is discussed here, for example:
Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages based on the same mathematical idea: It’s really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way—skim online debates between biologists and earnest ID afficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we’ll tend to assume everyone on the other side thought of it themselves and had some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”
billschnieder said:
The principle of common cause used by Bell as P(AB|C) = P(A|C)P(B|C) is not universally valid even if C represents complete information about all possible causes in the past light cones of A and B.
Not in general, no. But in a universe with local realist laws, it is universally valid.
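To make the screening-off idea concrete, here is a minimal Python sketch I put together for illustration (the specific probabilities P_C, P_A_given_C, P_B_given_C are made up, not taken from any paper): a hidden common cause C determines the outcome probabilities on each side independently, so A and B are correlated marginally, but the correlation vanishes once you condition on C, i.e. P(AB|C) = P(A|C)P(B|C).

```python
from itertools import product

# Toy local-realist model (hypothetical numbers, for illustration only):
# a hidden common cause C is 0 or 1, and each side's outcome depends on C alone.
P_C = {0: 0.5, 1: 0.5}
P_A_given_C = {0: {'+': 0.9, '-': 0.1},   # P(A | C)
               1: {'+': 0.2, '-': 0.8}}
P_B_given_C = {0: {'+': 0.8, '-': 0.2},   # P(B | C)
               1: {'+': 0.3, '-': 0.7}}

# Full joint distribution P(A, B, C) built from the local model.
P_ABC = {(a, b, c): P_C[c] * P_A_given_C[c][a] * P_B_given_C[c][b]
         for a, b, c in product('+-', '+-', (0, 1))}

def P(event):
    """Total probability of the outcomes (a, b, c) satisfying event."""
    return sum(p for (a, b, c), p in P_ABC.items() if event(a, b, c))

# Marginally, A and B are correlated: P(A=+, B=+) != P(A=+) * P(B=+).
print(round(P(lambda a, b, c: a == '+' and b == '+'), 4))                   # ~0.39
print(round(P(lambda a, b, c: a == '+') * P(lambda a, b, c: b == '+'), 4))  # ~0.3025

# But conditioning on the common cause screens the correlation off:
# P(A=+, B=+ | C) = P(A=+ | C) * P(B=+ | C) for each value of C.
for c0 in (0, 1):
    pc = P(lambda a, b, c: c == c0)
    joint = P(lambda a, b, c: a == '+' and b == '+' and c == c0) / pc
    prod = (P(lambda a, b, c: a == '+' and c == c0) / pc) * \
           (P(lambda a, b, c: b == '+' and c == c0) / pc)
    print(c0, round(joint, 6), round(prod, 6))                              # equal
```

That is the sense in which C "screens off" the correlation in what follows: C here is just a stand-in for whatever complete set of local variables a local realist universe would supply.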
billschnieder said:
This is because if A and B are marginally correlated but uncorrelated conditioned on C, it implies that C screens off the correlation between A and B. In some cases, it is not possible to define C such that it screens off the correlation between A and B.
It is always possible to define such a C in a relativistic universe with local realist laws, if A and B happen in spacelike-separated regions: if C represents the complete information about all local physical variables in the past light cones of the regions where A and B occurred (or in spacelike slices of these past light cones taken at some time after the last moment the two past light cones intersected, as I suggested in post 61/62 on the other thread and as illustrated in fig. 4 of this paper on Bell's reasoning), then it is guaranteed that C will screen off correlations between A and B. Nothing in the Stanford article contradicts this, so if you disagree with it, please explain why in your own words (preferably addressing my arguments in post #41 about why suggesting otherwise would imply FTL information transmission, e.g. telling me whether you think the example where the results of a race on Alpha Centauri were correlated with a buzzer going off on Earth is compatible with local realism and relativity). If you think the Stanford Encyclopedia article does contradict it, can you tell me specifically which part and why?
In your quote from the Stanford Encyclopedia saying why common cause principles can fail, the first part was about "molecular chaos" and assumptions about the exact microscopic state of macroscopic systems:
This explains why the three principles we have discussed sometimes fail. For the demand of initial microscopic chaos is a demand that microscopic conditions are uniformly distributed (in canonical coordinates) in the areas of state-space that are compatible with the fundamental laws of physics. If there are fundamental (equal time) laws of physics that rule out certain areas in state-space, which thus imply that there are (equal time) correlations among certain quantities, this is no violation of initial microscopic chaos. But the three common cause principles that we discussed will fail for such correlations.
Note that of the three common cause principles discussed, none (including the one by Penrose and Percival) actually allowed C to involve the full details about every microscopic physical fact at some time in the past light cone of A or B. This is why assumptions like "microscopic chaos" are necessary--because you don't know the full microscopic conditions, you have to make assumptions like the one discussed in section 3.3:
Nonetheless such arguments are pretty close to being correct: microscopic chaos does imply that a very large and useful class of microscopic conditions are independently distributed. For instance, assuming a uniform distribution of microscopic states in macroscopic cells, it follows that the microscopic states of two spatially separated regions will be independently distributed, given any macroscopic states in the two regions. Thus microscopic chaos and spatial separation is sufficient to provide independence of microscopic factors.
Also note that earlier in the same section they write:
there will be no screener off of the correlations between D and E other than some incredibly complex and inaccessible microscopic determinant. Thus common cause principles fail if one uses quantities D and E rather than quantities A and B to characterize the later state of the system.
So here common cause principles only fail if you aren't allowed to use the full set of microscopic conditions which might contribute to the likelihood of different observable outcomes; they acknowledge that if you did have such information, it could screen off correlations in these outcomes.
The next part of the Stanford article that you quoted dealt with QM:
Similarly, quantum mechanics implies that for certain quantum states there will be correlations between the results of measurements that can have no common cause which screens all these correlations off. But this does not violate initial microscopic chaos. Initial microscopic chaos is a principle that tells one how to distribute probabilities over quantum states in certain circumstances; it does not tell one what the probabilities of values of observables given certain quantum states should be. And if they violate common cause principles, so be it. There is no fundamental law of nature that is, or implies, a common cause principle. The extent of the truth of common cause principles is approximate and derivative, not fundamental.
What you seem to miss here is that the idea that quantum mechanics violates common cause principles is explicitly based on assuming that Bell is correct and that the observed statistics in QM are incompatible with local realism. From section 2.1:
One might think that this violation of common cause principles is a reason to believe that there must then be more to the prior state of the particles than the quantum state; there must be ‘hidden variables’ that screen off such correlations. (And we have seen above that such hidden variables must determine the results of the measurements if they are to screen off the correlations.) However, one can show, given some extremely plausible assumptions, that there can not be any such hidden variables. There do exist hidden variable theories which account for such correlations in terms of instantaneous non-local dependencies. Since such dependencies are instantaneous (in some frame of reference) they violate Reichenbach's common cause principle, which demands a prior common cause which screens off the correlations. (For more detail, see, for instance, van Fraassen 1982, Elby 1992, Grasshoff, Portmann & Wuethrich (2003) [in the Other Internet Resources section], and the entries on Bell's theorem and on Bohmian mechanics in this encyclopedia.)
So, in no way does this suggest they'd dispute that in a universe that did obey local realist laws, it would be possible to find a type of "common cause" involving detailed specification of every microscopic variable in the past light cones of A and B which would screen off correlations between A and B. What they're saying is that the actual statistics seen in QM rule out the possibility that our universe actually obeys such local realist laws.
billschnieder said:
2) Not all correlations necessarily have a common cause and suggesting that they must is not appropriate.
I never suggested that all correlations have a common cause, unless "common cause" is defined so broadly as to include the complete set of microscopic conditions in two non-overlapping light cones (two totally disjoint sets of events, in other words). For example, if aliens outside our cosmological horizon (so that their past light cone never overlaps with our past light cone at any moment since the Big Bang) were measuring some fundamental physical constant (say, the fine structure constant) which we were also measuring, the results of our experiments would be correlated due to the same laws of physics governing our experiments, not due to any events in our past which could be described as a "common cause". But it's you who's bringing up the language of "causes", not me; I'm just talking about information which causes you to alter probability estimates, and that's all that's necessary for Bell's proof. In this example, suppose our universe obeyed local realist laws, and an omniscient being gave us a complete specification of all local physical variables in the past light cone of the aliens' measurement (or in some complete spacelike slice of that past light cone), along with a complete specification of the laws of physics and an ultra-powerful computer that we could use to evolve these past conditions forward to make predictions about what will happen in the region of spacetime where the aliens make the measurement. Then our estimate of the probabilities of different outcomes in that region would not be altered in the slightest if we learned additional information about events in our own region of spacetime, including the results of an experiment similar to the aliens'.
billschnieder said:
3) Either God is calculating on both sides of the equation or he is not. You can not have God on one side and the experimenters on another.
It is theoretical physicists who are calculating the equations, based on imagining what would have to be true if they had access to certain information H which is impossible to obtain in practice, given certain assumptions about the way the fundamental laws of physics work. But since they don't actually know the value of H, they may have to sum over all possible values of H that would be compatible with these assumptions about fundamental laws. For example, do you deny that under the assumption of local realism, where H is taken to represent full information about all local physical variables in the past light cones of A and B, the following equation should hold?
P(AB) = sum over all possible values of H: P(AB|H)*P(H)
Note that this is the type of equation that allowed me to reach the final conclusion in the scratch lotto card example from post #18. I assumed that the perfect correlation when Alice and Bob scratched the same box was explained by the fact that they always received a pair of cards with an identical set of "hidden fruits" behind each box, and then I showed that P(Alice and Bob find the same fruit when they scratch different boxes|H) was always greater than or equal to 1/3 for all possible values of H (assuming they choose which box to scratch randomly with a 1/3 probability of each on a given trial, and we're just looking at the subset of trials where they happened to choose different boxes):
H1: box1: cherry, box2: cherry, box3: cherry
H2: box1: cherry, box2: cherry, box3: lemon
H3: box1: cherry, box2: lemon, box3: cherry
H4: box1: cherry, box2: lemon, box3: lemon
H5: box1: lemon, box2: cherry, box3: cherry
H6: box1: lemon, box2: cherry, box3: lemon
H7: box1: lemon, box2: lemon, box3: cherry
H8: box1: lemon, box2: lemon, box3: lemon
If the probability is greater than or equal to 1/3 for each possible value of H, then obviously regardless of the specific values of P(H1) and P(H2) and so forth, the probability on the left of this equation:
p(Alice and Bob find the same fruit when they scratch different boxes) = sum over all possible values of H: P(Alice and Bob find the same fruit when they scratch different boxes|H)*P(H)
...must end up being greater than or equal to 1/3 as well. Therefore if we find the actual frequency of finding the same fruit when they choose different boxes is 1/4, we have falsified the original theory that they are receiving cards with an identical set of predetermined "hidden fruit" behind each box.
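For anyone who wants to check that arithmetic, here is a short Python sketch of the calculation (my own illustration of the equation above; the function and variable names are mine): it enumerates the eight hidden-fruit assignments H1-H8, computes P(Alice and Bob find the same fruit|different boxes, H) for each one, and then confirms that for any choice of weights P(H1)...P(H8) the weighted sum stays at or above 1/3.

```python
from itertools import combinations, product
import random

FRUITS = ('cherry', 'lemon')

# The eight possible hidden-fruit assignments H1..H8, as (box1, box2, box3).
hidden_states = list(product(FRUITS, repeat=3))

def p_same_given_different_boxes(h):
    """P(Alice and Bob see the same fruit | H = h, they scratched different boxes).
    Both cards carry the identical hidden fruits h, and each of the three
    unordered pairs of distinct boxes is equally likely to be the pair scratched."""
    pairs = list(combinations(range(3), 2))        # (box1,box2), (box1,box3), (box2,box3)
    matches = sum(h[i] == h[j] for i, j in pairs)
    return matches / len(pairs)

for n, h in enumerate(hidden_states, start=1):
    p = p_same_given_different_boxes(h)
    print(f"H{n}: {h} -> P(same fruit | different boxes) = {p:.3f}")
    assert p >= 1/3                                # holds for every one of H1..H8

# Whatever distribution P(H) the card source uses, the total probability
#   P(same | different boxes) = sum over H of P(same | different boxes, H) * P(H)
# must therefore also be >= 1/3.  Spot-check with a few random priors:
for _ in range(5):
    weights = [random.random() for _ in hidden_states]
    prior = [w / sum(weights) for w in weights]    # an arbitrary normalized P(H)
    total = sum(p_same_given_different_boxes(h) * q
                for h, q in zip(hidden_states, prior))
    print(f"random P(H) -> P(same fruit | different boxes) = {total:.3f}")
    assert total >= 1/3 - 1e-12
```

So an observed rate of 1/4 cannot be reproduced by any probability distribution over H1-H8, which is exactly the falsification argument above.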
In this example, even if the theory about hidden fruit had been correct, I don't actually know the full set of hidden fruit on each trial (say the cards self-destruct as soon as one box is scratched). So any part of the equation involving H is imagining what would have to be true from the perspective of a "God" who did have knowledge of all the hidden fruit. And yet you see the final conclusion is about the actual probabilities Alice and Bob observe on trials where they choose different boxes to scratch. Please tell me whether your general arguments about it being illegitimate to have a human perspective on one side of an equation and "God's" perspective on the other would apply to the above as well (i.e. whether you disagree with the claim that the premise that each card has an identical set of hidden fruits implies a probability of 1/3 or more that they'll find the same fruit on trials where they randomly select different boxes).
billschnieder said:
Therefore if God is the one calculating the inequality, you cannot expect a human experimenter who knows nothing about H to collect data consistent with the inequality.
Using P(AB|H) = P(A|H)P(B|H) to derive an inequality means that the context of the inequalities is one in which there is no longer any correlation between A and B, since it has been screened off by H. Therefore for data to be comparable to the inequalities, it must be screened off with H. Note that P(AB) = P(A|H)P(B|H) is not a valid equation.
It's true that this is not a valid equation, but if P(AB|H)=P(A|H)P(B|H) applies to the situation we are considering, then P(AB) = sum over all possible values of H: P(A|H)P(B|H)P(H) is a valid equation, and it's directly analogous to equation (14) in http://hexagon.physics.wisc.edu/teaching/2010s%20ph531%20quantum%20mechanics/interesting%20papers/bell%20on%20epr%20paradox%20physics%201%201964.pdf .
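To spell out the difference between the valid and invalid equations in code (a toy model with made-up numbers, not anything from Bell's paper): in any model where H screens off A and B, P(AB) equals the H-weighted sum of P(A|H)P(B|H), while the product P(A|H)P(B|H) for any single value of H is generally not equal to P(AB).

```python
# Toy screened-off model (made-up numbers, purely illustrative): the source
# emits a hidden state H, and A and B are conditionally independent given H.
states = ('H1', 'H2', 'H3')
P_H = {'H1': 0.5, 'H2': 0.3, 'H3': 0.2}
P_A_given_H = {'H1': 0.9, 'H2': 0.4, 'H3': 0.1}   # P(A | H)
P_B_given_H = {'H1': 0.7, 'H2': 0.2, 'H3': 0.6}   # P(B | H)

# Valid: P(AB) = sum over all possible values of H of P(A|H) * P(B|H) * P(H)
P_AB = sum(P_A_given_H[h] * P_B_given_H[h] * P_H[h] for h in states)
print("P(AB) =", round(P_AB, 4))                  # 0.351

# Invalid: P(AB) = P(A|H) * P(B|H) for some single value of H.
for h in states:
    print(f"P(A|{h}) * P(B|{h}) =", round(P_A_given_H[h] * P_B_given_H[h], 4))
# None of these single-H products needs to equal P(AB); only the H-weighted
# sum does, which is why data has to be compared to the summed expression.
```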