Trying to understand Bell's reasoning

  • Thread starter: billschnieder
Summary:
Bell's reasoning centers on the implications of quantum mechanics (QM) versus local hidden variable theories: in the perfectly anticorrelated case, if Alice measures +1 then Bob must measure -1, suggesting predetermined outcomes. The discussion critiques Bell's assumption that the joint probability of outcomes can be expressed as the product of the individual probabilities, arguing that knowledge of Alice's result alters the probability of Bob's outcome. Bell's inequalities have been experimentally violated, which on Bell's reasoning implies that physical reality is not locally causal. The debate highlights the need to understand hidden variables and their relationship to measurement outcomes in QM. Ultimately, the argument questions the validity of Bell's initial assumptions regarding local causality and the representation of probabilities.
  • #61
JesseM said:
It is relevant, yes. And once you realize why, in a local realist universe, it must be true that P(AB|H)=P(A|H)P(B|H) for the right choice of H, then you can also show that if there is a perfect correlation between measurement results when the experimenters choose the same detector angles, then in a local realist universe the only way to explain this is if H predetermines what measurement results they will get for all possible angles. But the general conclusion that P(AB|H)=P(A|H)P(B|H) for the right choice of H doesn't require us to start from that assumption, so if bill is unconvinced on this point it's best to try to show why the conclusion would be true even if we don't assume identical detector settings = identical results.
You seem to understand that the individual measurements and the joint measurements are dealing with two different hidden parameters.

Then it should be clear why P(AB|H) = P(A|H) P(B|H) is a formal requirement that doesn't fit the experimental situation.
 
  • #62
ThomasT said:
You seem to understand that the individual measurements and the joint measurements are dealing with two different hidden parameters.

Then it should be clear why P(AB|H) = P(A|H) P(B|H) is a formal requirement that doesn't fit the experimental situation.
P(AB|H)=P(A|H)P(B|AH) is a general statistical identity that should hold regardless of the meanings of A, B, and H, agreed? So to get from that to P(AB|H)=P(A|H)P(B|H), you just need to prove that in this physical scenario, P(B|AH)=P(B|H), agreed? If you agree, then just let H represent an exhaustive description of all the local variables (hidden and others) at every point in spacetime which lies in the past light cone of the region where measurement B occurred. If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission, as I argued in post #41.
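
A trivial numerical check of that identity (an illustration added here, not from the thread): for an arbitrary joint distribution over binary A, B and H, the chain rule holds automatically, whatever the variables mean physically.

```python
import random

# Build a random joint distribution P(A,B,H) over three binary variables.
events = [(a, b, h) for a in (0, 1) for b in (0, 1) for h in (0, 1)]
weights = [random.random() for _ in events]
total = sum(weights)
joint = {e: w / total for e, w in zip(events, weights)}

def P(pred):
    # Probability that the predicate holds, under the joint distribution.
    return sum(p for e, p in joint.items() if pred(*e))

P_H = P(lambda a, b, h: h == 1)
P_AH = P(lambda a, b, h: a == 1 and h == 1)
P_ABH = P(lambda a, b, h: a == 1 and b == 1 and h == 1)

lhs = P_ABH / P_H                    # P(AB|H)
rhs = (P_AH / P_H) * (P_ABH / P_AH)  # P(A|H) * P(B|AH)
assert abs(lhs - rhs) < 1e-12        # identity holds for any distribution
```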
 
  • #63
(continuing an unfinished reply to billschnieder's post #47)
billschnieder said:
P(AB|H) = P(A|H)*P(B|H)
Clearly means that, conditioned on H, there is no correlation between A and B. It is therefore impossible for H to cause any correlations whatsoever with this equation. Now can you explain how it is possible for an experimenter to collect data consistent with this equation, without knowing the exact nature of H?
I suppose it depends what you mean by "cause" the correlations, but it is completely consistent with this equation that P(AB) could be different than P(A)*P(B). And "collect data consistent with this equation" is ambiguous since the experimenter can't actually know H--again, the only experimental data is about A and B, H represents some set of objective physical facts that must have definite truth-values in a universe with locally realist laws, but there is no claim that we can actually determine the specific values encompassed by H in practice. That's where the idea of an "omniscient being" comes in. Do you disagree that in a local realist universe, we can make coherent statements about what would have to be true of H if H represents something like "the complete set of fundamental physical variables associated with each point in spacetime that lies within the past light cone of some measurement-event B", even if we don't actually know what the values of all those variables are?
billschnieder said:
It is only possible for A and B to be marginally correlated while at the same time uncorrelated conditioned on H, if H is NOT the cause of the correlation.
"Cause of" needs some kind of precise definition, it's not a statistical term. But intuitively this claim seems pretty silly. For example, being a smoker increases your risk of dying of lung cancer, and also increases your risk of having yellow teeth, and most people would say that smoking has a causal influence on both. Meanwhile, even if there is a marginal correlation between yellow teeth and lung cancer (people who have yellow teeth are more likely to get lung cancer and vice versa), most people would probably bet that this was a case where "correlation is not causation"--yellow teeth don't have a direct causal influence on lung cancer or vice versa. Suppose we find there is a marginal correlation between yellow teeth and lung cancer, but also that P(lung cancer|smoker & yellow teeth) is not any higher than P(lung cancer|smoker) (this is a little over simplistic since heavy smokers are more likely to get both lung cancer and yellow teeth than light smokers, but imagine we are dealing with a society where all smokers smoke exactly the same amount per day). Would you say this proves that smoking cannot have been the cause of the correlation between lung cancer and yellow teeth?

In any case, Bell's theorem doesn't require any discussion of "causality" beyond the basic notion that in a relativistic local realist theory, there should be no FTL information transmission, i.e. information about events in one region A cannot give you any further information about events in a region B at spacelike separation from A, beyond what you already could have determined about events in B by looking at information about events in B's past light cone.
billschnieder said:
Are you sure you understand it? Can you explain how the hidden variables H are supposed to be responsible for the correlation between A and B, and yet conditioned on H there is no correlation between A and B? I do not see how anything you have written so far in this thread or the other one answers this question.
Since I don't really understand what your objection to this is in the first place, I can only "explain" by pointing to various examples where this is true. The smoking/yellow teeth/lung cancer one above is a simple intuitive example, but I've also given you numerical examples which you just ignored. For example, in the scratch lotto card example from post #18, we saw the marginal correlation that whenever Alice chose a given box (say, box 2 on her card) to scratch, if Bob also chose the same box (box 2 on his card to scratch), they always found the same fruit; but this correlation could be explained by the fact that the source always sent them a pair of cards that had the same combination of "hidden fruits" under each of the three boxes on each card. And then later in that post I also gave an example where two flashlights had hidden internal mechanisms that determined the probabilities they would turn on, with one sent to Alice and one to Bob; if you don't know which hidden mechanisms are in each flashlight, there is a marginal correlation between the events of each one turning on (I explicitly calculated P(A|B) and showed it was different from P(A)), but if you do have the information H about which hidden mechanism was in each one's flashlight before they tried to turn them on, then conditioned on H there is no correlation between A and B (and I explicitly calculated P(A|BH) and showed it was identical to P(A|H)).
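
The flashlight example lends itself to the same kind of check. The mechanisms and probabilities below are invented stand-ins for those in post #18, but the structure is the point: a shared hidden mechanism H produces a marginal correlation that H itself screens off.

```python
# Invented stand-in for the flashlight example: the source installs the
# same hidden mechanism H in both flashlights; H fixes each light's
# chance of turning on, independently on the two sides.
P_H = {'weak': 0.5, 'strong': 0.5}   # which mechanism pair was sent
P_on = {'weak': 0.2, 'strong': 0.9}  # P(one light turns on | mechanism)

def P(A, B, H):
    # Joint probability that Alice's light state is A, Bob's is B,
    # and the mechanism is H (True = turned on).
    pa = P_on[H] if A else 1 - P_on[H]
    pb = P_on[H] if B else 1 - P_on[H]
    return P_H[H] * pa * pb

tf = (True, False)
P_A = sum(P(True, B, H) for B in tf for H in P_H)   # 0.55
P_B = sum(P(A, True, H) for A in tf for H in P_H)   # 0.55
P_AB = sum(P(True, True, H) for H in P_H)           # 0.425
print(P_A, P_AB / P_B)  # P(A|B) ~ 0.77 != P(A): marginal correlation

for H in P_H:  # but H screens it off: P(A|BH) == P(A|H)
    P_A_H = sum(P(True, B, H) for B in tf) / P_H[H]
    P_A_BH = P(True, True, H) / sum(P(A, True, H) for A in tf)
    print(H, P_A_H, P_A_BH)  # equal for each mechanism
```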
billschnieder said:
In case you are not sure about the terminology, in probability theory, P(AB) is the joint marginal probability of A and B which is the probability of A and B regardless of whether anything else is true or not. P(AB|H) is the joint conditional probability of A and B conditioned on H, which is the probability of A and B given that H is true. There is no such thing as the absolute probability.
Fair enough. But I think you pretty clearly understood from context what I meant by "absolute probability".
billschnieder said:
I agree, there are cases in which a correlation may exist between A and B marginally, but will not exist when conditioned on another variable, like in some of the examples you have given.
You are saying there may be cases where A and B are marginally correlated, but not correlated when conditioned on H? And yet you also just got through saying that you can't understand how H can be responsible for the marginal correlation between A and B, and yet they are not correlated when conditioned on H? The only real difference I see between the two is the phrase "responsible for", which isn't any sort of statistical terminology as far as I know. What do you mean by it? Are you talking about some intuitive notion of causality, or of one fact being "the explanation for" another? As I said before, following Bell's theorem does not require introducing such vague notions; it's just about analyzing whether one fact can provide information about the probability of some event beyond what you already knew from other facts.

Still it would help me understand you better if you would explain what "responsible for" means in the context of specific examples like the ones I provided. If Alice and Bob in the lotto card example always find the same fruit on trials where they choose to scratch the same box (a perfect marginal correlation), but I happen to know that on every single trial the source sent them both cards with an identical set of "hidden fruits" under the three boxes, can I say that this fact about the hidden fruits is "responsible for" the marginal correlation they observed?
billschnieder said:
The question is very clear. Let me put it to you in point form and you can give specific answers to which points you disagree with.

1) You say in the specific example treated by Bell, P(B|AH) = P(B|H). It is not me saying it. Do you disagree?
I agree.
billschnieder said:
2) The above statement (1) implies that in the specific example treated by Bell, where the symbols A, B and H have identical meaning, P(B|AH) and P(B|H) are mathematical identities. Do you disagree?
Your use of "mathematical identity" is confusing here--if some statement about probabilities can't be proven purely from the axioms of probability theory, but depends upon the specific physical definitions of the variables, I would say that it's not a mathematical identity, by definition. For example P(T)=1-P(H) is not a mathematical identity, but it's a valid equation if T and H represent heads or tails for a fair coin. Do you define "mathematical identity" differently?

Perhaps you are assuming that any relevant physical facts are added as additional axioms to the basic axioms of probability theory so that from then on we are doing a purely mathematical proof with this extended axiomatic system. Still your notion of "mathematical identities" is ambiguous. In a formal proof we have a series of lines containing theorems, each of which are derived from some combination of axioms and previously-proved theorems using rules of inference, until we get to the final line with the theorem that we wanted to prove. If I prove theorem #12 from a combination of theorem #3 and theorem #5 using some rule of inference, would you say theorem 12 is a "mathematical identity" with 3 and 5? If so, when you say:
billschnieder said:
4) The above statement (3), implies that if using P(A|H)*P(B|H) results in one set of inequalities, the mathematically identical statement P(A|H)*P(B|AH) should result in the same set of inequalities where the symbols A, B and H have identical meaning. Do you disagree?
If yes to the above, I do disagree. For example, take a look at the various simple logic proofs given in http://marauder.millersville.edu/~bikenaga/mathproof/rules-of-inference/rules-of-inference.pdf . An example from pp. 6-7:

Axioms:
i. P AND Q
ii. P -> ~(Q AND R)
iii. S -> R

Prove: ~S

Proof:

1. P AND Q (axiom i)
2. P (Decomposing a conjunction--1)
3. Q (Decomposing a conjunction--1)
4. P -> ~(Q AND R) (axiom ii)
5. ~(Q AND R) (modus ponens--2,4)
6. ~Q OR ~R (DeMorgan--5)
7. ~R (disjunctive syllogism--3,6)
8. S -> R (axiom iii)
9. ~S (modus tollens--7,8)

Would you say statement 5 above is "mathematically identical" to statements 2 and 4? Even if you are using a definition of "mathematically identical" where that is true, why should it imply that you can reach the final conclusion 9 from statements 2 and 4 without going through the intermediate step of 5? 5 may be an essential step in reaching the conclusion from the axioms; saying that all the statements are "mathematically identical" to previous ones and that therefore any given intermediate step should be unnecessary is just playing word games. That's not how mathematical proofs work.
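
As a side note, the proof's conclusion can be verified mechanically; here is a brute-force truth-table check (a sketch added here, not from the linked notes) that ~S really does follow from axioms i-iii.

```python
from itertools import product

# In every truth assignment where all three axioms hold, S is false.
for p, q, r, s in product([True, False], repeat=4):
    ax1 = p and q                   # P AND Q
    ax2 = (not p) or not (q and r)  # P -> ~(Q AND R)
    ax3 = (not s) or r              # S -> R
    if ax1 and ax2 and ax3:
        assert not s                # the conclusion ~S
print("~S holds in every assignment satisfying the axioms")
```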
 
  • #64
JesseM said:
...Still it would help me understand you better if you would explain what "responsible for" means in the context of specific examples like the ones I provided. If Alice and Bob in the lotto card example always find the same fruit on trials where they choose to scratch the same box (a perfect marginal correlation), but I happen to know that on every single trial the source sent them both cards with an identical set of "hidden fruits" under the three boxes, can I say that this fact about the hidden fruits is "responsible for" the marginal correlation they observed?

... saying that all the statements are "mathematically identical" to previous ones and that therefore any given intermediate step should be unnecessary is just playing word games,...

You are my hero, I can't believe you have stayed in this long. :smile:

P.S. I am kicking back and relaxing while you are doing all the heavy lifting.
 
  • #65
DrChinese said:
That would be an attempt to restore local realism, which just won't be possible. Recall that you can entangle particles at other levels as well, such as momentum/position or energy/time. Although it shows up as one thing for spin, you cannot explain it in the manner you mention.

Even for spin, if you look at it long enough, you realize that there is no solution to the mathematical problems. Bell's Inequality is violated because there is no local realistic solution possible.

Ahh, yes thank you, when I wrote that my intended thought was not to restore realism. I appreciate you mentioning it, helps me word my thoughts better.

As I see it, there is no currently known explanation for why this 'action at a distance' occurs, right? And I can be wayyyyy off I'm sure, I'm just a layman approaching this. The original thought I didn't write was that photons are from/in another dimension.
 
  • #66
madhatter106 said:
Ahh, yes thank you, when I wrote that my intended thought was not to restore realism. I appreciate you mentioning it, helps me word my thoughts better.

As I see it, there is no currently known explanation for why this 'action at a distance' occurs, right? And I can be wayyyyy off I'm sure, I'm just a layman approaching this. The original thought I didn't write was that photons are from/in another dimension.

Hey, maybe they are in another dimension. Who knows? What's a dimension here or there among friends?

There is no known mechanism for entanglement, just a formalism. So the formalism is the explanation at this point.
 
  • #67
JesseM:
Since brevity is a virtue, I will not attempt responding to every line of your responses, which is very tempting as there is almost always something to challenge in each. Here is a crystallization of my response to everything you have posted so far.

1) The principle of common cause used by Bell as P(AB|C) = P(A|C)P(B|C) is not universally valid even if C represents complete information about all possible causes in the past light cones of A and B. This is because if A and B are marginally correlated but uncorrelated conditioned on C, it implies that C screens off the correlation between A and B, and in some cases it is not possible to define C such that it screens off the correlation between A and B.

Stanford Encyclopedia of Philosophy said:
Under Conclusions:
If there are fundamental (equal time) laws of physics that rule out certain areas in state-space, which thus imply that there are (equal time) correlations among certain quantities, this is no violation of initial microscopic chaos. But the three common cause principles that we discussed will fail for such correlations. Similarly, quantum mechanics implies that for certain quantum states there will be correlations between the results of measurements that can have no common cause which screens all these correlations off. But this does not violate initial microscopic chaos. Initial microscopic chaos is a principle that tells one how to distribute probabilities over quantum states in certain circumstances; it does not tell one what the probabilities of values of observables given certain quantum states should be. And if they violate common cause principles, so be it. There is no fundamental law of nature that is, or implies, a common cause principle. The extent of the truth of common cause principles is approximate and derivative, not fundamental.
Therefore, Bell's choice of the PCC as a definition for the hidden variable theories by which to supplement QM is not appropriate.

2) Not all correlations necessarily have a common cause and suggesting that they must is not appropriate.

3) Either God is calculating on both sides of the equation or he is not. You cannot have God on one side and the experimenters on the other. Therefore if God is the one calculating the inequality, you cannot expect a human experimenter, who knows nothing about H, to collect data consistent with the inequality.

Using P(AB|H) = P(A|H)P(B|H) to derive an inequality means that the context of the inequalities is one in which there is no longer any correlation between A and B, since it has been screened off by H. Therefore, for data to be comparable to the inequalities, it must be screened off with H. Note that P(AB) = P(A|H)P(B|H) is not a valid equation. You cannot collect data without screening off (i.e. P(AB)) and use it to compare with inequalities derived from screened-off probabilities P(AB|H).
 
  • #68
billschnieder said:
JesseM:
Since brevity is a virtue, I will not attempt responding to every line of your responses, which is very tempting as there is almost always something to challenge in each.
In a scientific/mathematical discussion, precision is more of a virtue than brevity. In fact one of the common problems in discussions with cranks who are on a crusade to debunk some mainstream scientific theory is that they typically throw out short and rather broad (and vague) arguments which may sound plausible on the surface, but which require a lot of detailed explanation to show what is wrong with them. This problem is discussed here, for example:
Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages based on the same mathematical idea: It’s really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way—skim online debates between biologists and earnest ID afficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we’ll tend to assume everyone on the other side thought of it themselves and had some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”
billschnieder said:
The principle of common cause used by Bell as P(AB|C) = P(A|C)P(B|C) is not universally valid even if C represents complete information about all possible causes in the past light cones of A and B.
Not in general, no. But in a universe with local realist laws, it is universally valid.
billschnieder said:
This is because if A and B are marginally correlated but uncorrelated conditioned on C, it implies that C screens off the correlation between A and B, and in some cases it is not possible to define C such that it screens off the correlation between A and B.
It is always possible to define such a C in a relativistic universe with local realist laws, if A and B happen in spacelike-separated regions: if C represents the complete information about all local physical variables in the past light cones of the regions where A and B occurred (or in spacelike slices of these past light cones taken at some time after the last moment the two past light cones intersected, as I suggested in post 61/62 on the other thread and as illustrated in fig. 4 of this paper on Bell's reasoning), then it is guaranteed that C will screen off correlations between A and B. Nothing in the Stanford article contradicts this, so if you disagree with it, please explain why in your own words (preferably addressing my arguments in post #41 about why to suggest otherwise would imply FTL information transmission, like telling me whether you think the example where the results of a race on Alpha Centauri were correlated with a buzzer going off on Earth is compatible with local realism and relativity). If you think the Stanford Encyclopedia article does contradict it, can you tell me specifically which part and why? In your quote from the Stanford Encyclopedia saying why common cause principles can fail, the first part was about "molecular chaos" and assumptions about the exact microscopic state of macroscopic systems:
This explains why the three principles we have discussed sometimes fail. For the demand of initial microscopic chaos is a demand that microscopic conditions are uniformly distributed (in canonical coordinates) in the areas of state-space that are compatible with the fundamental laws of physics. If there are fundamental (equal time) laws of physics that rule out certain areas in state-space, which thus imply that there are (equal time) correlations among certain quantities, this is no violation of initial microscopic chaos. But the three common cause principles that we discussed will fail for such correlations.
Note that of the three common cause principles discussed, none (including the one by Penrose and Percival) actually allowed C to involve the full details about every microscopic physical fact at some time in the past light cone of A or B. This is why assumptions like "microscopic chaos" are necessary--because you don't know the full microscopic conditions, you have to make assumptions like the one discussed in section 3.3:
Nonetheless such arguments are pretty close to being correct: microscopic chaos does imply that a very large and useful class of microscopic conditions are independently distributed. For instance, assuming a uniform distribution of microscopic states in macroscopic cells, it follows that the microscopic states of two spatially separated regions will be independently distributed, given any macroscopic states in the two regions. Thus microscopic chaos and spatial separation is sufficient to provide independence of microscopic factors.
Also note earlier in the same section where they write:
there will be no screener off of the correlations between D and E other than some incredibly complex and inaccessible microscopic determinant. Thus common cause principles fail if one uses quantities D and E rather than quantities A and B to characterize the later state of the system.
So here common cause principles only fail if you aren't allowed to use the full set of microscopic conditions which might contribute to the likelihood of different observable outcomes; they acknowledge that if you did have such information it could screen off correlations in these outcomes.

The next part of the Stanford article that you quoted dealt with QM:
Similarly, quantum mechanics implies that for certain quantum states there will be correlations between the results of measurements that can have no common cause which screens all these correlations off. But this does not violate initial microscopic chaos. Initial microscopic chaos is a principle that tells one how to distribute probabilities over quantum states in certain circumstances; it does not tell one what the probabilities of values of observables given certain quantum states should be. And if they violate common cause principles, so be it. There is no fundamental law of nature that is, or implies, a common cause principle. The extent of the truth of common cause principles is approximate and derivative, not fundamental.
What you seem to miss here is that the idea that quantum mechanics violates common cause principles is explicitly based on assuming that Bell is correct and that the observed statistics in QM are incompatible with local realism. From section 2.1:
One might think that this violation of common cause principles is a reason to believe that there must then be more to the prior state of the particles than the quantum state; there must be ‘hidden variables’ that screen off such correlations. (And we have seen above that such hidden variables must determine the results of the measurements if they are to screen of the correlations.) However, one can show, given some extremely plausible assumptions, that there can not be any such hidden variables. There do exist hidden variable theories which account for such correlations in terms of instantaneous non-local dependencies. Since such dependencies are instantaneous (in some frame of reference) they violate Reichenbach's common cause principle, which demands a prior common cause which screens off the correlations. (For more detail, see, for instance, van Fraassen 1982, Elby 1992, Grasshoff, Portmann & Wuethrich (2003) [in the Other Internet Resources section], and the entries on Bell's theorem and on Bohmian mechanics in this encyclopedia.)
So, in no way does this suggest they'd dispute that in a universe that did obey local realist laws, it would be possible to find a type of "common cause" involving detailed specification of every microscopic variable in the past light cones of A and B which would screen off correlations between A and B. What they're saying is that the actual statistics seen in QM rule out the possibility that our universe actually obeys such local realist laws.
billschnieder said:
2) Not all correlations necessarily have a common cause and suggesting that they must is not appropriate.
I never suggested that all correlations have a common cause, unless "common cause" is defined so broadly as to include the complete set of microscopic conditions in two non-overlapping light cones (two totally disjoint sets of events, in other words). For example, if aliens outside our cosmological horizon (so that their past light cone never overlaps with our past light cone at any moment since the Big Bang) were measuring some fundamental physical constant (say, the fine structure constant) which we were also measuring, the results of our experiments would be correlated due to the same laws of physics governing our experiments, not due to any events in our past which could be described as a "common cause". But it's you who's bringing up the language of "causes", not me; I'm just talking about information which causes you to alter probability estimates, and that's all that's necessary for Bell's proof. In this example, if our universe obeyed local realist laws, and an omniscient being gave us a complete specification of all local physical variables in the past light cone of the alien's measurement (or in some complete spacelike slice of that past light cone) along with a complete specification of the laws of physics and an ultra-powerful computer that we could use to evolve these past conditions forward to make predictions about what will happen in the region of spacetime where the aliens make the measurement, then our estimate of the probabilities of different outcomes in that region would not be altered in the slightest if we learned additional information about events in our own region of spacetime, including the results of an experiment similar to the alien's.
billschnieder said:
3) Either God is calculating on both sides of the equation or he is not. You can not have God on one side and the experimenters on another.
It is theoretical physicists calculating the equations based on imagining what would have to be true if they had access to certain information H which is impossible to find in practice, given certain assumptions about the way the fundamental laws of physics work. But since they don't actually know the value of H, they may have to sum over all possible values of H that would be compatible with these assumptions about fundamental laws. For example, do you deny that under the assumption of local realism, where H is taken to represent full information about all local physical variables in the past light cones of A and B, the following equation should hold?

P(AB) = sum over all possible values of H: P(AB|H)*P(H)

Note that this is the type of equation that allowed me to reach the final conclusion in the scratch lotto card example from post #18; I assumed that the perfect correlation when Alice and Bob scratched the same box was explained by the fact that they always received a pair of cards with an identical set of "hidden fruits" behind each box, and then I showed that P(Alice and Bob find the same fruit when they scratch different boxes|H) was always greater than or equal to 1/3 (assuming they choose which box to scratch randomly with a 1/3 probability of each on a given trial, and we're just looking at the subset of trials where they happened to choose different boxes) for all possible values of H:

H1: box1: cherry, box2: cherry, box3: cherry
H2: box1: cherry, box2: cherry, box3: lemon
H3: box1: cherry, box2: lemon, box3: cherry
H4: box1: cherry, box2: lemon, box3: lemon
H5: box1: lemon, box2: cherry, box3: cherry
H6: box1: lemon, box2: cherry, box3: lemon
H7: box1: lemon, box2: lemon, box3: cherry
H8: box1: lemon, box2: lemon, box3: lemon

If the probability is greater than or equal to 1/3 for each possible value of H, then obviously regardless of the specific values of P(H1) and P(H2) and so forth, the probability on the left of this equation:

P(Alice and Bob find the same fruit when they scratch different boxes) = sum over all possible values of H: P(Alice and Bob find the same fruit when they scratch different boxes|H)*P(H)

...must end up being greater than or equal to 1/3 as well. Therefore if we find the actual frequency of finding the same fruit when they choose different boxes is 1/4, we have falsified the original theory that they are receiving cards with an identical set of predetermined "hidden fruit" behind each box.
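
A short enumeration makes the bound concrete (a sketch of the calculation just described, added for illustration): for each of the eight hidden-fruit assignments H1-H8, the probability of matching fruits on different-box trials is at least 1/3.

```python
from itertools import product

fruits = ('cherry', 'lemon')

# Enumerate H1..H8: the same hidden fruits behind boxes 1-3 on both cards.
for H in product(fruits, repeat=3):
    same = total = 0
    for a, b in product(range(3), range(3)):
        if a == b:
            continue        # keep only trials with DIFFERENT boxes scratched
        total += 1
        same += (H[a] == H[b])
    print(H, same / total)  # 1.0 for all-same H, else exactly 1/3

# Every P(same|H) >= 1/3, so P(same) = sum_H P(same|H)*P(H) >= 1/3 for
# ANY distribution P(H); an observed rate of 1/4 falsifies the model.
```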

In this example, even if the theory about hidden fruit had been correct, I don't actually know the full set of hidden fruit on each trial (say the cards self-destruct as soon as one box is scratched). So, any part of the equation involving H is imagining what would have to be true from the perspective of a "God" who did have knowledge of all the hidden fruit. And yet you see the final conclusion is about the actual probabilities Alice and Bob observe on trials where they choose different boxes to scratch. Please tell me whether your general arguments about it being illegitimate to have a human perspective on one side of an equation and "God's" perspective on another would apply to the above as well (i.e. whether you disagree with the claim that the premise that each card has an identical set of hidden fruits should imply a probability of 1/3 or more that they'll find the same fruit on trials where they randomly select different boxes).
billschnieder said:
Therefore if God is the one calculating the inequality, you cannot expect a human experimenter, who knows nothing about H, to collect data consistent with the inequality.

Using P(AB|H) = P(A|H)P(B|H) to derive an inequality means that the context of the inequalities is one in which there is no longer any correlation between A and B, since it has been screened off by H. Therefore, for data to be comparable to the inequalities, it must be screened off with H. Note that P(AB) = P(A|H)P(B|H) is not a valid equation.
It's true that this is not a valid equation, but if P(AB|H)=P(A|H)P(B|H) applies to the situation we are considering, then P(AB) = sum over all possible values of H: P(A|H)P(B|H)P(H) is a valid equation, and it's directly analogous to equation (14) in http://hexagon.physics.wisc.edu/teaching/2010s%20ph531%20quantum%20mechanics/interesting%20papers/bell%20on%20epr%20paradox%20physics%201%201964.pdf .
 
  • #69
JesseM said:
Please tell me whether your general arguments about it being illegitimate to have a human perspective on one side of an equation and "God's" perspective on another would apply to the above as well
The point is that certain assumptions are made about the data when deriving the inequalities, and those assumptions must also hold in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that for any possible combination of diagnosis results, the Leggett-Garg inequality will not be violated, as the result of the above expression will always be >= -1 so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality, and we can drop the indices and write the inequality based only on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b) and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as often. The doctors, not knowing or having any reason to suspect that the date or location of examination has any influence, decide to designate their patients based only on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed around +1/-1, and they are completely baffled. How can single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be superluminal influence between the two cities.

But there are other, more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will result in what they observed:

- on even days, Aa = +1 and Ac = -1 in both cities, while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days, all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data so as to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al., "Possible experience: From Boole to Bell", EPL 87, No. 6, 60007 (2009).
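
A minimal simulation of the two-doctor variation (a sketch added here, using the even/odd-day rule above; it is not code from the paper):

```python
def diagnose(city, patient, day):
    # Lille sees patients of type a and b; Lyon sees b and c.
    sign = 1 if day % 2 == 0 else -1  # all signs flip on odd days
    base = {'a': 1, 'b': 1 if city == 'Lille' else -1, 'c': -1}[patient]
    return sign * base

days = range(1000)
avg = lambda xs: sum(xs) / len(xs)

ab = avg([diagnose('Lille', 'a', n) * diagnose('Lyon', 'b', n) for n in days])
ac = avg([diagnose('Lille', 'a', n) * diagnose('Lyon', 'c', n) for n in days])
bc = avg([diagnose('Lille', 'b', n) * diagnose('Lyon', 'c', n) for n in days])
print(ab + ac + bc)  # -3.0: "violates" the >= -1 bound

# Yet each single outcome looks random (half +1, half -1 over the days):
print(avg([diagnose('Lille', 'a', n) for n in days]))  # 0.0
```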

Note that in deriving Bell's inequalities, Bell used Aa(l), Ab(l), Ac(l), where the hidden variables (l) are the same for all three angles. For this to correspond to the Aspect-type experimental situation, the hidden variables must be exactly the same for all the angles, which is an unreasonable assumption because each particle could have its own hidden variables, the measurement equipment could each have their own hidden variables, and the time of measurement after emission is itself a hidden variable. So it is more likely than not that the hidden variables will be different for each measurement. However, in actual experiments the photons are only measured in pairs (a,b), (a,c) and (b,c). The experimenters, not knowing the exact nature of the hidden variables, cannot possibly collect the data in a way that ensures the cyclicity is preserved. Therefore, it is not possible to perform an experiment that can be compared with Bell's inequalities.
 
  • #70
billschnieder said:
Consider a certain disease that strikes persons in different ways depending on circumstances.

...

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data so as to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al., "Possible experience: From Boole to Bell", EPL 87, No. 6, 60007 (2009).

Great example, pretty much demonstrates everything I have been saying all along:

a) There is a dataset which is realistic, i.e. you can create a dataset which presents data for properties not actually observed;

But...

b) The sample is NOT representative of the full universe, something which is not a problem with Bell since it assumes the full universe in its thinking; i.e. your example is irrelevant - if you want to attack the Fair Sampling Assumption then you should label your thread as such since one has nothing to do with the other;

c) By way of comparison to Bell, it does not agree with the predictions of QM; i.e. obviously QM does not say anything about doctors and patients in your example; however, you would need something to compare it to and you don't really do that; your example is just playing around with logic.

It would be nice if instead of attempting to attack Bell, you would work on first understanding Bell. Then once you understand it, look for holes.
 
  • #71
DrChinese said:
Great example, pretty much demonstrates everything I have been saying all along:

a) There is a dataset which is realistic, i.e. you can create a dataset which presents data for properties not actually observed;
Huh?

b) The sample is NOT representative of the full universe, something which is not a problem with Bell since it assumes the full universe in its thinking; i.e. your example is irrelevant
Look at Bell's equation (15), where he writes

1 + P(b,c) >= | P(a,b) - P(a,c)|

Do you see the cyclicity I mentioned in my previous post? Bell assumes that the b in P(b,c) is the same b in P(a,b), that the a in P(a,b) is the same a in P(a,c), and that the c in P(a,c) is the same c in P(b,c). The inequalities will fail if these assumptions do not hold. One way to avoid this would have been to start the equations using P(AB|H) = P(A|H)P(B|AH); the term P(B|AH) reminds us not to confuse P(B|AH) with P(B|CH).
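
Here is a small illustration of that cyclicity (a sketch added here, assuming predetermined outcomes and perfect anticorrelation B = -A, as in Bell's setup): the bound holds for every individual hidden-variable assignment precisely because the same A(b) and A(c) values feed two terms each.

```python
from itertools import product

# Predetermined outcomes for one value of the hidden variable:
# A(x) = +/-1 on Alice's side, B(x) = -A(x) on Bob's side.
for Aa, Ab, Ac in product([1, -1], repeat=3):
    pab = Aa * -Ab   # the product A(a)B(b) for this hidden state
    pac = Aa * -Ac
    pbc = Ab * -Ac
    assert 1 + pbc >= abs(pab - pac)   # eq. (15), term by term
# The bound survives averaging over any distribution of hidden states,
# but only because the SAME Ab value appears in pab and pbc (and the
# same Ac in pac and pbc) -- the cyclicity in question.
print("eq. (15) holds for all 8 predetermined assignments")
```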

Now fast-forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, therefore for each iteration you need to index the data according to at least such factors as time of measurement, local hidden variables of the measuring instruments, and local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it is very clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Sure, this proves that the fair sampling assumption is not valid unless steps have been taken to ensure fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.

c) By way of comparison to Bell, it does not agree with the predictions of QM; i.e. obviously QM does not say anything about doctors and patients in your example.
I assume you know about the Bell-type Leggett-Garg inequality (LGI). The doctors-and-patients example violates the LGI, and so does QM. Remember that violation of the LGI is supposed to prove that realism is false even in the macroscopic realm. Using the LGI, which is a Bell-type inequality, in this example is proper and relevant to illustrate the problem with Bell's inequalities.
 
  • #72
billschnieder said:
1. Huh?

2. Now fast-forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, therefore for each iteration you need to index the data according to at least such factors as time of measurement, local hidden variables of the measuring instruments, and local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it is very clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Sure, this proves that the fair sampling assumption is not valid unless steps have been taken to ensure fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.

1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold (see the sketch at the end of this post). Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that there is ever a day on which these results would not be repeated? In your example, by contrast, the results appear only for particular combinations: if you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.
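
The comparison requested in point 1 can be sketched directly (an illustration added here; the cos^2 form for the photon match rate is the standard QM prediction assumed below):

```python
import math
from itertools import product

# Local realist side: perfect correlation at equal settings forces each
# pair to carry predetermined answers for the settings 0, 120, 240.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
lowest = min(sum(H[i] == H[j] for i, j in pairs) / len(pairs)
             for H in product([1, -1], repeat=3))
print("minimum local realist match rate, different settings:", lowest)  # 1/3

# QM prediction for polarization-entangled photons at relative angle
# 120 degrees:
print("QM match rate:", math.cos(math.radians(120)) ** 2)  # 0.25 < 1/3
```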
 
  • #73
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random; that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?

Below are my thoughts on the experiment.

In a completely random experiment (no correlations) there are 36 possible outcomes. To summarize:

1) Six are same color-same switch
2) Six are different color-same switch
3) Twelve are same color-different switch
4) Twelve are different color-different switch

From (2) above, six runs show different colors when the switches are the same (theta = 0). They are: 11RG, 11GR, 22RG, 22GR, 33RG, and 33GR. In order to match Bell's gedanken experiment these must be converted by the correlation process to the same color.

(6)(cos 0) = (6)(1) = 6, a 100% conversion

When added to the runs in (1) that are same color-same switch, this gives twelve total, or 12/36 = 1/3 of the runs having the same switch setting and the same color.

To conserve the random behavior of the gedanken experiment, another, opposite correlation must occur, in which exactly six runs of same color but different switch settings are converted to a different color. There are twelve such runs: 12RR, 12GG, 21RR, 21GG, 13RR, 13GG, 31RR, 31GG, 23RR, 23GG, 32RR, and 32GG. Therefore, on average, six of these must be converted to a different color. This leaves 6/24 runs that have the same color but different switches.

(12)(cos 120) = (12)(.5) = 6, a 50% conversion

This produces the randomness of the experiment, where 50% of the time all runs will flash the same color, yet when the switches have the same setting the lights flash the same color 100% of the time. That is:

(12/36)(12/12) + (24/36)(6/24) = 1/2

Do the opposite correlations described above cancel the dependence on H and explain the choice of the equation

P(AB|H) = P(A|H)P(B|H)?
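
The arithmetic above can be checked with exact fractions (a sketch added here, using the post's own counts):

```python
from fractions import Fraction as F

same_switch = F(12, 36)         # runs with the same switch setting
same_color_if_same = F(12, 12)  # 100% conversion at theta = 0
diff_switch = F(24, 36)
same_color_if_diff = F(6, 24)   # after the 50% conversion

total = same_switch * same_color_if_same + diff_switch * same_color_if_diff
print(total)  # 1/2: the overall 50/50 randomness is preserved
```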
 
  • #74
DrChinese said:
1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold. Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that there is ever a day in which these results would not be repeated? While in your example, the results repeat occasionally. If you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.

I don't think you have understood my critique. It's hard to figure out what you are talking about because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequality. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe, because Rowe cannot do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a, b, c equivalent to the angles, 1, 2, 3 equivalent to the stations, and n equivalent to the iteration of the experiment. Since in Aspect-type experiments only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.
 
  • #75
rlduncan said:
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random; that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?
You bring up an interesting point. IFF the hidden variables are completely randomly distributed in space-time, it may be possible to rescue Bell's formulation. But do you think that is a reasonable assumption for quantum particles? Assume for a moment that there is a space-time harmonic hidden variable for photons, with an unknown phase and frequency. Can you devise an algorithm that enables you to sample the hidden variable such that the resulting data is random, without any knowledge that the signal is harmonic or of its phase or frequency?
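
One way to make that question concrete (a sketch added here, with invented parameters):

```python
import math

# Invented parameters: a harmonic hidden variable sampled at a rate
# commensurate with its (unknown) frequency gives biased, non-random
# data -- classic aliasing.
f, phi = 1.0, 0.3   # frequency and phase, unknown to the experimenter

def hidden(t):
    return 1 if math.sin(2 * math.pi * f * t + phi) >= 0 else -1

samples = [hidden(t) for t in range(1000)]  # sample at t = 0, 1, 2, ...
print(sum(samples) / len(samples))          # 1.0: every sample is +1

# Without knowing f and phi, no fixed sampling schedule is guaranteed
# to make the sampled hidden variable look uniformly random.
```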
 
  • #76
It would be easy to simulate Bell's experiment by rolling dice whose faces alternate green and red, with specific instructions given when observing the uppermost face. These instructions can be realized from my first post.
 
  • #77
billschnieder said:
I don't think you have understood my critique. It's hard to figure out what you are talking about because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequality. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe, because Rowe cannot do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a, b, c equivalent to the angles, 1, 2, 3 equivalent to the stations, and n equivalent to the iteration of the experiment. Since in Aspect-type experiments only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.

Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important. Still. Yes, I agree that every experiment ever performed by any experimenter anywhere may fall victim to the idea of the "periodic cycle" subset problem. This of course has absolutely nothing to do with Bell. Perhaps the speed of light is actually 2c after all! Wow, you have your finger on something. So I say you are wasting everyone's time if your assertion is that tests like Rowe are invalid because they too are subsets and fall victim to a hidden bias. That would not be fair science. You are basically saying: evidence is NOT evidence.

The problem with this logic is that it STILL means that QM is incompatible with local realism. As has been pointed out by scientists everywhere, maybe QM is wrong (however unlikely). That STILL does not change Bell. Do you follow any of this? Because you seem like a bright, fairly well-read person.

If you want to debate Bell tests, which would be fine, you need to start by acknowledging what Bell itself says. Then work from there. Clearly, no local realistic theory will be compatible with QM, and QM is well supported. There are many, such as Hess (I believe you referenced him earlier), who attack Bell tests. They occasionally attack Bell too. But the only place they have ever gained any real traction is in attacking the Fair Sampling Assumption. However, this assumption acknowledges the validity of Bell. That argument is completely different from the one you assert. Specifically, if the Fair Sampling Assumption is invalid, then QM is in fact WRONG. Bell, however, is still RIGHT.

ON THE OTHER HAND: if you want to debate whether the Fair Sampling Assumption can be modeled into a Bell test, I would happily debate that point. As it happens, I have spent a significant amount of time tearing into the De Raedt model (if you know it). After an extended analysis, I think I have learned the secret to disassembling anything you would care to throw at me. But a couple of points: I will discuss something along the lines of a photon test using PDC, but will not discuss doctors in Africa. Let's discuss the issues that make Bell relevant, and those are not hypothetical tests. There are real datasets to discuss. And there are a few more twists to modeling a local realistic theory these days - since we know from Bell that such a theory's predictions must differ from those of QM.
 
  • #78
billschnieder said:
The point is that certain assumptions are made about the data when deriving the inequalities, that must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n) where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that by combining any possible diagnosis results, the Legett-Garg inequalitiy will not be violated as the result of the above expression will always be >= -1, so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality and we can therefore drop the indices and write the inequality only based on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b) and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as much. The doctors not knowing, or having any reason to suspect that the date or location of examinations has any influence decide to designate their patients only based on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed over +1/-1, and they are completely baffled. How can the single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will result in what they observed:

- on even days, Aa = +1 and Ac = -1 in both cities, while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days, all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> >= -3
is the correct bound, which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data so as to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.
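
A minimal numerical sketch of both cases, assuming the even/odd-day mechanism described above (all names are illustrative):

Code:
from itertools import product

# Cyclic case: one value per patient type per day, as in the
# three-doctor study. XY + XZ + YZ >= -1 for every +/-1 assignment.
assert all(x*y + x*z + y*z >= -1
           for x, y, z in product((1, -1), repeat=3))

# Two-doctor case with the hidden even/odd-day rule described above.
def lille(patient, day):
    sign = 1 if day % 2 == 0 else -1   # all signs reverse on odd days
    return {"a": sign, "b": sign}[patient]

def lyon(patient, day):
    sign = 1 if day % 2 == 0 else -1
    return {"b": -sign, "c": -sign}[patient]

N = 1000
avg = sum(lille("a", n) * lyon("b", n)
          + lille("a", n) * lyon("c", n)
          + lille("b", n) * lyon("c", n)
          for n in range(N)) / N
print(avg)  # -3.0: below the -1 bound, from purely local rules

Every same-city triple respects the bound; only the cross-city products, where the two b's come from different microscopic situations, break it.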

For a fuller treatment of this example, see Hess et al., "Possible experience: From Boole to Bell", EPL 87, No. 6, 60007 (2009).
I'm not familiar with the Leggett-Garg inequality, and Wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial, and that a violation of these conditions is responsible for the violation of the inequality in your example above... is that incorrect?
billschnieder said:
Note that in deriving Bell's inequalities, Bell used Aa(l), Ab(l) Ac(l), where the hidden variables (l) are the same for all three angles.
If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about? The proof of Bell's theorem does usually include a "no-conspiracy" assumption where it's assumed that the probability the particles will have different possible predetermined spins on each detector angle is independent of the probability that the experimenter will choose different possible detector angles.
billschnieder said:
For this to correspond to the Aspect-type experimental situation, the hidden variables must be exactly the same for all the angles, which is an unreasonable assumption because each particle could have its own hidden variables, the measurement equipment could each have their own hidden variables, and the time of measurement after emission is itself a hidden variable.
In the case of certain inequalities, like the one that says the probability of identical results when different angles are chosen must be greater than or equal to 1/3, it's assumed that there's a perfect correlation between the results whenever the experimenters choose the same angle; you can prove that the only way this is possible in a local realist universe is if the hidden variables already completely predetermine what results will be found for each detector setting, so if the hidden variables are restricted to the past light cones of the measurement regions, then any additional hidden variables in the measurement regions can't affect the outcome. I discussed this in posts 61/62 of the other thread, along with the "no conspiracy" assumption. Other inequalities, like the CHSH inequality and the one you mentioned, don't require an assumption of perfect correlations; in those cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome. But Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.
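
For the perfect-correlation case, that bound can be checked by brute force. A minimal sketch, assuming three settings, shared predetermined +1/-1 answers, and each pair of different settings equally likely:

Code:
from itertools import product

# If a shared triple of predetermined answers fixes both sides' results
# for three settings, then whenever *different* settings are chosen,
# the probability of identical results is at least 1/3.
different = [(i, j) for i in range(3) for j in range(3) if i != j]
for triple in product((1, -1), repeat=3):
    rate = sum(triple[i] == triple[j] for i, j in different) / len(different)
    assert rate >= 1/3
print("match rate >= 1/3 for every predetermined triple")

The minimum, exactly 1/3, occurs for triples like (+1, +1, -1).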

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified. Are you ever going to address my arguments about past light cones in post #41 in your own words, rather than just trying to discount them with quotes from the Stanford Encyclopedia article, which turned out to be irrelevant?
 
  • #79
DrChinese said:
Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important.
On the contrary, I have dealt with them directly. I do not see that you have responded to any of the issues I have raised:

1) I have demonstrated, I believe very convincingly, that it is possible to violate Bell-type inequalities simply by collecting data in such a way that the cyclicity is not preserved.
2) I have shown this using an example which is macroscopic, so that there is no doubt in any reasonable person's mind that it is local and real. Yet by not knowing the exact nature of the hidden elements of reality in play, it is very easy to violate the Bell-type inequalities.
3) Furthermore, I have given specific reasons why the inequality was violated, by providing a locally causal explanation of how the hidden elements generate the data. It is therefore very clear that violation of the inequalities in the example I provided is NOT due to spooky action at a distance.
4) For this macroscopic example, I have used the Bell-type inequality normally used in macroscopic situations (the Leggett-Garg inequality), which is violated by QM and was supposed to prove that the time evolution of a system cannot be understood classically. My example, which is locally causal, real and classical, also violates the inequality; that should be a big hint -- QM and local reality agree with each other here. Remember that QM and Aspect-type experiments also agree with each other.
5) The evidence is very clear to me. On the one hand we have QM and experiments, which agree with each other. On the other hand we have Bell-type inequalities, violated by everything. There is only one odd man out in the mix, and it is neither QM nor the experiments. Evidence is evidence.
6) Seeing that Bell-type inequalities are the odd man out, my interest in this thread was to discuss (a) how Bell's ansatz represents the situation he is trying to model, and (b) how the inequalities derived from the ansatz are comparable to actual experiments performed. The argument mentioned in my first post can therefore be expanded as follows:

i) Bell's ansatz (equation 2 in his paper) correctly represents all locally causal theories
ii) Bell's ansatz necessarily leads to Bell's inequalities
iii) Aspect-type Experiments are comparable to Bell's inequalities
iv) Aspect-type Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of Aspect-type experiments is not locally causal.

In order for the conclusion to be valid, all the premises (i to iv) must be valid. Failure of any one of them is sufficient to kill the argument. We have discussed the validity of (i): JesseM says it is justified, I say it is not, for reasons I have explained, and we can agree to disagree there. However, even if JesseM is right -- and I don't believe he is -- (iii) is not justified, as I have shown. Therefore the conclusion is not valid.

I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything new that I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue with cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

You may disagree with everything I have said, but any reasonable person will agree that failure of any one of those premises (i to iv) invalidates the argument. It is therefore proper to discuss the validity of each one.

Throughout this thread, I and many others have used examples with cards, balls, fruits, etc. to explain a point, because such examples are easier to visualize. The doctors-and-patients example is no different.
 
  • #80
billschnieder said:
I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything new that I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue with cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

And I have already said that all science falls to the same argument you present here. You may as well be claiming that General Relativity is wrong and Newtonian gravity is correct, and that there is a cyclic component that makes it "appear" as if GR is correct. Do you not see the absurdity?

When you discover the hidden cycle, you can collect the prizes due. Meanwhile, you may want to consider WHY PDC photon pairs with KNOWN polarization don't act as you predict they should. That should be a strong hint that you are factually wrong even in this absurd claim.
 
  • #81
billschnieder:

In other words: saying that you can find a "cyclic" solution to prove Bell wrong is easy. I am challenging you to come up with an ACTUAL candidate to back up your claim. Then you will see that Bell is no laughing matter. This is MY claim, that I can take down any example you provide. Remember, no doctors in Africa; we are talking about entangled (and perhaps unentangled) PDC photon pairs.

You must be able to provide a) perfect correlations (Bell mentions this). You must be able to provide b) detection rates that are rotationally invariant (to match the predictions of QM). The results must c) form a random pattern with p(H)=p(V)=50%. And of course, you must be able to provide d) reasonable agreement with the cos^2(theta) rule for the detected subset, while e) respecting Bell's Inequality in the full universe.

Simply show me the local realistic dataset/formulae.
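
For reference, here is a minimal sketch of the target statistics any candidate dataset must reproduce. Note that this generator uses both settings jointly, so it is not itself a local model; it only produces the QM predictions (a) through (d) that a local dataset would have to match:

Code:
import math, random

def qm_match_prob(a, b):
    # QM prediction for polarization-entangled pairs: P(same) = cos^2(a - b)
    return math.cos(a - b) ** 2

def sample_pair(a, b):
    # Each side individually random: p(H) = p(V) = 50% at any angle,
    # and the statistics depend only on (a - b): rotational invariance.
    first = random.choice((+1, -1))
    same = random.random() < qm_match_prob(a, b)
    return first, (first if same else -first)

a, b = 0.0, math.pi / 8
N = 200_000
E = sum(x * y for x, y in (sample_pair(a, b) for _ in range(N))) / N
print(E, math.cos(2 * (a - b)))  # correlation approaches cos(2(a-b))
# Equal settings (a == b) give match probability 1: perfect correlations.
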
======================================

I already did this exercise with a model that has had years of work put in it, the De Raedt model. It failed, but perhaps you will fare better. Good luck! But please note, unsubstantiated claims (especially going against established science) are not well received around here. You have placed one out here, and you should either retract it or defend it.
 
  • #82
JesseM said:
I'm not familiar with the Leggett-Garg inequality, and wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial perhaps, and that a violation of these conditions is responsible for the violation of the inequality in your example above...is that incorrect?

What we have been discussing is the assumption that whatever is causing the correlations is randomly distributed in the data-taking process; in other words, that it has been screened off. The short of it is that your justification for using the PCC as a definition of local causality requires that it always be possible to screen off the correlation. But experimentally it is impossible to screen off a correlation if you have no clue what is causing it.

It doesn't look like we are going to agree on any of this. So we can agree to disagree.

If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about?
Defining l vaguely as all physical variables in the past light cone of the region of measurement fails to consider that not all subsets of l may be in play for A, or for B. Some subsets of l may actually be working against the result while others are working for it. Those supporting the result at A may be working against the result at B, and vice versa. That is why the Stanford Encyclopedia mentions that it is not always possible to define l such that it screens off the correlation. It is even harder if you have no idea what is actually happening.

Another way of looking at it is as follows. If your intention is to say l is so broad as to represent all pre-existing facts in the local universe, then there is no reason to even include it, because the marginal probability says the same thing. What, in your opinion, will then be the effective difference between P(A) and P(A|l)? There will be none, and your equation returns to P(AB) = P(A)P(B), which contradicts the fact that there is a marginal correlation between A and B.

Remember that P(A) = P(A,H) + P(A,notH)

If H is defined so broadly that P(H) = 1, then P(notH) = 0, and P(A) = P(A,H).

As I already explained, in probability theory a lack of causal relationship is not sufficient justification for assuming a lack of logical relationship. In the Bell situation, we are not interested just in an angle but in a joint result between two angles. It is not possible to determine that a coincidence has happened without taking the other result into account, so including a P(B|AH) term is not because we think A is causing B but because we will be handling the data in a joint manner. I don't expect us to agree on this, so we can agree to disagree.

Other inequalities, like the CHSH inequality and the one you mentioned, don't require an assumption of perfect correlations; in those cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome. But Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.
The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= |P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, over which he has no control, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2) + q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. The same argument as above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

I have already explained this one.
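
The algebraic role of that shared-term structure can be sketched numerically; a minimal check, with variable names following the CHSH form above:

Code:
from itertools import product

# With shared +/-1 outcomes across terms (the cyclicity), the CHSH
# combination is bounded by 2 for every assignment:
assert all(abs(d1*y2 - a1*y2) + abs(d1*b2 + a1*b2) <= 2
           for d1, a1, y2, b2 in product((1, -1), repeat=4))

# If each term instead comes from an independent realization, the four
# products are unconstrained and the combination can reach 4:
best = max(abs(p1 - p2) + abs(p3 + p4)
           for p1, p2, p3, p4 in product((1, -1), repeat=4))
print(best)  # 4

Whether the experimental pairings must be treated as shared outcomes or as independent realizations is, of course, the point in dispute.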

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified.
Not at all; see my last post for an explanation of why. Your arguments boil down to the assertion that the PCC is universally valid for locally causal hidden variable theories. I disagree with that; you disagree with me. I have presented my arguments and you have presented yours, so be it. We can agree to disagree about that; there is no need to keep going "no it doesn't ... yes it does ... no it doesn't ..." and so on.
 
  • #83
billschnieder said:
...The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= |P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same.

Oops... Are you sure about that?

If b is not the same b... How does it work out for the (b, b) case?

:eek:

You see, b and b can be tested at all different times too! According to your model, you won't get Alice's b and Bob's b to be the same. Bell mentions this requirement. Better luck next time.
 
  • #84
JesseM said:
P(AB|H)=P(A|H)P(B|AH) is a general statistical identity that should hold regardless of the meanings of A, B, and H, agreed? So to get from that to P(AB|H)=P(A|H)P(B|H), you just need to prove that in this physical scenario, P(B|AH)=P(B|H), agreed? If you agree, then just let H represent an exhaustive description of all the local variables (hidden and others) at every point in spacetime which lies in the past light cone of the region where measurement B occurred. If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission, as I argued in post #41.

Suppose H includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers a and b, respectively. Then when, e.g., |a-b| = 0 and |La - Lb| = 0, we have P(B|AH) /= P(B|H).

In this case, we can say with certainty that if A = 1, then B = 1, and if A = 0, then B = 0. So our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.
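
A minimal sketch of the case just described, assuming H fixes |a-b| = 0 and |La - Lb| = 0 but does not include the individual outcomes themselves:

Code:
import random

# Perfectly correlated pairs; each side individually 50/50.
N = 100_000
pairs = [(s, s) for s in (random.choice((0, 1)) for _ in range(N))]

p_B = sum(b for _, b in pairs) / N               # P(B=1|H) ~ 0.5
b_when_A1 = [b for a, b in pairs if a == 1]
p_B_given_A = sum(b_when_A1) / len(b_when_A1)    # P(B=1|AH) = 1.0
print(p_B, p_B_given_A)

Whether H may instead be broadened to include the shared emission value itself, so that P(B|H) is already 0 or 1, is exactly the point in dispute in this thread.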

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies FTL information transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. If |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the QM treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.
 
  • #85
ThomasT said:
Suppose H includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers a and b, respectively. Then when, e.g., |a-b| = 0 and |La - Lb| = 0, we have P(B|AH) /= P(B|H).

In this case, we can say with certainty that if A = 1, then B = 1, and if A = 0, then B = 0. So our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies FTL information transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. If |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the QM treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.

ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B)). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
 
  • #86
DrChinese said:
ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B)). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.
 
  • #87
ThomasT said:
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.

And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
 
  • #88
DrChinese said:
And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
If they're not polarization entangled, then |La - Lb| > 0.
 
  • #89
ThomasT said:
If they're not polarization entangled, then |La - Lb| > 0.

Oops, that is not correct at all. For Type I PDC, they are |HH>. For Type II, they are |HV>.

That means, when you observe one to be H, you know with certainty what the polarization of the other is. Couldn't match your requirement any better. Spin conserved and known to have a definite value. This is simply a special case of your assumption. According to your hypothesis, these should produce Entangled State statistics. But they don't.

(Now of course, you deny that there is such a thing as Entanglement in the sense that it can be a state which survives the creation of the photon pair. But I don't.)
 
  • #90
Looking at the overall big picture here (and this may indeed be a stretch): with the possibility that future events affect past events, and the entanglement observed in photons, I wonder about the neurological association. The synaptic cleft is 20 nm, right? So we're dealing with quantum effects in memory.

From the perspective of how we associate time internally, it's based on memory recall of the past; if your memory recall or recording ability is altered, your sense of time is too. Is it possible that the entanglement goes further than the experiment tests? So that if a future measurement changes the past, it also changes your memory due to the overall entanglement? How would one even know it occurred?

Looking at the photon in each time frame, the energy should be the same or it violates the conservation of energy. Then it's everywhere in each time frame, and if it's everywhere then there is no such thing as discrete time for a photon. Am I twisting things up too much here?
 
