Trying to Understand Bell's reasoning

  • Thread starter: billschnieder
  • #51
billschnieder said:
Consistent with EPR, I can predict the observed color in a specific context if I know everything about all the elements of reality that are part of the specific context. Yet I cannot say the color of the sun exists prior to realization of the specific context. Therefore observables having definite values prior to observation is definitely not the EPR definition. The EPR definition is "existence of elements of reality which deterministically result in the observables", such that it is possible to predict in advance what would obtain given all the parameters of a specific context.

I don't know if you are talking about semantics or substance.

According to EPR, if I can predict with certainty the result of a measurement of Bob without first observing or disturbing Bob, then there is an element of reality in Bob's observable. There need be no determinism involved regarding Bob, and the outcome could be completely random with no apparent cause. It only needs to be predictable in advance. I would say, by most standards, that means it has a specific value. I don't need to know anything about the context other than what it takes to predict, either. Now, how do you read EPR differently? My reading is about as exact as can be short of quoting EPR, and I assume you have read it. What is there to question here?
 
  • #52
JesseM said:
I never once said that observables have definite values prior to observation...

This was the EPR conclusion. :smile: And where Bell started from. So it is relevant.
 
  • #53
Tell me if I got this right: from what I understand, this isn't about the probability of permutations from an unknown group but the permutations of a 'known' group.

I take it like this: I have a ball that is 1/2 black and 1/2 white, and when split it will form 1 black and 1 white ball. According to classical physics you are always going to end up with that arrangement, measured or not, whereas QM states that until measured you could have 2 of the same color, and that measuring the one automatically sets the other.

I'm a bit confused as to why this is a problem. Having the 'envelope' of the experiment be a known, contained value doesn't remove the need to measure at least one variable to know the other.

Is the confusing bit that measuring one value still does not mean that what you measured will determine the other? So that if, let's say, you measure a white ball and assume the other is black, but upon receiving the information that the other is white as well, it changes your previously measured value, since you've become aware of the other state? And that change would have to alter the measurement in 'negative' values.

Strange as it seems, that makes sense to me. I probably do not have a conventional view of photons, though, which allows me to accept that possibility, however odd. I think in the classical form that assumption can be made, but not by looking at the individual equations; you have to look at the whole picture to infer the possibility. I probably sound crazy; I'm just going on how my meager view of physics is to look at the entire range together instead of separately.
 
  • #54
madhatter106 said:
Tell me if I got this right: from what I understand, this isn't about the probability of permutations from an unknown group but the permutations of a 'known' group.

I take it like this: I have a ball that is 1/2 black and 1/2 white, and when split it will form 1 black and 1 white ball. According to classical physics you are always going to end up with that arrangement, measured or not, whereas QM states that until measured you could have 2 of the same color, and that measuring the one automatically sets the other.

If Alice is one color, Bob is always the expected color. That is not what is in question. And as long as you look at the issue that way - as EPR did - there is the possibility of a classical solution.

The issue has to do with when you look at shades, i.e. angles that are not 90 or 180 degrees apart. At various settings, 0/120/240 being a great one to study, things stop making sense. You must look at that example in detail (or one like it) to understand anything. Or go to the "DrChinese Easy Math" page (just google that) and it lays it out. You already follow the 1/3 bit, so the next part is to realize that 1/3 is a lower limit on the match rate for local realism, and that QM (and experiment) give a value of 1/4. As a result, the EPR logic (elements of reality) is refuted.
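To see the 1/3 limit without any algebra, here is a minimal sketch (my own illustrative Python, not something from the page): enumerate every possible set of predetermined answers at the three settings and count how often two different, randomly chosen settings match.

from itertools import product
import math

# All 8 possible sets of predetermined +/- answers at the settings 0/120/240.
pairs = [(0, 1), (0, 2), (1, 2)]  # the three ways to pick two different settings

lowest = 1.0
for hidden in product('+-', repeat=3):
    matches = sum(hidden[i] == hidden[j] for i, j in pairs)
    lowest = min(lowest, matches / len(pairs))

print(lowest)                            # 0.333...: no assignment does worse
print(math.cos(math.radians(120)) ** 2)  # 0.25: QM's match rate 120 degrees apart

Every predetermined assignment yields a match rate of either 1/3 or 1, so any mixture of them averages to at least 1/3, while QM sits at 1/4.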
 
  • #55
DrChinese said:
If Alice is one color, Bob is always the expected color. That is not what is in question. And as long as you look at the issue that way - as EPR did - there is the possibility of a classical solution.

The issue has to do with when you look at shades, i.e. angles that are not 90 or 180 degrees apart. At various settings, 0/120/240 being a great one to study, things stop making sense. You must look at that example in detail (or one like it) to understand anything. Or go to the "DrChinese Easy Math" page (just google that) and it lays it out. You already follow the 1/3 bit, so the next part is to realize that 1/3 is a lower limit on the match rate for local realism, and that QM (and experiment) give a value of 1/4. As a result, the EPR logic (elements of reality) is refuted.

I did go through most of it last night, and thank you, it was a good read. When I see the example of 0/120/240 I instantly go back to trig and the periodic function of those values. The cos^2 value with respect to theta is integral to the outcome.

Does the question become: why, at ratios other than 1:1:sqrt(2), do things stop making sense? Graphing that ratio will always give a straight line, by its definition, and the other a wave with periodic rates. So anything other than right angles will have anomalous results, especially as cosine or sine of theta approaches infinity, right?

So fundamentally the EM field has some hidden attribute such that when the charge is not perpendicular there are strange results. This would be akin to saying that there is another 'variable' between B and E in the EM field that affects those states, yes?
 
  • #56
billschnieder said:
... Bell's ansatz can not even represent the situation he is attempting to model to start with and the argument therefore fails.
I agree with this statement, but not necessarily for the reason you gave. (I don't fully understand it yet, having read through the thread quickly.)

Bell's formulation is sufficient to rule out a certain set of lhv theories, but it doesn't imply anything about Nature, except that the disparity between Bell's ansatz (and thus Bell inequalities) and the experimental situations makes violations of Bell inequalities useful as indicators of entanglement.

Here are some observations:

1. The hidden variable, H in your notation, is irrelevant in the joint context. Coincidence rate, P(A,B), is solely a function of Theta, the angular difference of the polarizers.

You wrote (replying to another poster):
And yet, those same hidden variables are supposed to be responsible for the correlation. This is the issue that concerns me. Giving examples in which A and B are marginally dependent but conditionally independent with respect to H as you have given, does not address the issue here at all. Instead it goes to show that in your examples, the correlation is definitely due to something other than the hidden variables! Do you understand this?

I think I understand this.

2. The relevant hidden variable is the relationship (wrt some common motional property, usually spin and polarization because of the relative frequency of optical Bell tests) between the entangled entities. This relationship is the physical entanglement, and it is the deep cause of the observed correlations. This relationship, the entanglement, varies so slightly from pair to pair that it's, effectively, a constant, and can only be produced via quantum processes -- and this is accounted for in the QM treatment via an emission model applied to a particular preparation. In other words, QM assumes a local common cause for the entanglement.

3. Bell's locality condition reduces to P(A,B) = P(A)P(B), which is the definition of statistical independence.

4. The observed statistical dependence is essentially due to three local (c-limited) processes: a) the production of entanglement via emission, b) the filtration of the entangled entities by a global measurement parameter, the angular difference of the crossed polarizers, and c) the data matching process, the final link in a local causal chain that ultimately produces the statistical dependence.

The requirements set forth by Bell for an lhv theory of entanglement seem to be at odds with the reality of the experimental situation(s) that produce the correlations that allow the conclusion that entanglement has been produced. So, it shouldn't be surprising that inequalities based on Bell's formulation are violated by Bell tests as well as by the predictions of QM.

However, despite the problematic nature of lhv accounts of entanglement, the foundation of a c-limited, locally causal understanding of entanglement is at hand.
 
  • #57
madhatter106 said:
So fundamentally the EM field has some hidden attribute such that when the charge is not perpendicular there are strange results. This would be akin to saying that there is another 'variable' between B and E in the EM field that affects those states, yes?

That would be an attempt to restore local realism, which just won't be possible. Recall that you can entangle particles at other levels as well, such as momentum/position or energy/time. Although it shows up as one thing for spin, you cannot explain it in the manner you mention.

Even for spin, if you look at it long enough, you realize that there is no solution to the mathematical problems. Bell's Inequality is violated because there is no local realistic solution possible.
 
  • #58
JesseM:
Yes or no, agree or disagree that P(B|L) = P(B|LA) given the definition of L as encompassing all facts about fundamental physical variables (local 'elements of reality') in the past light cone of B?


The equation
P(AB|L) = P(A|L)P(B|L)
is NOT NECESSARILY true given the definition of L as encompassing all facts about fundamental physical variables in the past light cones of A and B. Note the emphasis! So don't give me an example in which it is true and claim that it is always true. In case you are not aware, your claim that the above equation is ALWAYS true for local elements of reality is what is well known as the principle of common cause (PCC). There are numerous treatments showing the problems with it and I don't need to go into that here. Look up Simpson's paradox and Bernstein's paradox.

This is discussed in the Stanford Encyclopedia of Philosophy available online here: http://seop.leeds.ac.uk/entries/physics-Rpcc/

The simple reason the above is not always true is that it is not always possible to specify L such that it "screens off" the correlation, as those paradoxes mentioned above indicate.
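To make the Bernstein point concrete, here is a minimal toy sketch (my own numbers, not from the encyclopedia entry): two fair coins, with A = "first coin heads", B = "second coin heads", and C = "the coins agree". A, B and C are pairwise independent, yet conditioning on C does not factorize A and B:

from itertools import product
from fractions import Fraction

outcomes = list(product('HT', repeat=2))   # HH, HT, TH, TT, equally likely
p = Fraction(1, len(outcomes))

def prob(event):
    return sum(p for o in outcomes if event(o))

A = lambda o: o[0] == 'H'   # first coin heads
B = lambda o: o[1] == 'H'   # second coin heads
C = lambda o: o[0] == o[1]  # the coins agree

pC = prob(C)
print(prob(lambda o: A(o) and B(o) and C(o)) / pC)  # P(AB|C) = 1/2
print((prob(lambda o: A(o) and C(o)) / pC) *
      (prob(lambda o: B(o) and C(o)) / pC))         # P(A|C)P(B|C) = 1/4

Here P(AB|C) = 1/2 while P(A|C)P(B|C) = 1/4: conditioning on a perfectly well-defined variable fails to screen off the dependence structure, and in fact creates a correlation.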

ThomasT:
The requirements set forth by Bell for an lhv theory of entanglement seem to be at odds with the reality of the experimental situation(s) that produce the correlations that allow the conclusion that entanglement has been produced.


I agree.
P(AB|L) = P(A|L)P(B|L)
means that there is no longer any correlation between A and B conditioned on L, because it has been screened off by L. Deriving Bell's inequalities using the above equation implies that the only data (A, B) capable of being compared with the inequalities must be uncorrelated. Collecting such data would require the experimenters to know exactly the nature of the hidden variables. Therefore Bell's inequalities apply only to independent or uncorrelated data.
 
  • #59
billschnieder said:
JesseM:
Yes or no, agree or disagree that P(B|L) = P(B|LA) given the definition of L as encompassing all facts about fundamental physical variables (local 'elements of reality') in the past light cone of B?


The equation
P(AB|L) = P(A|L)P(B|L)
is NOT NECESSARILY true given the definition of L as encompassing all facts about fundamental physical variables in the past light cones of A and B. Note the emphasis! So don't give me an example in which it is true and claim that it is always true.
Given local realism, yes it is. Do you disagree that if P(B|L) was not equal to P(B|LA), that would imply P(A|L) is not equal to P(A|BL), meaning that learning B gives us some additional information about what happened at A, beyond whatever information we could have learned from anything in the past light cone of B (proof in post #41)? Do you disagree that this is a type of FTL information transmission, since we're learning about an event outside our past lightcone that can't be derived from any information in our past light cone? If you do disagree that this is FTL information transmission, can you explain how you would define FTL information transmission which presumably must be forbidden in a local relativistic theory?
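(For reference, the symmetry I am appealing to is immediate from the definition of conditional probability: both ratios reduce to the same quantity,

P(A|BL)/P(A|L) = P(AB|L)/[P(A|L)P(B|L)] = P(B|AL)/P(B|L)

so P(B|L) = P(B|LA) holds exactly when P(A|L) = P(A|BL) does.)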
billschnieder said:
In case you are not aware, your claim that the above equation is ALWAYS true for local elements of reality is what is well known as the principle of common cause (PCC). There are numerous treatments showing the problems with it and I don't need to go into that here. Look up Simpson's paradox and Bernstein's paradox.

This is discussed in the Stanford Encyclopedia of Philosophy available online here: http://seop.leeds.ac.uk/entries/physics-Rpcc/
Can you find any sources that claim the principle of common cause would be violated in a relativistic universe with local realist laws, where the "cause" can stand for every possible local microscopic fact in the past light cone of one of the two events? Most of the problems discussed in the article you link to above arise from the fact that they are trying to find "causes" that are vague macro-descriptions which don't specify all the precise microscopic details which might influence the correlations. Note in the "conclusions" section where they say:
One should also not be interested in common cause principles which allow any conditions, no matter how microscopic, scattered and unnatural, to count as common causes. For, as we have seen, this would trivialize such principles in deterministic worlds, and would hide from view the remarkable fact that when one has a correlation among fairly natural localized quantities that are not related as cause and effect, almost always one can find a fairly natural, localized prior common cause that screens off the correlation. The explanation of this remarkable fact, which was suggested in the previous section, is that Reichenbach's common cause principle, and the causal Markov condition, must hold if the determinants, other than the causes, are independently distributed for each value of the causes. The fundamental assumptions of statistical mechanics imply that this independence will hold in a large class of cases given a judicious choice of quantities characterizing the causes and effects. In view of this, it is indeed more puzzling why common cause principles fail in cases like those described above, such as the coordinated flights of certain flocks of birds, equilibrium correlations, order arising out of chaos, etc. The answer is that in such cases the interactions between the parts of these systems are so complicated, and there are so many causes acting on the systems, that the only way one can get independence of further determinants is by specifying so many causes as to make this a practical impossibility. This, in any case, would amount to allowing just about any scattered and unnatural set of factors to count as common causes, thereby trivializing common cause principles.
So, the types of problems with the "principle of common cause" when we are restricted to these sorts of macroscopically describable causes don't apply to the "principle of common cause" when we are talking about every microscopic physical fact in the past light cone of a particular event. Something similar seems to be true with Simpson's paradox and Bernstein's paradox--for example, look at the last two pages of http://scistud.umkc.edu/psa98/papers/uffink.pdf (presented at http://scistud.umkc.edu/psa98/papers/abstracts.html#uffink), which says:
Also, in order to evade the Simpson paradox, it seems that one can save the principle by specifying that the cause C is a sufficient causal factor with respect to a class of events. It would be reasonable to take this class to include at least all events in the past of C, perhaps also those outside of C's causal future. However, this means one needs to introduce concepts from the space-time background in the principle.

...

Remarkably, a variant of the principle of the common cause taking explicit account of relativistic space-time has been around for a long time, although it is seldom discussed in the philosophical literature. It is Penrose and Percival's (1962) principle of conditional independence.

These authors consider two spacelike separated bounded regions A and B in spacetime, and let C be any region which dissects the union of the past light cones of A and B into two parts, one containing A and the other containing B. Then P(A&B|C) = P(A|C)*P(B|C), where A, B, C are the histories of the regions A, B and C, i.e. complete specifications of all events in those regions.

For our discussion, the salient points in which this formulation differs from other formulations are, first, in this version only non-local correlations are to be explained ... Thirdly, conditional independence is demanded only upon conditionalizing upon the entire history of a region C. This entails that the problems such as Simpson's paradox connected with incomplete specifications of the factors cannot appear.
The paper is titled The Principle of the Common Cause faces the Bernstein Paradox, so presumably when the author says "problems such as Simpson's paradox" this is meant to apply to Bernstein's paradox as well.

Also note that the Stanford Encyclopedia of Philosophy article actually discusses Penrose and Percival's argument about picking a region C which divides the past light cones of A and B, in section 1.3, 'the law of conditional independence'. They state the conclusion of the "law of conditional independence", namely P(A&B|C) = P(A|C)*P(B|C), without attempting to dispute that it should hold in a classical relativistic universe. But they treat this "law of conditional independence" as a different claim from "Reichenbach's common cause principle" which is the main topic of the article (again seemingly because Reichenbach's principle is based on distinct macroscopically identifiable 'causes'), saying "This is a time asymmetric principle which is clearly closely related to Reichenbach's common cause principle and the causal Markov condition ... one cannot derive anything like Reichenbach's common cause principle or the causal Markov condition from the law of conditional independence, and one therefore would not inherit the richness of applications of these principles, especially the causal Markov condition, even if one were to accept the law of conditional independence."

So again, if you think that my version of the common cause principle could fail in a relativistic universe with local realist laws, even if the "common cause" is defined as the complete set of microscopic physical facts in one measurement's past light cone (or in a region of spacetime which divides the overlap of the two past light cones of each measurement as with the 'law of conditional independence' formulation by Penrose and Percival above), you need to either find authors who specifically talk about such detailed specifications, or else actually make the argument yourself rather than trying to dismiss it with vague references to the literature.
 
  • #60
JesseM said:
I never once said that observables have definite values prior to observation...
DrChinese said:
This was the EPR conclusion. :smile: And where Bell started from. So it is relevant.
It is relevant, yes. And once you realize why, in a local realist universe, it must be true that P(AB|H)=P(A|H)P(B|H) for the right choice of H, then you can also show that if there is a perfect correlation between measurement results when the experimenters choose the same detector angles, then in a local realist universe the only way to explain this is if H predetermines what measurement results they will get for all possible angles. But the general conclusion that P(AB|H)=P(A|H)P(B|H) for the right choice of H doesn't require us to start from that assumption, so if bill is unconvinced on this point it's best to try to show why the conclusion would be true even if we don't assume identical detector settings = identical results.
 
  • #61
JesseM said:
It is relevant, yes. And once you realize why, in a local realist universe, it must be true that P(AB|H)=P(A|H)P(B|H) for the right choice of H, then you can also show that if there is a perfect correlation between measurement results when the experimenters choose the same detector angles, then in a local realist universe the only way to explain this is if H predetermines what measurement results they will get for all possible angles. But the general conclusion that P(AB|H)=P(A|H)P(B|H) for the right choice of H doesn't require us to start from that assumption, so if bill is unconvinced on this point it's best to try to show why the conclusion would be true even if we don't assume identical detector settings = identical results.
You seem to understand that the individual measurements and the joint measurements are dealing with two different hidden parameters.

Then it should be clear why P(AB|H) = P(A|H) P(B|H) is a formal requirement that doesn't fit the experimental situation.
 
  • #62
ThomasT said:
You seem to understand that the individual measurements and the joint measurements are dealing with two different hidden parameters.

Then it should be clear why P(AB|H) = P(A|H) P(B|H) is a formal requirement that doesn't fit the experimental situation.
P(AB|H)=P(A|H)P(B|AH) is a general statistical identity that should hold regardless of the meanings of A, B, and H, agreed? So to get from that to P(AB|H)=P(A|H)P(B|H), you just need to prove that in this physical scenario, P(B|AH)=P(B|H), agreed? If you agree, then just let H represent an exhaustive description of all the local variables (hidden and others) at every point in spacetime which lies in the past light cone of the region where measurement B occurred. If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission, as I argued in post #41.
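Written out, the argument is just two lines:

P(AB|H) = P(A|H)P(B|AH) [chain rule -- a pure identity of probability theory]
P(B|AH) = P(B|H) [local realism, with H = everything in B's past light cone and A spacelike-separated from B]

and substituting the second into the first gives P(AB|H) = P(A|H)P(B|H).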
 
  • #63
(continuing an unfinished reply to billschnieder's post #47)
billschnieder said:
P(AB|H) = P(A|H)*P(B|H)
Clearly means that conditioned on H, there is no correlation between A and B. It is therefore impossible for H to cause any correlations whatsoever with this equation. Now can you explain how it is possible for an experimenter to collect data consistent with this equation, without knowing the exact nature of H?
I suppose it depends on what you mean by "cause" the correlations, but it is completely consistent with this equation that P(AB) could be different than P(A)*P(B). And "collect data consistent with this equation" is ambiguous since the experimenter can't actually know H--again, the only experimental data is about A and B; H represents some set of objective physical facts that must have definite truth-values in a universe with locally realist laws, but there is no claim that we can actually determine the specific values encompassed by H in practice. That's where the idea of an "omniscient being" comes in. Do you disagree that in a local realist universe, we can make coherent statements about what would have to be true of H if H represents something like "the complete set of fundamental physical variables associated with each point in spacetime that lies within the past light cone of some measurement-event B", even if we don't actually know what the values of all those variables are?
billschnieder said:
It is only possible for A and B to be marginally correlated while at the same time uncorrelated conditioned on H, if H is NOT the cause of the correlation.
"Cause of" needs some kind of precise definition, it's not a statistical term. But intuitively this claim seems pretty silly. For example, being a smoker increases your risk of dying of lung cancer, and also increases your risk of having yellow teeth, and most people would say that smoking has a causal influence on both. Meanwhile, even if there is a marginal correlation between yellow teeth and lung cancer (people who have yellow teeth are more likely to get lung cancer and vice versa), most people would probably bet that this was a case where "correlation is not causation"--yellow teeth don't have a direct causal influence on lung cancer or vice versa. Suppose we find there is a marginal correlation between yellow teeth and lung cancer, but also that P(lung cancer|smoker & yellow teeth) is not any higher than P(lung cancer|smoker) (this is a little over simplistic since heavy smokers are more likely to get both lung cancer and yellow teeth than light smokers, but imagine we are dealing with a society where all smokers smoke exactly the same amount per day). Would you say this proves that smoking cannot have been the cause of the correlation between lung cancer and yellow teeth?

In any case, Bell's theorem doesn't require any discussion of "causality" beyond the basic notion that in a relativistic local realist theory, there should be no FTL information transmission, i.e. information about events in one region A cannot give you any further information about events in a region B at spacelike separation from A, beyond what you already could have determined about events in B by looking at information about events in B's past light cone.
billschnieder said:
Are you sure you understand it? Can you explain how the hidden variables H are supposed to be responsible for the correlation between A and B, and yet conditioned on H there is no correlation between A and B? I do not see that anything you have written so far in this thread or the other one answers this question.
Since I don't really understand what your objection to this is in the first place, I can only "explain" by pointing to various examples where this is true. The smoking/yellow teeth/lung cancer one above is a simple intuitive example, but I've also given you numerical examples which you just ignored. For example, in the scratch lotto card example from post #18, we saw the marginal correlation that whenever Alice chose a given box (say, box 2 on her card) to scratch, if Bob also chose the same box (box 2 on his card), they always found the same fruit; but this correlation could be explained by the fact that the source always sent them a pair of cards that had the same combination of "hidden fruits" under each of the three boxes on each card. And then later in that post I also gave an example where two flashlights had hidden internal mechanisms that determined the probabilities they would turn on, with one sent to Alice and one to Bob; if you don't know which hidden mechanism is in each flashlight, there is a marginal correlation between the events of each one turning on (I explicitly calculated P(A|B) and showed it was different from P(A)), but if you do have the information H about which hidden mechanism was in each one's flashlight before they tried to turn them on, then conditioned on H there is no correlation between A and B (and I explicitly calculated P(A|BH) and showed it was identical to P(A|H)).
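Here is a stripped-down numerical version of the flashlight situation (with made-up mechanism probabilities, not the exact numbers from post #18), showing the marginal correlation appear and then vanish once we condition on H:

from fractions import Fraction

# The source installs the same hidden mechanism in both flashlights; H records
# which one. Each mechanism fixes the chance its flashlight turns on, and
# given H the two flashlights behave independently.
p_on = {'h1': Fraction(9, 10), 'h2': Fraction(1, 10)}
p_H  = {'h1': Fraction(1, 2),  'h2': Fraction(1, 2)}

P_AB = sum(p_H[h] * p_on[h] * p_on[h] for h in p_H)  # P(A,B): both turn on
P_A  = sum(p_H[h] * p_on[h] for h in p_H)            # P(A): Alice's turns on
P_B  = P_A                                           # symmetric setup

print(P_AB / P_B, P_A)   # P(A|B) = 41/50 vs P(A) = 1/2: marginally correlated
for h in p_H:            # but P(A|BH) = P(AB|H)/P(B|H) equals P(A|H):
    print(h, (p_on[h] * p_on[h]) / p_on[h], p_on[h])

The shared hidden mechanism is the entire source of the marginal correlation, and yet conditioned on H no correlation remains--which is exactly the combination you say you find puzzling.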
billschnieder said:
In case you are not sure about the terminology, in probability theory, P(AB) is the joint marginal probability of A and B which is the probability of A and B regardless of whether anything else is true or not. P(AB|H) is the joint conditional probability of A and B conditioned on H, which is the probability of A and B given that H is true. There is no such thing as the absolute probability.
Fair enough. But I think you pretty clearly understood from context what I meant by "absolute probability".
billschnieder said:
I agree, there are cases in which a correlation may exist between A and B marginally, but will not exist when conditioned on another variable, like in some of the examples you have given.
You are saying there may be cases where A and B are marginally correlated, but not correlated when conditioned on H? And yet you also just got through saying that you can't understand how H can be responsible for the marginal correlation between A and B, and yet they are not correlated when conditioned on H? The only real difference I see between the two is the phrase "responsible for", which isn't any sort of statistical terminology as far as I know. What do you mean by it? Are you talking about some intuitive notion of causality, or of one fact being "the explanation for" another? As I said before, following Bell's theorem does not require introducing such vague notions; it's just about analyzing whether one fact can provide information about the probability of some event beyond what you already knew from other facts.

Still it would help me understand you better if you would explain what "responsible for" means in the context of specific examples like the ones I provided. If Alice and Bob in the lotto card example always find the same fruit on trials where they choose to scratch the same box (a perfect marginal correlation), but I happen to know that on every single trial the source sent them both cards with an identical set of "hidden fruits" under the three boxes, can I say that this fact about the hidden fruits is "responsible for" the marginal correlation they observed?
billschnieder said:
The question is very clear. Let me put it to you in point form, and you can give specific answers saying which points you disagree with.

1) You say in the specific example treated by Bell, P(B|AH) = P(B|H). It is not me saying it. Do you disagree?
I agree.
billschnieder said:
2) The above statement (1) implies that in the specific example treated by Bell, where the symbols A, B and H have identical meaning, P(B|AH) and P(B|H) are mathematical identities. Do you disagree?
Your use of "mathematical identity" is confusing here--if some statement about probabilities can't be proven purely from the axioms of probability theory, but depends upon the specific physical definitions of the variables, I would say that it's not a mathematical identity, by definition. For example P(T)=1-P(H) is not a mathematical identity, but it's a valid equation if T and H represent heads or tails for a fair coin. Do you define "mathematical identity" differently?

Perhaps you are assuming that any relevant physical facts are added as additional axioms to the basic axioms of probability theory so that from then on we are doing a purely mathematical proof with this extended axiomatic system. Still, your notion of "mathematical identities" is ambiguous. In a formal proof we have a series of lines containing theorems, each of which is derived from some combination of axioms and previously-proved theorems using rules of inference, until we get to the final line with the theorem that we wanted to prove. If I prove theorem #12 from a combination of theorem #3 and theorem #5 using some rule of inference, would you say theorem 12 is a "mathematical identity" with 3 and 5? If so, when you say:
billschnieder said:
4) The above statement (3) implies that if using P(A|H)*P(B|H) results in one set of inequalities, the mathematically identical statement P(A|H)*P(B|AH) should result in the same set of inequalities where the symbols A, B and H have identical meaning. Do you disagree?
If yes to the above, I do disagree. For example, take a look at the various simple logic proofs given in http://marauder.millersville.edu/~bikenaga/mathproof/rules-of-inference/rules-of-inference.pdf . An example from pp. 6-7:

Axioms:
i. P AND Q
ii. P -> ~(Q AND R)
iii. S -> R

Prove: ~S

Proof:

1. P AND Q (axiom i)
2. P (Decomposing a conjunction--1)
3. Q (Decomposing a conjunction--1)
4. P -> ~(Q AND R) (axiom ii)
5. ~(Q AND R) (modus ponens--2,4)
6. ~Q OR ~R (DeMorgan--5)
7. ~R (disjunctive syllogism--3,6)
8. S -> R (axiom iii)
9. ~S (modus tollens--7,8)

Would you say statement 5 above is "mathematically identical" to statements 2 and 4? Even if you are using a definition of "mathematically identical" where that is true, why should it imply that you can reach the final conclusion 9 from statements 2 and 4 without going through the intermediate step of 5? Step 5 may be an essential step in reaching the conclusion from the axioms; saying that all the statements are "mathematically identical" to previous ones and that therefore any given intermediate step should be unnecessary is just playing word games--that's not how mathematical proofs work.
 
  • #64
JesseM said:
...Still it would help me understand you better if you would explain what "responsible for" means in the context of specific examples like the ones I provided. If Alice and Bob in the lotto card example always find the same fruit on trials where they choose to scratch the same box (a perfect marginal correlation), but I happen to know that on every single trial the source sent them both cards with an identical set of "hidden fruits" under the three boxes, can I say that this fact about the hidden fruits is "responsible for" the marginal correlation they observed?

... saying that all the statements are "mathematically identical" to previous ones and that therefore any given intermediate step should be unnecessary is just playing word games,...

You are my hero, I can't believe you have stayed in this long. :smile:

P.S. I am kicking back and relaxing while you are doing all the heavy lifting.
 
  • #65
DrChinese said:
That would be an attempt to restore local realism, which just won't be possible. Recall that you can entangle particles at other levels as well, such as momentum/position or energy/time. Although it shows up as one thing for spin, you cannot explain it in the manner you mention.

Even for spin, if you look at it long enough, you realize that there is no solution to the mathematical problems. Bell's Inequality is violated because there is no local realistic solution possible.

Ahh, yes, thank you; when I wrote that, my intended thought was not to restore realism. I appreciate you mentioning it; it helps me word my thoughts better.

As I see it, there is no currently known explanation for why this 'action at a distance' occurs, right? And I can be wayyyyy off, I'm sure; I'm just a layman approaching this. The original thought I didn't write was that photons are from/in another dimension.
 
  • #66
madhatter106 said:
Ahh, yes, thank you; when I wrote that, my intended thought was not to restore realism. I appreciate you mentioning it; it helps me word my thoughts better.

As I see it, there is no currently known explanation for why this 'action at a distance' occurs, right? And I can be wayyyyy off, I'm sure; I'm just a layman approaching this. The original thought I didn't write was that photons are from/in another dimension.

Hey, maybe they are in another dimension. Who knows? What's a dimension here or there among friends?

There is no known mechanism for entanglement, just a formalism. So the formalism is the explanation at this point.
 
  • #67
JesseM:
Since brevity is a virtue, I will not attempt to respond to every line of your responses, which is very tempting as there is almost always something to challenge in each. Here is a crystallization of my response to everything you have posted so far.

1) The principle of common cause used by Bell as P(AB|C) = P(A|C)P(B|C) is not universally valid even if C represents complete information about all possible causes in the past light cones of A and B. This is because if A and B are marginally correlated but uncorrelated conditioned on C, that means C screens off the correlation between A and B, and in some cases it is not possible to define C such that it screens off the correlation between A and B.

Stanford Encyclopedia of Philosophy said:
Under Conclusions:
If there are fundamental (equal time) laws of physics that rule out certain areas in state-space, which thus imply that there are (equal time) correlations among certain quantities, this is no violation of initial microscopic chaos. But the three common cause principles that we discussed will fail for such correlations. Similarly, quantum mechanics implies that for certain quantum states there will be correlations between the results of measurements that can have no common cause which screens all these correlations off. But this does not violate initial microscopic chaos. Initial microscopic chaos is a principle that tells one how to distribute probabilities over quantum states in certain circumstances; it does not tell one what the probabilities of values of observables given certain quantum states should be. And if they violate common cause principles, so be it. There is no fundamental law of nature that is, or implies, a common cause principle. The extent of the truth of common cause principles is approximate and derivative, not fundamental.
Therefore, Bell's choice of the PCC as a definition for the hidden variable theories by which to supplement QM is not appropriate.

2) Not all correlations necessarily have a common cause and suggesting that they must is not appropriate.

3) Either God is calculating on both sides of the equation or he is not. You cannot have God on one side and the experimenters on the other. Therefore, if God is the one calculating the inequality, you cannot expect a human experimenter, who knows nothing about H, to collect data consistent with the inequality.

Using P(AB|H) = P(A|H)P(B|H) to derive an inequality means that the context of the inequalities is one in which there is no longer any correlation between A and B, since it has been screened off by H. Therefore, for data to be comparable to the inequalities, it must be screened off with H. Note that P(AB) = P(A|H)P(B|H) is not a valid equation. You cannot collect data without screening off (i.e. P(AB)) and use it to compare with inequalities derived from screened-off probabilities P(AB|H).
 
  • #68
billschnieder said:
JesseM:
Since brevity is a virtue, I will not attempt to respond to every line of your responses, which is very tempting as there is almost always something to challenge in each.
In a scientific/mathematical discussion, precision is more of a virtue than brevity. In fact one of the common problems in discussions with cranks who are on a crusade to debunk some mainstream scientific theory is that they typically throw out short and rather broad (and vague) arguments which may sound plausible on the surface, but which require a lot of detailed explanation to show what is wrong with them. This problem is discussed here, for example:
Come to think of it, there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages based on the same mathematical idea: It’s really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way—skim online debates between biologists and earnest ID afficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. (If it seems too obvious, perhaps paradoxically, we’ll tend to assume everyone on the other side thought of it themselves and had some good reason to reject it.) The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”
billschnieder said:
The principle of common cause used by Bell as P(AB|C) = P(A|C)P(B|C) is not universally valid even if C represents complete information about all possible causes in the past light cones of A and B.
Not in general, no. But in a universe with local realist laws, it is universally valid.
billschnieder said:
This is because if A and B are marginally correlated but uncorrelated conditioned on C, that means C screens off the correlation between A and B, and in some cases it is not possible to define C such that it screens off the correlation between A and B.
It is always possible to define such a C in a relativistic universe with local realist laws, if A and B happen in spacelike-separated regions: if C represents the complete information about all local physical variables in the past light cones of the regions where A and B occurred (or in spacelike slices of these past light cones taken at some time after the last moment the two past light cones intersected, as I suggested in post 61/62 on the other thread and as illustrated in fig. 4 of this paper on Bell's reasoning), then it is guaranteed that C will screen off correlations between A and B. Nothing in the Stanford article contradicts this, so if you disagree with it, please explain why in your own words (preferably addressing my arguments in post #41 about why to suggest otherwise would imply FTL information transmission, like telling me whether you think the example where the results of a race on Alpha Centauri were correlated with a buzzer going off on Earth is compatible with local realism and relativity). If you think the Stanford Encyclopedia article does contradict it, can you tell me specifically which part and why? In your quote from the Stanford Encyclopedia saying why common cause principles can fail, the first part was about "molecular chaos" and assumptions about the exact microscopic state of macroscopic systems:
This explains why the three principles we have discussed sometimes fail. For the demand of initial microscopic chaos is a demand that microscopic conditions are uniformly distributed (in canonical coordinates) in the areas of state-space that are compatible with the fundamental laws of physics. If there are fundamental (equal time) laws of physics that rule out certain areas in state-space, which thus imply that there are (equal time) correlations among certain quantities, this is no violation of initial microscopic chaos. But the three common cause principles that we discussed will fail for such correlations.
Note that of the three common cause principles discussed, none (including the one by Penrose and Percival) actually allowed C to involve the full details about every microscopic physical fact at some time in the past light cone of A or B. This is why assumptions like "microscopic chaos" are necessary--because you don't know the full microscopic conditions, you have to make assumptions like the one discussed in section 3.3:
Nonetheless such arguments are pretty close to being correct: microscopic chaos does imply that a very large and useful class of microscopic conditions are independently distributed. For instance, assuming a uniform distribution of microscopic states in macroscopic cells, it follows that the microscopic states of two spatially separated regions will be independently distributed, given any macroscopic states in the two regions. Thus microscopic chaos and spatial separation is sufficient to provide independence of microscopic factors.
Also note earlier in the same section where they write:
there will be no screener off of the correlations between D and E other than some incredibly complex and inaccessible microscopic determinant. Thus common cause principles fail if one uses quantities D and E rather than quantities A and B to characterize the later state of the system.
So here common cause principles only fail if you aren't allowed to use the full set of microscopic conditions which might contribute to the likelihood of different observable outcomes, they acknowledge that if you did have such information it could screen off correlations in these outcomes.

The next part of the Stanford article that you quoted dealt with QM:
Similarly, quantum mechanics implies that for certain quantum states there will be correlations between the results of measurements that can have no common cause which screens all these correlations off. But this does not violate initial microscopic chaos. Initial microscopic chaos is a principle that tells one how to distribute probabilities over quantum states in certain circumstances; it does not tell one what the probabilities of values of observables given certain quantum states should be. And if they violate common cause principles, so be it. There is no fundamental law of nature that is, or implies, a common cause principle. The extent of the truth of common cause principles is approximate and derivative, not fundamental.
What you seem to miss here is that the idea that quantum mechanics violates common cause principles is explicitly based on assuming that Bell is correct and that the observed statistics in QM are incompatible with local realism. From section 2.1:
One might think that this violation of common cause principles is a reason to believe that there must then be more to the prior state of the particles than the quantum state; there must be ‘hidden variables’ that screen off such correlations. (And we have seen above that such hidden variables must determine the results of the measurements if they are to screen of the correlations.) However, one can show, given some extremely plausible assumptions, that there can not be any such hidden variables. There do exist hidden variable theories which account for such correlations in terms of instantaneous non-local dependencies. Since such dependencies are instantaneous (in some frame of reference) they violate Reichenbach's common cause principle, which demands a prior common cause which screens off the correlations. (For more detail, see, for instance, van Fraassen 1982, Elby 1992, Grasshoff, Portmann & Wuethrich (2003) [in the Other Internet Resources section], and the entries on Bell's theorem and on Bohmian mechanics in this encyclopedia.)
So, in no way does this suggest they'd dispute that in a universe that did obey local realist laws, it would be possible to find a type of "common cause" involving detailed specification of every microscopic variable in the past light cones of A and B which would screen off correlations between A and B. What they're saying is that the actual statistics seen in QM rule out the possibility that our universe actually obeys such local realist laws.
billschnieder said:
2) Not all correlations necessarily have a common cause and suggesting that they must is not appropriate.
I never suggested that all correlations have a common cause, unless "common cause" is defined so broadly as to include the complete set of microscopic conditions in two non-overlapping light cones (two totally disjoint sets of events, in other words). For example, if aliens outside our cosmological horizon (so that their past light cone never overlaps with our past light cone at any moment since the Big Bang) were measuring some fundamental physical constant (say, the fine structure constant) which we were also measuring, the results of our experiments would be correlated due to the same laws of physics governing our experiments, not due to any events in our past which could be described as a "common cause". But it's you who's bringing up the language of "causes", not me; I'm just talking about information which causes you to alter probability estimates, and that's all that's necessary for Bell's proof. In this example, if our universe obeyed local realist laws, and an omniscient being gave us a complete specification of all local physical variables in the past light cone of the alien's measurement (or in some complete spacelike slice of that past light cone) along with a complete specification of the laws of physics and an ultra-powerful computer that we could use to evolve these past conditions forward to make predictions about what will happen in the region of spacetime where the aliens make the measurement, then our estimate of the probabilities of different outcomes in that region would not be altered in the slightest if we learned additional information about events in our own region of spacetime, including the results of an experiment similar to the alien's.
billschnieder said:
3) Either God is calculating on both sides of the equation or he is not. You cannot have God on one side and the experimenters on the other.
It is theoretical physicists calculating the equations based on imagining what would have to be true if they had access to certain information H which is impossible to find in practice, given certain assumptions about the way the fundamental laws of physics work. But since they don't actually know the value of H, they may have to sum over all possible values of H that would be compatible with these assumptions about fundamental laws. For example, do you deny that under the assumption of local realism, where H is taken to represent full information about all local physical variables in the past light cones of A and B, the following equation should hold?

P(AB) = sum over all possible values of H: P(AB|H)*P(H)

Note that this is the type of equation that allowed me to reach the final conclusion in the scratch lotto card example from post #18; I assumed that the perfect correlation when Alice and Bob scratched the same box was explained by the fact that they always received a pair of cards with an identical set of "hidden fruits" behind each box, and then I showed that P(Alice and Bob find the same fruit when they scratch different boxes|H) was always greater than or equal to 1/3 (assuming they choose which box to scratch randomly with a 1/3 probability of each on a given trial, and we're just looking at the subset of trials where they happened to choose different boxes) for all possible values of H:

H1: box1: cherry, box2: cherry, box3: cherry
H2: box1: cherry, box2: cherry, box3: lemon
H3: box1: cherry, box2: lemon, box3: cherry
H4: box1: cherry, box2: lemon, box3: lemon
H5: box1: lemon, box2: cherry, box3: cherry
H6: box1: lemon, box2: cherry, box3: lemon
H7: box1: lemon, box2: lemon, box3: cherry
H8: box1: lemon, box2: lemon, box3: lemon

If the probability is greater than or equal to 1/3 for each possible value of H, then obviously regardless of the specific values of P(H1) and P(H2) and so forth, the probability on the left of this equation:

p(Alice and Bob find the same fruit when they scratch different boxes) = sum over all possible values of H: P(Alice and Bob find the same fruit when they scratch different boxes|H)*P(H)

...must end up being greater than or equal to 1/3 as well. Therefore if we find the actual frequency of finding the same fruit when they choose different boxes is 1/4, we have falsified the original theory that they are receiving cards with an identical set of predetermined "hidden fruit" behind each box.
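If it helps, that enumeration can be checked mechanically with a few lines of code (my own sketch, running over the same eight hidden-fruit states H1-H8 listed above):

from itertools import product

# For each hidden-fruit assignment H1..H8, compute the chance Alice and Bob
# see the same fruit given that they scratched different boxes (all six
# ordered different-box pairs being equally likely).
diff_pairs = [(a, b) for a in range(3) for b in range(3) if a != b]

for n, hidden in enumerate(product(['cherry', 'lemon'], repeat=3), start=1):
    matches = sum(hidden[a] == hidden[b] for a, b in diff_pairs)
    print(f"H{n} {hidden}: {matches}/{len(diff_pairs)}")

Every line comes out as either 6/6 or 2/6, i.e. 1 or 1/3, so the sum weighted by any distribution P(H) cannot drop below 1/3.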

In this example, even if the theory about hidden fruit had been correct, I don't actually know the full set of hidden fruit on each trial (say the cards self-destruct as soon as one box is scratched). So, any part of the equation involving H is imagining what would have to be true from the perspective of a "God" who did have knowledge of all the hidden fruit. And yet you see the final conclusion is about the actual probabilities Alice and Bob observe on trials where they choose different boxes to scratch. Please tell me whether your general arguments about it being illegitimate to have a human perspective on one side of an equation and "God's" perspective on another would apply to the above as well (i.e. whether you disagree with the claim that the premise that each card has an identical set of hidden fruits should imply a probability of 1/3 or more that they'll find the same fruit on trials where they randomly select different boxes).
billschnieder said:
Therefore, if God is the one calculating the inequality, you cannot expect a human experimenter, who knows nothing about H, to collect data consistent with the inequality.

Using P(AB|H) = P(A|H)P(B|H) to derive an inequality means that the context of the inequalities is one in which there is no longer any correlation between A and B, since it has been screened off by H. Therefore, for data to be comparable to the inequalities, it must be screened off with H. Note that P(AB) = P(A|H)P(B|H) is not a valid equation.
It's true that this is not a valid equation, but if P(AB|H)=P(A|H)P(B|H) applies to the situation we are considering, then P(AB) = sum over all possible values of H: P(A|H)P(B|H)P(H) is a valid equation, and it's directly analogous to equation (14) in http://hexagon.physics.wisc.edu/teaching/2010s%20ph531%20quantum%20mechanics/interesting%20papers/bell%20on%20epr%20paradox%20physics%201%201964.pdf .
 
  • #69
JesseM said:
Please tell me whether your general arguments about it being illegitimate to have a human perspective on one side of an equation and "God's" perspective on another would apply to the above as well
The point is that certain assumptions are made about the data when deriving the inequalities, and those assumptions must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that by combining any possible diagnosis results, the Leggett-Garg inequality will not be violated, as the result of the above expression will always be >= -1 so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality, and we can therefore drop the indices and write the inequality only based on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b), and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as often. The doctors, not knowing of, or having any reason to suspect, any influence of the date or location of the examinations, decide to designate their patients only based on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed over +1/-1, and they are completely baffled. How can single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are other, more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will produce what they observed:

- on even days Aa = +1 and Ac = -1 in both cities, while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly in order to provide a data structure that is consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to know how to index the data to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.
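For concreteness, here is a short brute-force sketch in Python (my own, following the scenario above) that checks both claims: the cyclic three-city combination never drops below -1, while the two-city even/odd scenario averages to exactly -3.

```python
from itertools import product

# Three-city case: for any diagnoses x, y, z in {-1, +1}, the cyclic
# combination xy + xz + yz never drops below -1, so neither does its average.
assert all(x*y + x*z + y*z >= -1 for x, y, z in product((-1, 1), repeat=3))

# Two-city variation: Lille (1) sees patient types a and b, Lyon (2) sees
# b and c; signs flip between even and odd days as described above.
def outcomes(day):
    s = 1 if day % 2 == 0 else -1
    return {'1a': s, '1b': s, '2b': -s, '2c': -s}

N = 1000
total = sum(A['1a']*A['2b'] + A['1a']*A['2c'] + A['1b']*A['2c']
            for A in (outcomes(n) for n in range(N)))
print(total / N)   # -3.0: with the cyclicity broken, the bound fails
```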

For a fuller treatment of this example, see Hess et al., Possible experience: From Boole to Bell, EPL 87, No. 6, 60007(1-6) (2009).

Note that in deriving Bell's inequalities, Bell used Aa(l), Ab(l), Ac(l), where the hidden variables (l) are the same for all three angles. For this to correspond to the Aspect-type experimental situation, the hidden variables must be exactly the same for all the angles, which is an unreasonable assumption because each particle could have its own hidden variables, the measurement equipment could each have their own hidden variables, and the time of measurement after emission is itself a hidden variable. So it is more likely than not that the hidden variables will be different for each measurement. However, in actual experiments the photons are only measured in pairs (a,b), (a,c) and (b,c). The experimenters, not knowing the exact nature of the hidden variables, cannot possibly collect the data in a way that ensures the cyclicity is preserved. Therefore, it is not possible to perform an experiment that can be compared with Bell's inequalities.
 
  • #70
billschnieder said:
Consider a certain disease that strikes persons in different ways depending on circumstances.

...

The reason for the discrepancy is that the data is not indexed properly in order to provide a data structure that is consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to know how to index the data to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al., Possible experience: From Boole to Bell, EPL 87, No. 6, 60007(1-6) (2009).

Great example, pretty much demonstrates everything I have been saying all along:

a) There is a dataset which is realistic, i.e. you can create a dataset which presents data for properties not actually observed;

But...

b) The sample is NOT representative of the full universe, something which is not a problem with Bell since it assumes the full universe in its thinking; i.e. your example is irrelevant - if you want to attack the Fair Sampling Assumption then you should label your thread as such since one has nothing to do with the other;

c) By way of comparison to Bell, it does not agree with the predictions of QM; i.e. obviously QM does not say anything about doctors and patients in your example; however, you would need something to compare it to, and you don't really do that; your example is just playing around with logic.

It would be nice if instead of attempting to attack Bell, you would work on first understanding Bell. Then once you understand it, look for holes.
 
  • #71
DrChinese said:
Great example, pretty much demonstrates everything I have been saying all along:

a) There is a dataset which is realistic, i.e. you can create a dataset which presents data for properties not actually observed;
Huh?

b) The sample is NOT representative of the full universe, something which is not a problem with Bell since it assumes the full universe in its thinking; i.e. your example is irrelevant
Look at Bell's equation (15); he writes

1 + P(b,c) >= | P(a,b) - P(a,c)|

Do you see the cyclicity I mentioned in my previous post? Bell assumes that the b in P(b,c) is the same b in P(a,b), that the a in P(a,b) is the same a in P(a,c), and that the c in P(a,c) is the same c in P(b,c). The inequalities will fail if these assumptions do not hold. One way to avoid this would have been to start the equations using P(AB|H) = P(A|H)P(B|AH); the term P(B|AH) reminds us not to confuse P(B|AH) and P(B|CH).
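To see what the shared-b assumption buys, here is a quick brute-force sketch (mine, not from the thread), writing u, v, w for the +/-1 outcomes A(a,l), A(b,l), A(c,l) produced by one and the same hidden value l, and using the singlet convention P(x,y) = -<A(x,l)A(y,l)> from Bell's paper:

```python
from itertools import product

# Pointwise step behind equation (15): for every shared assignment of
# u, v, w in {-1, +1},  1 - v*w >= |u*v - u*w|,  which averages (with the
# singlet sign convention) to  1 + P(b,c) >= |P(a,b) - P(a,c)|.
assert all(1 - v*w >= abs(u*v - u*w)
           for u, v, w in product((-1, 1), repeat=3))
```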

Now fast forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, therefore for each iteration you need to index the data according to at least such factors as the time of measurement, the local hidden variables of the measuring instruments, and the local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it should be clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Granted, this shows that the fair sampling assumption is not valid unless steps have been taken to ensure a fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.

c) By way of comparison to Bell, it does not agree with the predictions of QM; i.e. obviously QM does not say anything about doctors and patients in your example.
I assume you know about the Bell-type Leggett-Garg inequality (LGI). The doctors-and-patients example violates the LGI, and so does QM. Remember that violation of the LGI is supposed to prove that realism is false even in the macro realm. Using the LGI, which is a Bell-type inequality, in this example is proper and relevant to illustrate the problem with Bell's inequalities.
 
  • #72
billschnieder said:
1. Huh?

2. Now fast forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, therefore for each iteration you need to index the data according to at least such factors as the time of measurement, the local hidden variables of the measuring instruments, and the local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it should be clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Granted, this shows that the fair sampling assumption is not valid unless steps have been taken to ensure a fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.

1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold. Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that these results would be repeated on any given day? In your example, by contrast, the results repeat only occasionally: if you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.
 
  • #73
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random, that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?

Below are my thoughts on the experiment.

In a completely random experiment (no correlations) there are 36 possible outcomes. To summarize:

1) Six are same color-same switch
2) Six are different color-same switch
3) Twelve are same color-different switch
4) Twelve are different color-different switch

From above (2), six are different colors when the switches are the same (theta=0). They are: 11RG, 11GR, 22RG, 22GR, 33RG, and 33GR. In order to match Bell’s gedanken experiment these must be converted by the correlation process to the same color.

(6)(cos 0°) = (6)(1) = 6, a 100% conversion

When added to the runs (1) that are same color-same switch, this gives twelve in total, i.e. 12/36 = 1/3 of the runs will have the same switch setting and the same color.

To preserve the random behavior of the gedanken experiment, another, opposite correlation must occur, in which exactly six runs of same color but different switch settings are converted to a different color. There are twelve of these runs: 12RR, 12GG, 21RR, 21GG, 13RR, 13GG, 31RR, 31GG, 23RR, 23GG, 32RR, and 32GG. Therefore, on average six of these must be converted to a different color. This now leaves 6/24 runs that have the same color but different switches.

(12)|cos 120°| = (12)(0.5) = 6, a 50% conversion

This produces the randomness of the experiment, where 50% of the time the lights flash the same color, yet when the switches have the same setting the lights flash the same color 100% of the time. That is:

(12/36)(12/12) + (24/36)(6/24) = 1/3 + 1/6 = 1/2
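As a check on this counting (a sketch of my own, not from the thread), a few lines of Python reproduce both the 6/6/12/12 breakdown and the 1/2 figure:

```python
from itertools import product

# Enumerate the 36 equally likely raw outcomes: a switch setting (1-3) and a
# color (R or G) on each side.
runs = list(product((1, 2, 3), (1, 2, 3), 'RG', 'RG'))

counts = {}
for s1, s2, c1, c2 in runs:
    key = ('same switch' if s1 == s2 else 'diff switch',
           'same color' if c1 == c2 else 'diff color')
    counts[key] = counts.get(key, 0) + 1
print(counts)   # 6, 6, 12, 12: matching categories (1)-(4) above

# Conversions described above: all 6 same-switch/diff-color runs become same
# color; half of the 12 diff-switch/same-color runs flip to different color.
p_same_color = (6 + 6) / 36 + (12 - 6) / 36
print(p_same_color)   # 0.5, i.e. (12/36)(12/12) + (24/36)(6/24) = 1/2
```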

Do the opposite correlations described above cancel the dependence on H and explain the choice of the equation

P(AB|H) = P(A|H)P(B|H)?
 
  • #74
DrChinese said:
1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold. Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that these results would be repeated on any given day? In your example, by contrast, the results repeat only occasionally: if you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.

I don't think you have understood my critique. It's hard to figure out what you are talking about, because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequality. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe, because Rowe cannot do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume that there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a,b,c equivalent to the angles, 1,2,3 equivalent to the stations, and n equivalent to the iteration of the experiment. Since in Aspect-type experiments only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.
 
  • #75
rlduncan said:
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random, that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?
You bring up an interesting point. IFF the hidden variables are completely randomly distributed in space-time, it may be possible to rescue Bell's formulation. But do you think that is a reasonable assumption for quantum particles? Assume for a moment that there is a space-time harmonic hidden variable for photons, with an unknown phase and frequency. Can you devise an algorithm to enable you to sample the hidden variable such that the resulting data is random, without any knowledge that the signal is harmonic, or of its phase or frequency?
 
  • #76
It would be easy to simulate Bell's experiment by rolling dice whose faces alternate green and red, with specific instructions given when observing the uppermost face. These instructions can be realized from my first post.
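For what it's worth, here is a minimal sketch of one such dice-style simulation (my own construction; the instructions from the earlier post are not spelled out here, so this assumes the simplest "instruction set" reading, in which each pair carries a predetermined color for each of the three switch positions):

```python
import random

# One "instruction set" per pair: a predetermined color for each of the three
# switch positions, shared by both sides (the local hidden plan).
def run(trials=100_000):
    same_sw = [0, 0]   # [same-color count, total] when switches match
    diff_sw = [0, 0]   # likewise when switches differ
    for _ in range(trials):
        inst = [random.choice('RG') for _ in range(3)]
        s1, s2 = random.randrange(3), random.randrange(3)
        bucket = same_sw if s1 == s2 else diff_sw
        bucket[0] += inst[s1] == inst[s2]
        bucket[1] += 1
    print(same_sw[0] / same_sw[1])   # 1.0: same switch, same color, always
    print(diff_sw[0] / diff_sw[1])   # ~0.5 for uniform instruction sets; no
                                     # instruction-set model can go below 1/3

run()
```

Such a model reproduces the perfect correlation at equal settings, but not the 1/4 same-color rate at different settings discussed above; that gap is exactly what the inequalities are about.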
 
  • #77
billschnieder said:
I don't think you have understood my critique. It's hard to figure out what you are talking about because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequality. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe because, Rowe can not do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume that there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a,b,c equivalent to the angles, 1,2,3 equivalent to the stations and n equivalent to the iteration of the experiment. Since in Aspect type experiments, only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.

Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important. Still. Yes, I agree that every experiment ever performed by any experimenter anywhere may fall victim to the idea of the "periodic cycle" subset problem. This of course has absolutely nothing to do with Bell. Perhaps the speed of light is actually 2c after all! Wow, you have your finger on something. So I say you are wasting everyone's time if your assertion is that tests like Rowe are invalid because they too are subsets and fall victim to a hidden bias. That would not be fair science. You are basically saying: evidence is NOT evidence.

The problem with this logic is that it STILL means that QM is incompatible with local realism. As has been pointed out by scientists everywhere, maybe QM is wrong (however unlikely). That STILL does not change Bell. Do you follow any of this? Because you seem like a bright, fairly well read person.

If you want to debate Bell tests, which would be fine, you need to start by acknowledging what Bell itself says. Then work from there. Clearly, no local realistic theory will be compatible with QM, and QM is well supported. There are many, such as Hess (I believe you referenced him earlier), who attack Bell tests. They occasionally attack Bell too. But the only place they have ever gained any real traction is by attacking the Fair Sampling Assumption. However, this assumption acknowledges the validity of Bell. This argument is completely different from the one you assert. Specifically, if the Fair Sampling Assumption is invalid, then QM is in fact WRONG. Bell, however, is still RIGHT.

ON THE OTHER HAND: if you want to debate whether the Fair Sampling Assumption can be modeled into a Bell test, I would happily debate that point. As it happens, I have spent a significant amount of time tearing into the De Raedt model (if you know that). After an extended analysis, I think I have learned the secret to disassembling anything you would care to throw at me. But a couple of points: I will discuss something along the lines of a photon test using PDC, but will not discuss doctors in Africa. Let's discuss the issues that make Bell relevant, and that is not hypothetical tests. There are real datasets to discuss. And there are a few more twists to model a local realistic theory these days - since we know from Bell that the predictions of QM will be incorrect.
 
  • #78
billschnieder said:
The point is that certain assumptions are made about the data when deriving the inequalities, and those assumptions must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances.

...
The reason for the discrepancy is that the data is not indexed properly in order to provide a data structure that is consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to know how to index the data to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al., Possible experience: From Boole to Bell, EPL 87, No. 6, 60007(1-6) (2009).
I'm not familiar with the Leggett-Garg inequality, and Wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial perhaps, and that a violation of these conditions is responsible for the violation of the inequality in your example above...is that incorrect?
billschnieder said:
Note that in deriving Bell's inequalities, Bell used Aa(l), Ab(l), Ac(l), where the hidden variables (l) are the same for all three angles.
If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about? The proof of Bell's theorem does usually include a "no-conspiracy" assumption where it's assumed that the probability the particles will have different possible predetermined spins on each detector angle is independent of the probability that the experimenter will choose different possible detector angles.
billschnieder said:
For this to correspond to the Aspect-type experimental situation, the hidden variables must be exactly the same for all the angles, which is an unreasonable assumption because each particle could have its own hidden variables, the measurement equipment could each have their own hidden variables, and the time of measurement after emission is itself a hidden variable.
In the case of certain inequalities like the one that says the probability of identical results when different angles are chosen must be greater than or equal to 1/3, it's assumed that there's a perfect correlation between the results whenever the experimenters choose the same angle; you can prove that the only way this is possible in a local realist universe is if the hidden variables already completely predetermine what results will be found for each detector setting, so if the hidden variables are restricted to the past light cones of the measurement regions then any additional hidden variables in the measurement regions can't affect the outcome. I discussed this in post 61/62 of the other thread, along with the "no conspiracy" assumption. Other inequalities like the CHSH inequality and the one you mentioned don't require an assumption of perfect correlations, in these cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome, but Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified. Are you ever going to address my arguments about past light cones in post #41 in your own words, rather than just trying to discount them with quotes from the Stanford Encyclopedia article, which turned out to be irrelevant?
 
  • #79
DrChinese said:
Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important.
On the contrary, I have. I do not see that you have responded to any of the issues I have raised:

1) I have demonstrated, I believe very convincingly that it is possible to violate Bell-type inequalities by simply collecting data in such a way that the cyclicity is not preserved.
2) I have shown this using an example which is macroscopic, so that there is no doubt in any reasonable person's mind that it is local and real. Yet by not knowing the exact nature of the hidden elements of reality in play, it is very easy to violate the Bell-type inequalities.
3) Furthermore, I have given specific reasons why the inequality was violated, by providing a locally causal explanation for how the hidden elements are generating the data. It is therefore very clear that the violation of the inequalities in the example I provided is NOT due to spooky action at a distance.
4) For this macroscopic example, I have used the Bell-type inequality normally used in macroscopic situations (the Leggett-Garg inequality), which is violated by QM and was supposed to prove that the time evolution of a system cannot be understood classically. My example, which is locally causal, real and classical, also violates the inequality; that should be a big hint -- QM and local reality agree with each other here. Remember that QM and Aspect-type experiments also agree with each other.
5) The evidence is very clear to me. On the one hand we have QM and experiments, which agree with each other. On the other hand we have Bell-type inequalities, violated by everything. There is only one odd man out in the mix, and it is neither QM nor the experiments. Evidence is evidence.
6) Seeing that Bell-type inequalities are the odd man out, my interest in this thread was to discuss (a) how Bell's ansatz represents the situation he is trying to model, and (b) how the inequalities derived from the ansatz are comparable to actual experiments performed. The argument mentioned in my first post can therefore be expanded as follows:

i) Bell's ansatz (equation 2 in his paper) correctly represent all local-causal theories
ii) Bell's ansatz necessarily leads to Bell's inequalities
iii) Aspect-type Experiments are comparable to Bell's inequalities
iv) Aspect-type Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of Aspect-type experiments is not locally causal.

In order for the conclusion to be valid, all the premises (i to iv) must be valid. Failure of any one is sufficient to kill the argument. We have discussed the validity of (i); JesseM says it is justified, I say it is not, for reasons I have explained, and we can agree to disagree there. However, even if JesseM is right, and I don't believe he is, (iii) is not justified, as I have shown. Therefore the conclusion is not valid.

I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything new that I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue with cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

You may disagree with everything I have said, but any reasonable person will agree that failure of any one of those premises (i to iv) invalidates the argument. It is therefore proper to discuss the validity of each one.

Throughout this thread, I and many others have used examples with cards, balls, fruits etc. to explain a point, because they are easier to visualize. The doctors-and-patients example is no different.
 
  • #80
billschnieder said:
I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything new that I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue with cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

And I have already said that all science falls to the same argument you present here. You may as well be claiming that General Relativity is wrong and Newtonian gravity is correct, and that there is a cyclic component that makes it "appear" as if GR is correct. Do you not see the absurdity?

When you discover the hidden cycle, you can collect the prizes due. Meanwhile, you may want to consider WHY PDC photon pairs with KNOWN polarization don't act as you predict they should. That should be a strong hint that you are factually wrong even in this absurd claim.
 
  • #81
billschnieder:

In other words: saying that you can find a "cyclic" solution to prove Bell wrong is easy. I am challenging you to come up with an ACTUAL candidate to back up your claim. Then you will see that Bell is no laughing matter. This is MY claim, that I can take down any example you provide. Remember, no doctors in Africa; we are talking about entangled (and perhaps unentangled) PDC photon pairs.

You must be able to provide a) perfect correlations (Bell mentions this). You must be able to provide b) detection rates that are rotationally invariant (to match the predictions of QM). The results must c) form a random pattern with p(H) = p(V) = 50%. And of course, you must be able to provide d) reasonable agreement with the cos^2(theta) rule for the subset, together with e) respect for Bell's Inequality in the full universe.

Simply show me the local realistic dataset/formulae.
======================================

I already did this exercise with a model that has had years of work put in it, the De Raedt model. It failed, but perhaps you will fare better. Good luck! But please note, unsubstantiated claims (especially going against established science) are not well received around here. You have placed one out here, and you should either retract it or defend it.
 
  • #82
JesseM said:
I'm not familiar with the Leggett-Garg inequality, and Wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial perhaps, and that a violation of these conditions is responsible for the violation of the inequality in your example above...is that incorrect?

What we have been discussing is the assumption that whatever is causing the correlations is randomly distributed in the data-taking process; in other words, that it has been screened off. The short of it is that your justification for using the PCC as a definition of local causality requires that it always be possible to screen off the correlation. But experimentally it is impossible to screen off a correlation if you have no clue what is causing it.

It doesn't look like we are going to agree on any of this. So we can agree to disagree.

If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about?
Defining l vaguely as all physical variables in the past light cone of the region of measurement fails to consider the fact that not all subsets of l may be at play in the case of A, or of B. Some subsets of l may actually be working against the result while others are working for it. Those supporting the result at A may be working against the result at B, and vice versa. That is why it is mentioned in the Stanford Encyclopedia that it is not always possible to define l such that it screens off the correlation. It is even harder if you have no idea what is actually happening.

Another way of looking at it is as follows. If your intention is to make l so broad as to represent all pre-existing facts in the local universe, then there is no reason to even include it, because the marginal probability says the same thing. What, in your opinion, would then be the effective difference between P(A) and P(A|l)? There would be none, and your equation returns to P(AB) = P(A)P(B), which contradicts the fact that there is a marginal correlation between A and B.

Remember that P(A) = P(A,H) + P(A,notH)

If H is defined so broadly that P(H) = 1, then P(notH) = 0, and P(A) = P(A,H).

As I already explained, in probability theory lack of a causal relationship is not sufficient justification to assume lack of a logical relationship. In the Bell situation we are not interested in just one angle but in a joint result between two angles. It is not possible to determine that a coincidence has happened without taking the other result into account, so including a P(B|AH) term is not because we think A is causing B but because we will be handling the data in a joint manner. I don't expect us to agree on this. So we can agree to disagree.

Other inequalities like the CHSH inequality and the one you mentioned don't require an assumption of perfect correlations, in these cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome, but Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.
The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, over which he has no control, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. The same argument as above applies (see the brute-force check after this list).

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

I have already explained this one.
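As with Bell's (15), a brute-force check (again my sketch, not from the thread) confirms the CHSH form quoted above, provided the same four outcomes enter all four terms:

```python
from itertools import product

# u = outcome at setting d1, up = at a1 (one side); v = at y2, vp = at b2
# (the other side). With the SAME four +/-1 outcomes in all four terms,
# |q(d1,y2) - q(a1,y2)| + |q(d1,b2) + q(a1,b2)| <= 2 holds in every case.
assert all(abs(u*v - up*v) + abs(u*vp + up*vp) <= 2
           for u, up, v, vp in product((-1, 1), repeat=4))
```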

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified.
Not at all; see my last post for an explanation of why. Your arguments boil down to the assertion that the PCC is universally valid for locally causal hidden variable theories. I disagree with that, and you disagree with me. I have presented my arguments and you have presented yours, so be it. We can agree to disagree about that; there is no need to keep going "no it doesn't ... yes it does ... no it doesn't ... etc."
 
  • #83
billschnieder said:
...The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same.

Oops... Are you sure about that?

If b is not the same b... How does it work out for the (b, b) case?

:eek:

You see, b and b can be tested at different times too! According to your model, you won't get Alice's b and Bob's b to be the same. Bell mentions this requirement. Better luck next time.
 
  • #84
JesseM said:
P(AB|H)=P(A|H)P(B|AH) is a general statistical identity that should hold regardless of the meanings of A, B, and H, agreed? So to get from that to P(AB|H)=P(A|H)P(B|H), you just need to prove that in this physical scenario, P(B|AH)=P(B|H), agreed? If you agree, then just let H represent an exhaustive description of all the local variables (hidden and others) at every point in spacetime which lies in the past light cone of the region where measurement B occurred. If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission, as I argued in post #41.

Based on H, which includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers a and b, respectively, then when, e.g., |a-b| = 0 and |La - Lb| = 0, P(B|AH) /= P(B|H).

In this case, we can, with certainty, say that if A = 1, then B = 1, and if A = 0, then B = 0. So, our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies ftl info transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. Assuming that |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the qm treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.
 
  • #85
ThomasT said:
Based on H, which includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers a and b, respectively, then when, e.g., |a-b| = 0 and |La - Lb| = 0, P(B|AH) /= P(B|H).

In this case, we can, with certainty, say that if A = 1, then B = 1, and if A = 0, then B = 0. So, our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies ftl info transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. Assuming that |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the qm treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.

ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group, I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B) ). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
 
  • #86
DrChinese said:
ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group, I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B) ). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.
 
  • #87
ThomasT said:
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.

And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
 
  • #88
DrChinese said:
And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
If they're not polarization entangled, then |La - Lb| > 0.
 
  • #89
ThomasT said:
If they're not polarization entangled, then |La - Lb| > 0.

Oops, that is not correct at all. For Type I PDC, they are |HH>. For Type II, they are |HV>.

That means, when you observe one to be H, you know with certainty what the polarization of the other is. Couldn't match your requirement any better. Spin conserved and known to have a definite value. This is simply a special case of your assumption. According to your hypothesis, these should produce Entangled State statistics. But they don't.

(Now of course, you deny that there is such a thing as Entanglement in the sense that it can be a state which survives the creation of the photon pair. But I don't.)
 
  • #90
Looking at the overall big picture here, and this may indeed be a stretch: with the possibility that future events affect past events, and the entanglement observed in photons, I wonder about the neurological association. The synaptic cleft is 20nm, right? So we're dealing with quantum effects in memory.

From the perspective of how we associate time internally, it's based on memory recall of the past; if your memory recall or recording ability is altered, your sense of time is too. Is it possible that the entanglement goes further than the experiment tests? So that if a future measurement changes the past, then it also changes your memory due to the overall entanglement? How would one even know it occurred?

Looking at the photon in each time frame, the energy should be the same or it violates the conservation of energy. Then it's everywhere in each time frame, and if it's everywhere then there is no such thing as discrete time for a photon. Am I twisting things up too much here?
 
  • #91
DrChinese said:
Oops, that is not correct at all. For Type I PDC, they are |HH>. For Type II, they are |HV>.

That means, when you observe one to be H, you know with certainty what the polarization of the other is.
Not exactly. Since we don't (can't) know, and therefore can't say anything about, what the values of La and Lb are, we can only say that, for |a-b| = 0 and 90 degrees, if the polarizer at one end has transmitted a disturbance that resulted in a detection, then we can deduce what the result at the other end will be.

Since they don't produce entangled state stats, then presumably there's a range of |La - Lb| > 0 that allows the contingent deductions for |a-b| = 0 and 90 degrees, but not the full range of entangled state stats.

Anyway, La and Lb don't even have to represent optical vectors. |La - Lb| can be taken to denote the relationship between any relevant local hidden variable subset(s) of H. Or we can just leave it out. I'm not pushing an lhv description. I think that's impossible. This thread is discussing why that's impossible.

The point is that P(B|AH) /= P(B|H) holds for certain polarizer settings without implying ftl info transmission.

Since this violation of P(AB|H) = P(A|H)P(B|H) doesn't imply ftl info transmission, P(AB|H) = P(A|H)P(B|H) isn't a locality condition but rather, strictly speaking, a local hidden variable condition.

Per the OP, since P(AB|H) = P(A|H)P(B|H) doesn't hold for all settings, then it can't possibly model the situation that it's being applied to.

Per me, since P(AB|H) = P(A|H)P(B|H) requires that joint detection rate be expressed in terms of individual variable properties which don't determine it, then it can't possibly model the situation that it's being applied to.

The point of Bell's analysis was that lhv theories are ruled out because they would have to be in the separable form that he specified, and, as he noted, "the statistical predictions of quantum mechanics are incompatible with separable predetermination".
 
  • #92
DrChinese said:
(Now of course, you deny that there is such a thing as Entanglement in the sense that it can be a state which survives the creation of the photon pair. But I don't.)
I don't know what you mean here. Could you elaborate please?
 
  • #93
ThomasT said:
I don't know what you mean here. Could you elaborate please?

If one pushes local realism, one is asserting there is no ongoing connection between Alice and Bob. QM denies this. The connection is that Alice = Bob (at same settings) for any setting.
 
  • #94
ThomasT said:
Not exactly. Since we don't (can't) know, and therefore can't say anything about, what the values of La and Lb are, we can only say that, for |a-b| = 0 and 90 degrees, if the polarizer at one end has transmitted a disturbance that resulted in a detection, then we can deduce what the result at the other end will be.

...

But you say that photon pairs with a joint common cause (or however you term it) and a definite polarization should produce Entangled State stats. They don't. Your assumption cannot be correct. Only ENTANGLED photons - pairs in a superposition - have the characteristic that they produce Entangled State statistics.

According to your revised explanation above, photons with the special case where we have |HH> at 0 degrees should give |HH> or |VV> whenever A-B = 0. But they don't, as I mentioned. Instead they have Product State stats. Hey, if the special case fails, how does your general case hold?
 
  • #95
DrChinese said:
But you say that photon pairs with a joint common cause (or however you term it) and a definite polarization should produce Entangled State stats. They don't. Your assumption cannot be correct. Only ENTANGLED photons - pairs in a superposition - have the characteristic that they produce Entangled State statistics.
Entangled state stats are compatible with the assumption that the photons have a local common cause, say, via the emission process (you can interpret the emission models this way). It's just that you can't denote the entangled state in terms of the individual properties of the separated photons -- because that's not what's being measured in the joint context.

DrChinese said:
According to your revised explanation above, photons with the special case where we have |HH> at 0 degrees should give |HH> or |VV> whenever A-B = 0. But they don't, as I mentioned.
You said that there are cases where pdc photons exhibit the |a-b| = 0 and 90 degrees perfect correlations, but not the polarization entanglement stats. And I said ok, but that doesn't diminish the fact that assuming a local common cause for photons that do produce polarization entanglement stats is compatible with the perfect correlations and hence P(B|H) /= P(B|AH) holds for the detection contingencies at those angles without implying ftl info transmission.
 
  • #96
DrChinese said:
If one pushes local realism, one is asserting there is no ongoing connection between Alice and Bob. QM denies this. The connection is that Alice = Bob (at same settings) for any setting.
I don't follow. Are you saying that qm says there's a nonlocal 'connection' between the observers? I don't think you have to interpret it that way.
 
  • #97
ThomasT said:
I don't follow. Are you saying that qm says there's a nonlocal 'connection' between the observers? I don't think you have to interpret it that way.

Sure it does. There is a superposition of states. Observation causes collapse (whatever that is) based on the observation. According to EPR, that makes Bob's reality dependent on Alice's decision. Now, both EPR and Bell realized there were 2 possibilities: either QM is complete (no realism possible) or there is spooky action at a distance (locality not respected). But either way, the superposition means there is something different going on than a classical mixed state.

A local realist denies this, saying that there is no superluminal influence and that QM is incomplete because a greater specification of the system is possible. But Bell shows that QM, if incomplete, is simply wrong. That's a big pill to swallow, given 10,000 experiments (or whatever) that say it isn't.
 
  • #98
DrChinese said:
Observation causes collapse (whatever that is) based on the observation. According to EPR, that makes Bob's reality dependent on Alice's decision.
Or, we can assume that the correlated events at A and B have a local common cause. And, standard qm is not incompatible with that assumption.

DrChinese said:
Now, both EPR and Bell realized there were 2 possibilities: either QM is complete (no realism possible) or there is spooky action at a distance (locality not respected).
EPR said that either qm is INcomplete (local realism possible) or there is spooky action at a distance (local realism impossible -- a detection at one end is instantaneously determining the reality at the other end -- in which case locality would be out the window). Qm is obviously incomplete as a physical description of the underlying reality. All you have to do is look at the individual results (which, by the way, qm isn't incompatible with an lhv account of) to ascertain that. (But that doesn't entail that a viable lhv account of entanglement is possible.) The reason that the qm treatment is, in a certain sense, a 'complete' account of the joint entangled situation is that the information necessary to predict individual detection SEQUENCES isn't necessary to predict joint detection RATES. But, again obviously, qm isn't, in the fullest sense, a complete account of the joint entangled context either, because it can't predict the order, the sequences, of the coincidental results. It can only predict the coincidence RATE, and for that all that's needed is |a - b| and the assumption that whatever |a - b| is analyzing is the same at both ends for any given coincidence window -- and that relationship, that sameness, is compatible with the assumption of a local common cause (even if qm doesn't explicitly say that; as I've mentioned, the emission model(s) can be interpreted that way).

DrChinese said:
But either way, the superposition means there is something different going on than a classical mixed state.
I agree. We infer that the superposition (via the preparation) has been experimentally realized when we observe that the entangled state stats have been produced -- which differ from the classical mixed state stats. But this has nothing to do with the argument(s) presented in this thread.

DrChinese said:
A local realist denies this, saying that there is no superluminal influence and that QM is incomplete because a greater specification of the system is possible.
I think we agree that lhv theories of entangled states are ruled out. We just differ as to why they're ruled out. But it's an important difference, and one worth discussing. I don't think that a greater specification of the system, beyond what qm offers, is possible. But I also think that it's important to understand why this doesn't imply nonlocality or ftl info transmission.

I do very much appreciate your comments and questions as they spur me to refine how I might communicate what I intuitively see.

DrChinese said:
But Bell shows that QM, if incomplete, is simply wrong. That's a big pill to swallow, given 10,000 experiments (or whatever) that say it isn't.
Qm, like any theory, can be an incomplete description of the underlying physical reality without being just simply wrong. I think Bell showed just what he said he showed, that a viable specification of the entangled state (ie., the statistical predictions of qm) is incompatible with separable predetermination. However, in showing that, he didn't show that separable predetermination is impossible in Nature, but only that the hidden variables which would determine individual detection SEQUENCES are not relevant wrt determining joint detection RATES. A subtle, but important, distinction.

Regarding billschnieder's argument, I'm not sure that what he's saying is equivalent to what I'm saying, but it seems to accomplish the same thing wrt Bell's ansatz, which is that it can't correctly model entanglement setups. (billschnieder might hold the position, with, e.g., 't Hooft et al., that some other representation of local reality might be possible which would violate BIs, or that could be the basis for a new quantitative test which qm and results wouldn't violate. That isn't my position. I agree with Bell, you, et al., who think that Bell's ansatz is the only form that an explicit lhv theory of entanglement can take; but since this form can't possibly model the situation it's being applied to, independent of the tacit assumption of locality, lhv theories of entanglement are ruled out independent of the tacit assumption of locality. We simply can't explicate that tacit assumption wrt the joint context, because that would require us to express the joint results in terms of variables which don't determine the joint results.)

Anyway, it seems that we can dispense with considerations of the minimum and maximum propagation rates of entangled particle 'communications' and, hopefully, focus instead on the real causes of the observed correlations. Quantum entanglement is a real phenomenon, and it's certainly reasonable to suppose that it's a result of the dynamical laws which govern any and all waves in any and all media. That is, it's reasonable to suppose that there are fundamental wave dynamics which apply to any and all scales of behavior.

After all, why is qm so successful? Could it be because wave behavior in undetectable media underlying quantum instrumental phenomena isn't essentially different than wave behavior in media that we can see?

With apologies to billschnieder for my ramblings, and returning the focus of this thread to billschnieder's argument, I think that he's demonstrated the inapplicability of Bell's ansatz to the joint entangled situation. And since P(B|H) /= P(B|AH) can hold without implying ftl info transmission, the failure of P(AB|H) = P(A|H)P(B|H) doesn't imply ftl info transmission either.
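As a toy illustration of that last claim (my own example; it assumes H is coarse -- i.e., H says the pair shares some polarization lambda, uniform on [0, pi), without fixing lambda's value): learning Alice's outcome updates the posterior over lambda and hence the estimate of Bob's outcome, with nothing but the common cause at the source doing any work.

```python
import numpy as np

# Coarse background H: shared polarization lam ~ Uniform[0, pi), value unknown.
lam = np.linspace(0, np.pi, 10_000)
prior = np.ones_like(lam) / len(lam)

a = b = 0.0                     # both polarizers at the same angle
p_A = np.cos(a - lam) ** 2      # P(A passes | lam), Malus-law sketch
p_B = np.cos(b - lam) ** 2      # P(B passes | lam)

P_B_given_H = np.sum(prior * p_B)                # = 1/2
posterior = prior * p_A / np.sum(prior * p_A)    # Bayes update on A = pass
P_B_given_AH = np.sum(posterior * p_B)           # = 3/4

print(P_B_given_H, P_B_given_AH)  # conditioning on A changes the estimate
                                  # of B; nothing travels between the wings
```

Here P(B|H) = 1/2 but P(B|AH) = 3/4. Whether H may legitimately be taken as coarse in this way is, I take it, exactly the point of contention with JesseM's "every fact in the past light cone" stipulation.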

Beyond this, the question of whether ANY lhv theory of entanglement is possible might be considered open. My answer is no, based on the following consideration: all disproofs of lhv theories, including those not based directly on Bell's ansatz, involve limitations on the range of entanglement predictions due to explicitly local hidden variables a la EPR. But it's been shown that these variables are mooted in the joint (entangled) situation, and explicit lhv formulations of entanglement bring us back to Bell's ansatz or some variation of it. So lhv theories (of the sort conforming to EPR's notion of local reality, anyway) seem to be definitively ruled out.
 
  • #99
Apologies to billschnieder if I've got his argument wrong.

The usual:
1) Bell's ansatz correctly represents local-causal hidden variables
2) Bell's ansatz necessarily leads to Bell's inequalities
3) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is not Locally causal.

Per billschnieder:
1) Bell's ansatz incorrectly represents local-causal hidden variables
2) Bell's ansatz necessarily leads to Bell's inequalities
3) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is incorrectly represented by Bell's ansatz.

Per ThomasT:
1) Bell's ansatz correctly represents local-causal hidden variables
2) Bell's ansatz incorrectly represents the relationship between the local-causal hidden variables
3) The experimental situation is measuring this relationship
4) Bell's ansatz necessarily leads to Bell inequalities
5) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is incorrectly represented by Bell's ansatz.

or to put it another way:
1) Bell's ansatz is the only way that local hidden variables can explicitly represent the experimental situation
2) This representational requirement doesn't express the relationship between the hidden variables
3) The experimental situation is measuring this relationship
etc.
Conclusion: Therefore the real physical situation of the experiments is, necessarily, incorrectly represented by Bell's ansatz.

We can continue with:
1) Any lhv representation of the experimental situation must conform to Bell's ansatz or some variation of it.
Then, given the foregoing, we can conclude:
Therefore lhv representations of entanglement are impossible.

But of course, per billschnieder's original point, this doesn't tell us anything about Nature.
 
  • #100
JesseM, I'm here to learn. You didn't reply to my reply to your post where you stated:

JesseM said:
If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission ...
This isn't yet clear to me. If we assume a relationship between the polarizer-incident disturbances due to a local common origin (say, emission by the same atom), then doesn't the experimental situation allow that both Alice and Bob know at the outset (ie., the experimental preparation is in the past light cones of both observers) that, for certain settings, if A=1 then B=1 and if A=0 then B=0 (and, for other settings, if A=1 then B=0 and vice versa) without implying FTL transmission?
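A minimal deterministic sketch of that preparation-based contingency (my own toy, just to show that no FTL is needed): if both wings apply the same fixed outcome function to the shared lambda, then equal settings give A = B on every trial, and settings differing by pi/2 give A /= B on essentially every trial.

```python
import numpy as np

rng = np.random.default_rng(2)

def outcome(setting, lam):
    # Same deterministic rule at both wings; lam is fixed at the source.
    return 1 if np.cos(2 * (setting - lam)) >= 0 else 0

lam = rng.uniform(0, np.pi, 1000)  # shared lambda, in both past light cones
a = np.pi / 6
A = np.array([outcome(a, l) for l in lam])
B_same = np.array([outcome(a, l) for l in lam])              # b = a
B_perp = np.array([outcome(a + np.pi / 2, l) for l in lam])  # b = a + pi/2

print(np.all(A == B_same))   # True: if A=1 then B=1, if A=0 then B=0
print(np.mean(A != B_perp))  # ~1.0: if A=1 then B=0, and vice versa
```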

In a reply to billschnieder you stated:

JesseM said:
... if P(B|L) was not equal to P(B|LA), that would imply P(A|L) is not equal to P(A|BL), meaning that learning B gives us some additional information about what happened at A, beyond whatever information we could have learned from anything in the past light cone of B ...
I agree that if P(B|L) /= P(B|AL) then P(A|L) /= P(A|BL), but don't both of those inequalities follow from the contingencies, for certain settings, implied by the experimental preparation -- which is in the past light cones of both A and B?

So, it does seem that P(AB|L) /= P(A|L)P(B|L) without implying FTL transmission.
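The symmetry noted above is just Bayes' theorem, P(A|BL) = P(B|AL)P(A|L)/P(B|L), so the two inequalities stand or fall together. As a numeric check (the values are a hypothetical toy, chosen to match the coarse-H Malus sketch earlier at equal settings, not data):

```python
# Toy values given L: P(A|L), P(B|L), and the joint P(AB|L).
P_A, P_B, P_AB = 0.5, 0.5, 0.375

print(P_AB / P_A)          # P(B|AL) = 0.75 != P(B|L): A informs B
print(P_AB / P_B)          # P(A|BL) = 0.75 != P(A|L): B informs A, symmetrically
print(P_AB != P_A * P_B)   # True: P(AB|L) != P(A|L)P(B|L), no FTL required
```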

In another reply to billschnieder you stated:

JesseM said:
Consider the possibility that you may not actually understand everything about this issue, and therefore there may be points that you are missing. The alternative, I suppose, is that you have no doubt that you already know everything there is to know about the issue, and are already totally confident that your argument is correct and that Bell was wrong to write that equation ...
Is it possible that the equation is wrong for the experimental situation, but that Bell was, in a most important sense, correct to write it that way vis-à-vis EPR? Haven't hidden variables historically (vis-à-vis EPR) been taken to refer to underlying parameters that would affect the prediction of individual results? If so, then wouldn't a formulation of the joint situation in terms of that variable have to take the form of Bell's ansatz? If so, then Bell's ansatz is, in that sense, correct. However, what if the underlying parameter that's being jointly measured isn't the underlying parameter that determines individual results? For example, if it's the relationship between the optical vectors of disturbances emitted during the same atomic transition, and not the optical vectors themselves, that's being jointly measured, then wouldn't that require a different formulation for the joint situation?

Do the assumptions that (1) this relationship is created during the common origin of the disturbances via emission by the same atom, and that (2) it therefore exists prior to measurement by the crossed polarizers, and that (3) counter-propagating disturbances are identically polarized (though the polarization vector of any given pair is random and indeterminable), contradict the qm treatment of this situation? If not, then might the foregoing be taken as an understanding of violations of BIs due to nonseparability of the joint entangled state?

I think that Bell showed just what he said he showed -- that the statistical predictions of qm are incompatible with separable predetermination. According to my attempt at disambiguation, this means that joint experimental situations which produce (and for which qm correctly predicts) entanglement stats can't be viably modeled in terms of the variable or variables which determine individual results.

Any criticisms of, or comments on, any part of the above viewpoint are appreciated.
 
