Bell's Theorem and probability theory

  • #51


JesseM said:
P(AB) is not equal to P(A)P(B), so if you find out B and you don't already know anything about the hidden variables, this gives you new information about P(A).
mn4j said:
What do you mean by find out B. What are the hidden variables in this case that could give me any information about A?
Again, the assumption is that the two balls given to Alice and Bob were taken from the set D, E, and F, with the assumptions already given about the probabilities associated with each ball. But I'm talking about a situation where you know the balls were selected this way, but you don't know which specific ball was given to Alice and which specific ball was given to Bob. In this case, if you find out that Bob's ball lit up red, this will change your estimate of the probability that Alice's ball will light up red: P(A) is different from P(A|B).
mn4j said:
I do not agree. The statement I highlighted in bold above shows you clearly what I have been saying from the beginning. Logical dependence is different from physical dependence. In that sentence you are implying that lack of physical dependence implies lack of logical dependence. Although physical dependence implies logical dependence, lack of physical dependence does not imply lack of logical dependence. Do you disagree?
No, I don't disagree with that statement in general. But we are talking about the specific case where the outcome of a given ball's button being pressed is assumed to be fully determined physically by the internal mechanisms in that ball, which are not in communication with the outside world. Are you saying that even when we already know which physical mechanisms are inside the balls given to Alice and Bob, you still think that knowing whether Bob's ball lit up or not would cause us to revise our estimate of the probability that Alice's ball lit up? Suppose instead Alice and Bob were flipping coins, and we know that each coin's probability of coming up heads or tails is physically determined by its physical structure, and that both coins are fair coins that have a 50/50 chance of landing heads or tails. Would knowing that Bob's coin landed heads somehow cause you to revise your estimate of the probability that Alice's came up heads?

In a universe with local realist laws, the results of a physical experiment on any system are assumed to be determined by some set of variables specific to the region of spacetime where the experiment was performed. There can be a statistical correlation (logical dependence) between outcomes A and B of experiments performed at different locations in spacetime with a spacelike separation, but the only possible explanation for this correlation is that the variables associated with each system being measured were already correlated before the experiment was done, that the systems had "inherited" correlated internal variables from some event or events in the overlap of their past light cones. Do you disagree? If so, try to think of a counterexample that we can be sure is possible in a local realist universe (no explicitly quantum examples).

If you don't disagree, then the point is that if the only reason for the correlation between A and B is that the local variables \lambda associated with system #1 are correlated with the local variables associated with system #2, then if you could somehow know the full set of variables \lambda associated with system #1, knowing the outcome B when system #2 is measured would tell you nothing additional about the likelihood of getting A when system #1 is measured. In other words, while P(A|B) may be different from P(A), P(A|B\lambda ) = P(A | \lambda ). If you disagree with this, then I think you just haven't thought through carefully enough what "local realist" means.
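To make that distinction concrete, here is a minimal numerical sketch in Python (the two hidden states and all the numbers are invented purely for illustration; they are not the D/E/F probabilities discussed earlier):

Code:
# Toy local model: two equally likely hidden states; given the hidden state,
# A and B are produced independently by purely local mechanisms.
p_lambda = {"lambda1": 0.5, "lambda2": 0.5}          # assumed prior (illustrative)
p_A_given_lambda = {"lambda1": 0.9, "lambda2": 0.1}  # assumed (illustrative)
p_B_given_lambda = {"lambda1": 0.9, "lambda2": 0.1}  # assumed (illustrative)

p_A = sum(p_lambda[l] * p_A_given_lambda[l] for l in p_lambda)
p_B = sum(p_lambda[l] * p_B_given_lambda[l] for l in p_lambda)
p_AB = sum(p_lambda[l] * p_A_given_lambda[l] * p_B_given_lambda[l] for l in p_lambda)

print("P(A)   =", p_A)         # 0.5
print("P(A|B) =", p_AB / p_B)  # 0.82 -- learning B does shift P(A)...
for l in p_lambda:
    # ...but once lambda is known, B adds nothing, since conditional on lambda
    # the two outcomes are independent by construction: P(A|B,lambda) = P(A|lambda).
    print(l, ": P(A|B,lambda) =", p_A_given_lambda[l], "= P(A|lambda)")

Learning B only matters to the extent that it tells you something about \lambda; once \lambda is given, it tells you nothing more.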
mn4j said:
You can not impose a logical independence condition at will in your hypothesis space.
No, but you can if it makes sense given the physical assumptions of the problem. For example, while I can't impose the condition P(AB)=P(A)P(B) in general, if I know I'm dealing with a situation where Bob and Alice are both given different fair coins and asked to flip them, and A=Alice got heads and B=Bob got heads, then of course in this case it makes sense to say P(AB)=P(A)P(B). Similarly, if I know I'm dealing with a situation where Alice got a ball that has a 90% chance of lighting up when the button was pressed due to some internal mechanisms that aren't in communication with anything outside the ball, and where Bob got a ball that has a 70% chance of lighting up due to similar internal mechanisms, then if A=Alice's ball lit up and B=Bob's ball lit up, I can of course say that P(AB)=P(A)P(B) here.
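If it helps, here is a quick simulation sketch of that ball example (assuming, as stated, that the two internal mechanisms really are independent):

Code:
import random

random.seed(0)
trials = 100_000
both = 0
for _ in range(trials):
    alice_lit = random.random() < 0.9  # Alice's ball: 90% chance, local mechanism
    bob_lit = random.random() < 0.7    # Bob's ball: 70% chance, local mechanism
    both += alice_lit and bob_lit
print(both / trials)  # ~0.63, i.e. P(AB) = P(A)P(B) = 0.9 * 0.7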
mn4j said:
It is part of the mechanism by which you reason out the problem. What you have done is to break the calculator before asking me to use it to calculate a problem. The problem therefore becomes ill-formed because if I know that Bob got F, it DOES tell me something about the probability that Alice got D.
Huh? I never asked you to calculate the probability that Alice got D given that Bob got F. (of course I'd agree that if you don't know what Alice got, knowing Bob got F increases the probability Alice got D!) The only two problems I gave were: #1, the one where you don't know which ball either of them got, and you have to calculate the probability that both balls lit up; and #2, the one where you already know which balls both of them got, and you have to calculate the probability both balls lit up given that knowledge.
 
  • #52


mn4j said:
This is not correct. There are well-defined rules. Read up on the "principle of indifference" or "maximum entropy".
They are well-defined, but you have been misled by Jaynes if you think they are universally agreed upon by all people who study statistics--see this page on the internal division between frequentists and Bayesians among statisticians. There is no objective reason why adopting the principle of indifference is more correct than adopting some other rule--indeed, it is possible to think of well-defined physical situations where, if you don't give enough information about the situation for the frequentist probabilities to be clear, then if one person fills in the blanks using the principle of indifference while a second person uses a different prior distribution, the second person may be closer to the "correct" frequentist probabilities. For example, if I tell two people "where I live, on some days it rains and on others it doesn't, what's the probability it rains on a given day?", then if person 1 uses the principle of indifference here he'll say the answer is 0.5, while if the other person blindly guesses the answer is 0.2, it may in fact be correct that the long-run fraction of rainy days in my area is closer to 0.2 than 0.5.

In any case, I hope you can agree that in some cases enough information about a problem is given so that there is no need to make use of the principle of indifference--these are precisely the problems where enough information is given so we can see exactly what the frequencies of different events would be over a large number of trials. And in any discussion of the proof of Bell's theorem, it should be understood that the proof takes the point of view of some omniscient being who knows the value of all physical variables that are relevant to determining the outcome of experiments, even if some of these variables would be "hidden" to human experimenters--based on this hypothetical point of view, it shows that it's possible to prove that in a local realist universe there'd be certain probability relations between the types of events which can be seen by human experimenters, and that these probability relations are in fact violated by quantum mechanics.
mn4j said:
When physicists say "any two configurations of atoms with the same total energy are equally probable at equilibrium" you don't expect that someone has actually counted the atoms to calculate their probability do you? They are using the principle of indifference.
It's a physical fact of our universe that the principle of indifference happens to work well in situations where we want to calculate the future evolution of a system in a certain observed macrostate. On the other hand, since the laws of physics are time-symmetric, if you know the complete physical state of a system at a given time you can calculate backwards to see what its state would have been at earlier times--if you can only see the current macrostate, assuming the principle of indifference with regard to microstates would lead you to predict the system was in a higher entropy state in the past, a prediction that would be wrong in most cases. See Loschmidt's paradox.

Anyway, as I said, discussions of the principle of indifference are irrelevant in the context of Bell's theorem, because the theorem is explicitly based on imagining that we (or some imaginary omniscient observer) could know the full state of every system, with no information lacking.
mn4j said:
Still I would have liked to see how you calculate on this question:
Z: Two red LEDs, D and E, were made on a circuit so that when D was observed to be lit, the probability of E lighting up was 0.2, and when E was lit, the probability of D lighting up was 0.2. Also, the circuit was designed such that at least one LED was lit on each button press, with no other bias imposed on the LEDs other than the correlation above. D is given to Alice and E to Bob, and the button is pressed.
A: Alice sees red
B: Bob sees red

What probability will you assign to P(AB|Z)?
And please point out the error you claim exists in my answer to it:
P(AB|Z) = P(A|BZ)P(B|Z) according to the product rule.

P(A|BZ) = 0.2
P(B|Z) = P(A|Z) = 0.5, since there is no bias between A and B; they are both equally likely. I suspected you would have a problem with this one because it appears you do not understand probabilities as meaning more than frequencies.

therefore P(AB|Z) = 0.2 * 0.5 = 0.1
Actually I realize I was mistaken when I said the problem didn't give enough information; I was thrown off by your "since there is no bias between A and B" comment. In fact it is possible to deduce the likelihood of A and B in a frequentist picture here. Just let x be the fraction of trials where only Alice's LED lit up, y be the fraction where they both lit up, and z be the fraction where only Bob's lit up. Then the fraction of trials where Bob's lit up is (y + z), in which case the fact that Alice's LED has only a 0.2 probability of lighting up as well when Bob's is lit up means y/(y + z) = 0.2. Likewise, the fraction of trials where Alice's lit up is (x + y), in which case y/(x + y) = 0.2. From these equations we can conclude that z=4y and x=4y, so since we know that x + y + z = 1, that gives us y=1/9, which means x=z=4/9 (note that this means P(A|Z) = P(B|Z) = 5/9, not 0.5...you apparently forgot that these possibilities are not mutually exclusive so they don't have to add up to 1!) So, the correct probability for both lighting up is just y, i.e. 1/9. Obviously this is different from the answer you got of 0.1.

If you think your answer is right and mine is wrong, please give me your own answers for P(Alice saw red, Bob didn't see red | Z), and P(Alice didn't see red, Bob saw red | Z). Hopefully you'd agree that since at least one LED always lights up red, if we take these probabilities and add them to P(Alice saw red, Bob saw red | Z), i.e. P(AB|Z) which you claim is 0.1, the sum should be equal to 1?
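Here is the same algebra restated as a quick sanity check, for anyone who wants to verify the numbers:

Code:
# x = fraction of trials where only Alice's LED lit, y = both lit, z = only Bob's lit.
# The constraints from above: y/(y+z) = 0.2, y/(x+y) = 0.2, x + y + z = 1,
# which give z = 4y, x = 4y and hence 9y = 1.
y = 1 / 9
x = 4 * y
z = 4 * y
assert abs(y / (y + z) - 0.2) < 1e-12  # P(Alice red | Bob red) = 0.2
assert abs(y / (x + y) - 0.2) < 1e-12  # P(Bob red | Alice red) = 0.2
assert abs(x + y + z - 1) < 1e-12      # at least one LED lights on every press
print("P(AB|Z) =", y)                  # 1/9, not 0.1
print("P(A|Z) = P(B|Z) =", x + y)      # 5/9, not 0.5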
 
  • #53


You are correct that my answer of 0.1 was wrong, and yours (1/9) is correct. The reason mine is wrong is that in calculating one of the probabilities, I did not consider all the information provided in Z, even though I knew the information. In other words, Z did not mean the same thing in my calculation of every term.

Since we have come full circle, please answer the following simply with yes or no.

1. When you reduce
P(AB) = \sum_i P(AB|Z_i)P(Z_i)
to
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i)
because of independence in the specific Z_i subset, do you agree that Z_i MUST mean the same thing in all the terms P(A|Z_i), P(B|Z_i) and P(Z_i)?

2. Do you agree that if Z_i can be split up into individual pieces of information a_i, b_i, \lambda then
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i)
is equivalent to
P(AB) = \sum_i P(A|a_ib_i \lambda)P(B|a_ib_i \lambda)P(a_ib_i \lambda)
but is NOT equivalent to
P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda)

3. Do you believe that if knowledge of a_i gives us no additional information about B, and if knowledge of b_i gives us no additional information about A, and knowledge of a_i and b_i gives us no additional information about \lambda, then
P(AB) = \sum_i P(A|a_ib_i\lambda)P(B|a_ib_i\lambda)P(\lambda)
should give us the same result as
P(AB) = \sum_i P(A|a_i\lambda)P(B|b_i\lambda)P(\lambda)

4. Do you agree that in Bell's proof, calculating with P(AB) = \sum_i P(A|a_ib_i\lambda)P(B|a_ib_i\lambda)P(\lambda) gave a result which was in agreement with Quantum mechanics but calculating with P(AB) = \sum_i P(A|a_i\lambda)P(B|b_i\lambda)P(\lambda) gave a result which was not.

5. If you agree with 3 and 4, then can you explain to me how the two statements can both be true?
 
  • #54


mn4j said:
1. When you reduce
P(AB) = \sum_i P(AB|Z_i)P(Z_i)
to
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i)
because of independence in the specific Z_i subset, do you agree that Z_i MUST mean the same thing in all the terms P(A|Z_i), P(B|Z_i) and P(Z_i)?
Yes, for any value of i, Z_i should mean the same thing everywhere. Of course Z_1 means something different than Z_2 and so forth...one might involve the hidden-variable condition that Alice got ball D and Bob got ball E, another might involve the hidden-variable condition that Alice got ball D and Bob got ball F. You have to sum over all possible hidden-variable conditions in the above equation to get the total probability of P(AB). Agreed?
mn4j said:
2. Do you agree that if Z_i can be split up into individual pieces of information a_i, b_i, \lambda then
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i)
is equivalent to
P(AB) = \sum_i P(A|a_ib_i \lambda)P(B|a_ib_i \lambda)P(a_ib_i \lambda)
but is NOT equivalent to
P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda)
Yes, I agree that if Z_i can be split up in the way you suggest, the sum is equivalent to the first option but not in general equivalent to the second option. But your equation actually has little relevance to Bell's proof, because you haven't put any subscript on the \lambda, implying you think the hidden variables should be exactly the same on every trial! Of course this was not what Bell assumed, he imagined the hidden variables associated with the particles could be different on different trials, and that these hidden variables would explain why an experimenter measuring on a particular axis sometimes gets spin-up and sometimes gets spin-down. So, you really need to give \lambda a subscript and sum over all possible values of this subscript. Also, of course on every trial where the first experimenter makes measurement a1 we do not assume the other experimenter makes b1, so you need multiple subscripts on those letters too. So if you want to write an equation more in keeping with Bell's proof, it should be something like:
P(AB) = \sum_i \sum_j \sum_k P(A|a_i b_j \lambda_k)P(B |a_i b_j \lambda_k)P(a_i b_j \lambda_k)
Or if you assume we are looking at the subset of trials where experimenter #1 made measurement a and experimenter #2 made measurement b (where a and b now stand for two specific measurements rather than variables) we can write:
P(AB|ab) = \sum_k P(A|ab \lambda_k)P(B|ab \lambda_k)P(\lambda_k | ab)
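To make the structure of that sum concrete, here is a small sketch with an invented deterministic hidden-variable model (the states, angles and probabilities are purely illustrative, not anything from Bell's paper):

Code:
# Each hidden state lambda_k predetermines a definite outcome (+1 or -1) for every
# possible setting. With the settings a, b fixed, P(AB|ab) is the sum over lambda_k
# of P(A|ab,lambda_k) P(B|ab,lambda_k) P(lambda_k|ab), and with no conspiracy
# P(lambda_k|ab) = P(lambda_k).
hidden_states = {
    # lambda_k: (outcomes for Alice's particle, outcomes for Bob's particle)
    "lambda1": ({"0deg": +1, "120deg": +1, "240deg": -1},
                {"0deg": -1, "120deg": -1, "240deg": +1}),
    "lambda2": ({"0deg": +1, "120deg": -1, "240deg": -1},
                {"0deg": -1, "120deg": +1, "240deg": +1}),
}
p_lambda = {"lambda1": 0.5, "lambda2": 0.5}  # assumed distribution over hidden states

a, b = "0deg", "120deg"      # the two fixed measurement settings
A_wanted, B_wanted = +1, +1  # the event "Alice gets +1 AND Bob gets +1"

p_AB_given_ab = sum(
    p_lambda[k]
    * (1.0 if alice[a] == A_wanted else 0.0)  # P(A | a, b, lambda_k) is 0 or 1 here
    * (1.0 if bob[b] == B_wanted else 0.0)    # P(B | a, b, lambda_k) is 0 or 1 here
    for k, (alice, bob) in hidden_states.items()
)
print("P(A=+1, B=+1 | a, b) =", p_AB_given_ab)  # 0.5 in this toy model

Note that in this toy model each hidden state gives opposite predetermined answers to Alice's and Bob's particles at every angle, so they always disagree when the two settings happen to be equal.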
mn4j said:
3. Do you believe that if knowledge of a_i gives us no additional information about B, and if knowledge of b_i gives us no additional information about A, and knowledge of a_i and b_i gives us no additional information about \lambda, then
P(AB) = \sum_i P(A|a_ib_i\lambda)P(B|a_ib_i\lambda)P(\lambda)
should give us the same result as
P(AB) = \sum_i P(A|a_i\lambda)P(B|b_i\lambda)P(\lambda)
Yes, although here the equation more in keeping with Bell's proof would be P(AB) = \sum_i \sum_j \sum_k P(A|a_i \lambda_k)P(B |b_j \lambda_k)P(\lambda_k)
mn4j said:
4. Do you agree that in Bell's proof, calculating with P(AB) = \sum_i P(A|a_ib_i\lambda)P(B|a_ib_i\lambda)P(\lambda) gave a result which was in agreement with Quantum mechanics but calculating with P(AB) = \sum_i P(A|a_i\lambda)P(B|b_i\lambda)P(\lambda) gave a result which was not.
As before, the idea of Bell's proof is to do a sum over possible values of \lambda (he did an integral because he imagined it taking a continuous range of possible values), but it'd be easy to modify your equations above in this way. But I don't know what you mean by "calculating with"--of course we have no idea of the actual possible values of \lambda_k and thus no idea of the exact value of terms like P(A|a_i\lambda_k) or P(\lambda_k)! The idea is just that if we imagine there is a "conspiracy" in the initial conditions that determines both \lambda_k on a given trial and determines what choices of measurements a_i and b_j the experimenters will make on the same trial, then it's possible to imagine that they could be correlated in a way that would be consistent with quantum predictions even in a universe with local realist physics (I can give you a specific numerical example if you like). It's only if you specifically assume no correlation between the experimenter's choice of measurement on a given trial and the source's "choice" of hidden variables to assign the particles on a given trial that you can derive the Bell inequalities which are inconsistent with QM.
 
  • #55


Let's see here:

(Q1 & Q2): So you agree that in
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i) ,
Z_i MUST mean exactly the same thing in all the terms (see your answer to Q1.), and you also believe that

P(AB) = \sum_i P(A|a_ib_i \lambda_i)P(B|a_ib_i \lambda_i)P(a_ib_i \lambda_i)
can be equivalent to
P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda_i) in the case being considered by Bell (your answer to Q2).

Which implies that in Bell's thinking (and yours) there MUST be no dependence between a_i, b_i and \lambda_i, AND there MUST be no dependence between any pair of variables in the complete set (a_i, b_i and \lambda_i), irrespective of what i is. For example, there MUST be no dependence between a_1 and a_2 or between b_5 and \lambda_{11}. Do you agree with this?


Now do you agree that time t_i can also be a piece of the information contained in Z_i? If you do, can you also envision that the settings at both stations, a_i, b_i, could have time-like correlated components, in which case integration over time cannot be factorized and the settings a_i will be correlated with b_i without any need for spooky action at a distance? In other words, will this new scenario, not considered by Bell, be a local hidden variable model?

(Q3 & Q4):
Do you agree that unless the above conditions are valid, the correlation obtained by Bell will be consistent with QM?

Let me quote Bell's exact words here:
Thirdly, and finally, there is no difficulty in reproducing the quantum mechanical correlation (3) if the results A and B in (2) are allowed to depend on \vec{b} and \vec{a} respectively as well as on \vec{a} and \vec{b}. For example, replace \vec{a} in (9) by \vec{a'}, obtained from \vec{a} by rotation towards \vec{b} until
1 - \frac{2}{\pi}\theta' = \cos \theta
where \theta' is the angle between \vec{a'} and \vec{b}. However, for given values of the hidden variables, the results of measurements with one magnet now depend on the setting of the distant magnet, which is just what we would wish to avoid.

In other words, Bell's proof is only valid for the set of hidden variable theories consistent with his assumptions about independence outlined above. Do you agree? (I know you believe that every possible hidden variable theory is covered by it)

Now let us look at this equation which is just the product rule:
P(AB) = \sum_i P(A|a_ib_i \lambda_i)P(B|a_ib_i \lambda_i)P(a_ib_i\lambda_i)

Consider a hidden variable theory in which time is considered a variable as well so that a_i, b_i and \lambda_i are time dependent variables. Note that time dependence of the settings at the stations does not take away the experimenters free will to change a_i or b_i. For example, the measuring device could be a pendulum and the experimenter has the free will to choose the length of the string. It is also not difficult to imagine that the Stern-Gerlach magnet could be made up of electrons exhibiting harmonic motion, even though the experimenter can freely choose the angle of the magnet. At the same time, it is not difficult to imagine that the electrons leaving the source would have the same harmonic motion, for instance due to the fact that they are governed by the same physical law, without any spooky action at a distance.

In this case, one can refactor the above equation easily to
P(AB) = \sum_i P(A|a_ib_i t_i\lambda_i)P(B|a_ib_it_i \lambda_i)P(a_ib_it_i\lambda_i)

Do you agree that in this case, since all the variables after | are dependent on time and thus on each other, it is not valid to reduce this equation to
P(AB) = \sum_i P(A|a_i)P(B|b_i )P(\lambda_i)
In which case such a hidden variable theory was not considered by Bell. If you do, how can you say Bell's theorem disproves all hidden variable theories? If you think this is not a hidden variable theory, tell me why.

Consider a different hidden variable theory in which the material of which the magnet is made is a deterministic learner. By this I mean that, on interacting with an electron from the i-th measurement event, the material updates its state based on the value of the hidden variable of the electron and its own hidden variable value. In other words, there is some memory effect left over from the interaction. Then when the electron from the (i+1)-th event arrives, the same process repeats over and over.

Do you now see that in such a case, a_i is not independent of a_{i+1}, and therefore Bell's factorisation P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda_i) is no longer valid? Do you agree with this? If you think this is not a hidden variable theory, explain why.

As you hopefully can see now, Bell's inequalities are just mathematical theorems, whose results are a consequence of the specific assumptions imposed, outside of which they will NOT be valid. Bell's theorem has no experimental basis and has NEVER been proven experimentally! All known experiments violate Bell's theorem! Quantum Mechanics violates Bell's theorem! Think about that for a moment.
 
  • #56


mn4j said:
Let's see here:

(Q1 & Q2): So you agree that in
P(AB) = \sum_i P(A|Z_i)P(B|Z_i)P(Z_i) ,
Z_i MUST mean exactly the same thing in all the terms (see your answer to Q1.), and you also believe that

P(AB) = \sum_i P(A|a_ib_i \lambda_i)P(B|a_ib_i \lambda_i)P(a_ib_i \lambda_i)
can be equivalent to
P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda_i) in the case being considered by Bell (your answer to Q2).
I said in my answer that this equation is not really correct because you use the same subscript for both the measurement choices a_i and b_i and the hidden variable states \lambda_i, implying that each hidden variable state is associated with a unique measurement. In reality, if there's no correlation between the hidden variable states and measurements, then it should be possible to have trials where you have measurement a_1 and hidden variable state \lambda_1, trials where you have measurement a_1 and hidden variable state \lambda_4, trials where you have measurement a_3 and hidden variable state \lambda_1, etc. This is why you have to write it with a multiple sum like I did:

P(AB) = \sum_i \sum_j \sum_k P(A|a_i \lambda_k ) P(B|b_j \lambda_k ) P(\lambda_k)

Also, you made an error when you left the \lambda's out of P(A|a_i) and P(B|b_i); they should be included as I did above. I also noticed I made a small error of my own when writing the above equation--it's assumed we set things up so the experimenters make each combination of measurements equally frequently, so P(a1b1)=P(a2b1)=P(a3b1) etc., all have the same probability 1/N where N is the number of possible combinations. So really we should have P(AB) = \sum_i \sum_j \sum_k P(A|a_i \lambda_k ) P(B|b_j \lambda_k ) P(a_i b_j \lambda_k) which implies P(AB) = \sum_i \sum_j \sum_k P(A|a_i \lambda_k ) P(B|b_j \lambda_k ) P(a_i b_j ) P(\lambda_k) based on the assumption of the independence of measurements and hidden variables, which means the proper equation would have the extra constant factor 1/N:

P(AB) = \sum_i \sum_j \sum_k (1/N) P(A|a_i \lambda_k ) P(B|b_j \lambda_k ) P(\lambda_k)
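A quick Monte Carlo sketch of what that independence assumption means in practice (the setting labels and the distribution over \lambda are made up for illustration): if the experimenters' random choices and the source's choice of hidden state are drawn independently, the empirical joint frequency of (a_i, b_j, \lambda_k) approaches (1/N)P(\lambda_k).

Code:
import random
from collections import Counter

random.seed(1)
a_settings = ["a1", "a2", "a3"]
b_settings = ["b1", "b2", "b3"]
lambdas    = ["lam1", "lam2", "lam3"]
p_lambda   = [0.2, 0.5, 0.3]            # assumed source distribution (illustrative)
N = len(a_settings) * len(b_settings)   # number of setting combinations

trials = 200_000
counts = Counter()
for _ in range(trials):
    a = random.choice(a_settings)                       # Alice's free choice
    b = random.choice(b_settings)                       # Bob's free choice
    lam = random.choices(lambdas, weights=p_lambda)[0]  # source's choice, independent
    counts[(a, b, lam)] += 1

# Empirical P(a1, b2, lam2) versus the factorized value (1/N) * P(lam2)
print(counts[("a1", "b2", "lam2")] / trials, (1 / N) * p_lambda[1])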

mn4j said:
Which implies that in Bell's thinking (and yours) there MUST be no dependence between a_i, b_i and \lambda_i, AND there MUST be no dependence between any pair of variables in the complete set (a_i, b_i and \lambda_i), irrespective of what i is. For example, there MUST be no dependence between a_1 and a_2 or between b_5 and \lambda_{11}. Do you agree with this?
Yes. Of course with the independence of a-measurements from b-measurements, this is a situation that we arrange just by telling the experimenters to choose randomly on each trial (perhaps each uses a separate random-number generator, or each rolls a separate die or something), it isn't an assumption about the way the laws of physics work.
mn4j said:
Now do you agree that time t_i can also be a piece of the information contained in Z_i? If you do, can you also envision that the settings at both stations, a_i, b_i, could have time-like correlated components, in which case integration over time cannot be factorized and the settings a_i will be correlated with b_i without any need for spooky action at a distance? In other words, will this new scenario, not considered by Bell, be a local hidden variable model?
The a's and b's represent choices made by the experimenters, unless you think it would be impossible to set things up so that their choices are uncorrelated with one another, I don't see the relevance here. Remember, Bell's theorem is about picking the optimum experimental situation for ruling out local hidden variables, we don't have to consider arbitrary variations in the experimental settings that we control. We do have to consider variations in the nature of the hidden variables associated with the particles, since we don't control those--so, it would be appropriate to imagine if the hidden variables associated with each particle might vary over time. But remember that according to QM, if both experimenters measure along the same axis they'll always get opposite spins (or the same spins, depending on what particles are used and how they are entangled), even if they measure at different times. And in a local hidden variables theory we are assuming that there is no physical "communication" between the particles once they are separated, any correlation in their measured behavior must be due to local physical variables--which could be time-varying functions--that were "assigned" to each one in a correlated way when the particles were at a common location.

So, if we make the assumption that there's no correlation between the hidden variable functions assigned to each particle when they were in the same location and the experimenters' later choices about how/when to measure them, the only way to explain this perfect correlation when they are measured on the same axis (regardless of when the measurements are made) is if the hidden variables predetermine a single answer each particle will give to being measured on any given axis, and there's no time variation in this answer (though there could be time variation in other aspects of the hidden variables as long as they don't change what answer a given particle would give when measured on a particular axis at different times). Do you agree?
mn4j said:
In other words, Bell's proof is only valid for the set of hidden variable theories consistent with his assumptions about independence outlined above. Do you agree? (I know you believe that every possible hidden variable theory is covered by it)
Yes, I agree--the proof covers every possible hidden variable theory that meets the stated conditions--i.e. locality (there are nonlocal hidden variables theories which Bell's theorem doesn't apply to), realism, and the assumption about a lack of "conspiracy" in the initial conditions.

mn4j said:
Now let us look at this equation which is just the product rule:
P(AB) = \sum_i P(A|a_ib_i \lambda_i)P(B|a_ib_i \lambda_i)P(a_ib_i\lambda_i)
Consider a hidden variable theory in which time is considered a variable as well so that a_i, b_i and \lambda_i are time dependent variables. Note that time dependence of the settings at the stations does not take away the experimenters free will to change a_i or b_i. For example, the measuring device could be a pendulum and the experimenter has the free will to choose the length of the string. It is also not difficult to imagine that the Stern-Gerlach magnet could be made up of electrons exhibiting harmonic motion, even though the experimenter can freely choose the angle of the magnet. At the same time, it is not difficult to imagine that the electrons leaving the source would have the same harmonic motion, for instance due to the fact that they are governed by the same physical law, without any spooky action at a distance.
I don't understand how it would be relevant if the electrons in the magnet are exhibiting harmonic motion. Even if we assume there are some hidden variables in the electrons making up the magnet that are correlated with the hidden variables being measured, this does not in any way imply a correlation between the hidden variables of the particles being measured and the experimenter's choice of which setting to use. The different "settings" like a1 or a2 don't contain information about all the physical details of the measuring device, they only refer to the single visible aspect of the measurement that's being varied, in this case the detection angle. It may well be that hidden variables associated with the magnet are different on one trial with setting a2 than they are on a different trial with setting a2, it doesn't mean we treat them as different settings. You can include any hidden variables associated with the measuring device in \lambda if you like, it doesn't only have to refer to hidden variables associated with the particles being measured. All that matters is that in local realism, any correlation between physical variables (hidden or otherwise) in the local neighborhood of measurement #1 and physical variables in the local neighborhood of measurement #2 must be explained by common events in the overlap of the past light cones in these two regions, the idea I was talking about in post #51 when I said:
In a universe with local realist laws, the results of a physical experiment on any system are assumed to be determined by some set of variables specific to the region of spacetime where the experiment was performed. There can be a statistical correlation (logical dependence) between outcomes A and B of experiments performed at different locations in spacetime with a spacelike separation, but the only possible explanation for this correlation is that the variables associated with each system being measured were already correlated before the experiment was done, that the systems had "inherited" correlated internal variables from some event or events in the overlap of their past light cones. Do you disagree? If so, try to think of a counterexample that we can be sure is possible in a local realist universe (no explicitly quantum examples).

If you don't disagree, then the point is that if the only reason for the correlation between A and B is that the local variables \lambda associated with system #1 are correlated with the local variables associated with system #2, then if you could somehow know the full set of variables \lambda associated with system #1, knowing the outcome B when system #2 is measured would tell you nothing additional about the likelihood of getting A when system #1 is measured. In other words, while P(A|B) may be different from P(A), P(A|B\lambda ) = P(A | \lambda ). If you disagree with this, then I think you just haven't thought through carefully enough what "local realist" means.
mn4j said:
In this case, one can refactor the above equation easily to
P(AB) = \sum_i P(A|a_ib_i t_i\lambda_i)P(B|a_ib_it_i \lambda_i)P(a_ib_it_i\lambda_i)

Do you agree that in this case, since all the variables after | are dependent on time and thus on each other, it is not valid to reduce this equation to
P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda_i)
As I said before, "Bell's theorem is about picking the optimum experimental situation for ruling out local hidden variables, we don't have to consider arbitrary variations in the experimental settings that we control." The time of the two measurements is one of those settings that we control. If we want to arrange things so that each experimenter makes their measurement at the same prearranged time on every trial, we're free to do so, in this case when summing over many trials we don't have to sum over variations in time--if we can rule out local hidden variables theories in this experiment, then that means local hidden variables theories can't account for all the physics of our universe, period. And even if the time is varied randomly (each experimenter has a randomized timer that tells them when to choose what axis to measure, for example), we should still be able to arrange things so there's no correlation between the time an experimenter makes the choice and what angle they choose. So in this case it would be valid to reduce your equation above to one where only \lambda is a function of time:

P(AB) = \sum_i \sum_j \sum_k \sum_l (Constant) P(A|a_i \lambda_k (t_l) ) P(B|b_j \lambda_k (t_l) ) P(\lambda_k (t_l) )

And in this case, remember my comments earlier about a time-varying \lambda. As I said, in local realism any correlation between physical variables in the region of the two spacelike-separated measurements--whether the physical variables are hidden variables associated with the particles, hidden variables associated with the measuring devices, or the actual observed choice of measurement settings--must be explained by a common inheritance from the overlap of the past light cone of the measurements. As long as we assume no correlation between the experimenter's choices about what measurement settings to use and the physical variables inherited from this overlap region that explain correlations in hidden variables and outcomes at the two measurement-events, then it must be true that on every trial, the answers for each possible measurement choice were already predetermined in this overlap region, in order to explain how they always get opposite answers when they happen to choose the same measurement setting.
mn4j said:
In which case such a hidden variable theory was not considered by Bell. If you do, how can you say Bell's theorem disproves all hidden variable theories? If you think this is not a hidden variable theory, tell me why.
All we need is a single type of experiment that gives results that contradict local hidden variables theories, and we've shown that such theories can't explain the physics of our universe. If we want to design the experiment so that both measurements are always performed after the same time interval, or so that there is no correlation between the time the measurements are made and the choice of detector angles, we are free to do so. You can consider time variation in the hidden variables themselves since that's out of our control, but I gave an argument above as to why this doesn't make a difference.
mn4j said:
Consider a different hidden variable theory in which the material of which the magnet is made is a deterministic learner. By this I mean that, on interacting with an electron from the i-th measurement event, the material updates its state based on the value of the hidden variable of the electron and its own hidden variable value. In other words, there is some memory effect left over from the interaction. Then when the electron from the (i+1)-th event arrives, the same process repeats over and over.

Do you now see that in such a case, a_i is not independent of a_{i+1}, and therefore Bell's factorisation P(AB) = \sum_i P(A|a_i)P(B|b_i)P(\lambda_i) is no longer valid? Do you agree with this? If you think this is not a hidden variable theory, explain why.
Again, a_i only refers to the single visible aspect of the device which we vary, not to other hidden aspects of the device which can be included in \lambda.

To make this more concrete, I think it would really help if you'd address the example of the scratch-off lotto cards I gave in post #3. You are free to imagine that instead of a static fruit printed underneath the scratch-off material on each box, behind the scratch off material is a screen connected to a computer in the card which can vary what fruit will be revealed depending on when an experimenter scratches a box. You can also imagine that the experimenter is using a coin to scratch one of the boxes and reveal the fruit, and that the coin contains all sorts of complicated internal hidden variables that can be in communication with the card it's scratching (including some kind of 'learning material' which remembers which boxes have been scratched in the past and communicates this to the computer in the card). None of this would change the basic fact that if Alice and Bob always get opposite fruits on trials where they pick the same box to scratch, then under a local hidden variables theory where the source creating the cards has no advanced knowledge of what choices they'll make, it should be absolutely impossible for them to get opposite fruits less than 1/3 of the time when they pick different boxes to scratch. Do you disagree?
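To spell out that 1/3 bound, here is a brute-force sketch of the lotto-card setup from post #3 (the fruit names and encoding are just my own illustration):

Code:
from itertools import product

# Each card's three boxes have predetermined fruits, with Bob's card always the exact
# opposite of Alice's, so they get opposite fruits whenever they scratch the SAME box.
# Enumerate every possible predetermined card and every pair of DIFFERENT box choices.
boxes = [0, 1, 2]
worst = 1.0
for alice_card in product(["cherry", "lemon"], repeat=3):
    bob_card = tuple("lemon" if f == "cherry" else "cherry" for f in alice_card)
    diff_pairs = [(i, j) for i in boxes for j in boxes if i != j]
    opposite = sum(alice_card[i] != bob_card[j] for i, j in diff_pairs)
    worst = min(worst, opposite / len(diff_pairs))
print("minimum fraction of opposite fruits on different boxes:", worst)  # 1/3

Since the cards can vary from trial to trial, the overall fraction is a weighted average of per-card fractions, each of which is at least 1/3, so it can never drop below 1/3 as long as the card-maker doesn't know in advance which boxes will be scratched.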
mn4j said:
As you hopefully can see now, Bell's inequalities are just mathematical theorems, whose results are a consequence of the specific assumptions imposed, outside of which they will NOT be valid. Bell's theorem has no experimental basis and has NEVER been proven experimentally! All known experiments violate Bell's theorem! Quantum Mechanics violates Bell's theorem! Think about that for a moment.
You're getting the terminology confused here--QM violates the Bell inequalities, but Bell's theorem is essentially the statement that "in any universe where the laws of physics obey local realism (along with the no-conspiracy assumption), no experiment should violate the Bell inequalities". So, if Bell's theorem is valid, then experimental violations of Bell inequalities just shows that we do not live in a universe where the laws of physics obey local realism (along with the no-conspiracy assumption).
 
  • #57


JesseM said:
it's assumed we set things up so the experimenters make each combination of measurements equally frequently, so P(a1b1)=P(a2b1)=P(a3b1) etc., all have the same probability 1/N where N is the number of possible combinations.
Do you agree that to be consistent, you MUST include the possibility that magnets are also governed by local hidden variables so that a_i and b_i represent not only the subset of settings that the experimenter freely chose, but the COMPLETE state of the magnet at the time of the measurement?

I already gave you the example of the measuring device being like a pendulum hidden in a black box, where the experimenter freely changes the length of the string but has no other control over the inner workings of the box. I also showed you how in fact this is a possible scenario for a local-hidden-variable governed Stern-Gerlach magnet where, even though the experimenter can freely choose the angle, they have no control over the harmonic motion of the individual particles making up the magnet. I need a simple yes or no from you: do you think this is a possible local-hidden-variable description of the behaviour of the magnet?

Yes. Of course with the independence of a-measurements from b-measurements, this is a situation that we arrange just by telling the experimenters to choose randomly on each trial (perhaps each uses a separate random-number generator, or each rolls a separate die or something), it isn't an assumption about the way the laws of physics work.

Are you aware that any two objects exhibiting harmonic motion are correlated, by virtue of circular symmetry, irrespective of differences of frequency or phase, and that such correlation is not necessarily due to spooky action at a distance? If you disagree, consider two harmonic oscillators which obey the following wave equation,

y(t) = A \sin(\omega t + \theta)
Pick any two combinations (1,2) of (A, \omega and \theta) and plot y1 vs y2 for the same t for a given time range and see if you change your mind.
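For anyone who wants to actually try this, here is a small Python sketch of the exercise (the parameter values are arbitrary; whether the result supports the claim is of course the point under discussion):

Code:
import math

def y(t, A, omega, theta):
    return A * math.sin(omega * t + theta)

# Tabulate two harmonic signals over a time range and compute their sample correlation.
ts = [i * 0.01 for i in range(10_000)]
y1 = [y(t, A=1.0, omega=2.0, theta=0.0) for t in ts]
y2 = [y(t, A=0.5, omega=3.0, theta=1.0) for t in ts]

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v)) / n
    su = math.sqrt(sum((p - mu) ** 2 for p in u) / n)
    sv = math.sqrt(sum((q - mv) ** 2 for q in v) / n)
    return cov / (su * sv)

print("sample correlation of y1 and y2:", corr(y1, y2))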

The a's and b's represent choices made by the experimenters, unless you think it would be impossible to set things up so that their choices are uncorrelated with one another, I don't see the relevance here. Remember, Bell's theorem is about picking the optimum experimental situation for ruling out local hidden variables, we don't have to consider arbitrary variations in the experimental settings that we control.

Do you believe that the experimenters can control the harmonic behaviour of the atoms and subatomic particles within their magnets? If you don't then you must agree as I said above that a_i and b_i MUST represent not only the subset of settings that the experimenter freely chose, but the COMPLETE state of the magnet at the time of the measurement, including all local-hidden variables of the magnets. For the two oscillations which you plotted above and saw that they correlated, can you explain how it is possible to design an experiment in which such correlation will not be observed, without using any information about the HIDDEN behaviour?

We do have to consider variations in the nature of the hidden variables associated with the particles, since we don't control those--so, it would be appropriate to imagine if the hidden variables associated with each particle might vary over time. But remember that according to QM, if both experimenters measure along the same axis they'll always get opposite spins (or the same spins, depending on what particles are used and how they are entangled), even if they measure at different times.
This is circular reasoning. Bell did not use QM to derive his inequalities. So what QM predicts should happen is irrelevant to the derivation of Bell's inequalities.

And in a local hidden variables theory we are assuming that there is no physical "communication" between the particles once they are separated, any correlation in their measured behavior must be due to local physical variables--which could be time-varying functions--that were "assigned" to each one in a correlated way when the particles were at a common location.

Bell believed (and apparently you do too), that the only possible way to have any correlation between
a_i and b_i is by psychokinesis (spooky action at a distance). I have just given you above a situation in which there can be correlation between any two harmonic oscillators without psychokinesis, and if you are consistent in not only assigning local-hidden variables to the particles but also to the measuring devices, and the local variables can exhibit harmonic time-dependent motion, there will be a correlation without any psychokinesis.

So, if we make the assumption that there's no correlation between the hidden variable functions assigned to each particle when they were in the same location and the experimenters' later choices about how/when to measure them, the only way to explain this perfect correlation when they are measured on the same axis (regardless of when the measurements are made) is if the hidden variables predetermine a single answer each particle will give to being measured on any given axis, and there's no time variation in this answer (though there could be time variation in other aspects of the hidden variables as long as they don't change what answer a given particle would give when measured on a particular axis at different times). Do you agree?
No! I disagree, because the assumption of no correlation excludes other valid local-hidden variable theories explained above, and if indeed this was the assumption Bell made, his theorem is only valid within the confines of the assumption.

Yes, I agree--the proof covers every possible hidden variable theory that meets the stated conditions--i.e. locality (there are nonlocal hidden variables theories which Bell's theorem doesn't apply to), realism, and the assumption about a lack of "conspiracy" in the initial conditions.
I have already shown you above that there are hidden variables which meet the criteria of locality, realism and lack of conspiracy which Bell did not consider. In other words, those are not the only conditions Bell imposed. He also implicitly left out time-varying hidden variables of the type I've mentioned.
I don't understand how it would be relevant if the electrons in the magnet are exhibiting harmonic motion.
It is relevant because any two harmonic oscillators are correlated as demonstrated above. Therefore a_i and b_i understood as complete representations of the local state of the measuring stations are not logically independent.

The different "settings" like a1 or a2 don't contain information about all the physical details of the measuring device, they only refer to the single visible aspect of the measurement that's being varied
Why should it matter, if some of these settings are part of the natural dynamics of the measuring device? Why is it inappropriate to also describe the electrons in the devices with local hidden variables in addition to the 'settings'?


You can include any hidden variables associated with the measuring device in \lambda if you like
No. It has to be associated with a_i and b_i, not \lambda, because \lambda represents the hidden variable shared between the particles, and to avoid conspiracy, those variables have to be separate from those of the measuring devices.

it doesn't only have to refer to hidden variables associated with the particles being measured. All that matters is that in local realism, any correlation between physical variables (hidden or otherwise) in the local neighborhood
Wrong. Then it would be a global variable not a local one. Read Bell's article. Global variables don't come in at all. It is very easy to explain spooky action at a distance using global variables!


So in this case it would be valid to reduce your equation above to one where only \lambda is a function of time.


P(AB) = \sum_i \sum_j \sum_k \sum_l (Constant) P(A|a_i \lambda_k (t_l) ) P(B|b_j \lambda_k (t_l) ) P(\lambda_k (t_l) )
No! Give me a good reason why each entity should not get its own local variables, with the only variables in common being the ones shared by the particles from their source.

..in order to explain how they always get opposite answers when they happen to choose the same measurement setting.
Again, Bell did not use QM to derive the inequalities, so this statement is completely out of place. The result in one orientation says nothing about the mechanism by which the results are obtained!

Again, a_i only refers to the single visible aspect of the device which we vary, not to other hidden aspects of the device which can be included in \lambda.
Give me a good reason why it should not describe the complete state of the measuring device, just like in any real experiment which will ever be performed?

You're getting the terminology confused here--QM violates the Bell inequalities, but Bell's theorem is essentially the statement that "in any universe where the laws of physics obey local realism (along with the no-conspiracy assumption), no experiment should violate the Bell inequalities". So, if Bell's theorem is valid, then experimental violations of Bell inequalities just shows that we do not live in a universe where the laws of physics obey local realism (along with the no-conspiracy assumption).
No, I'm not. Both Bell's theorem and Bell's inequality are only valid within the narrow set of conditions he imposed while deriving Bell's inequalities. Can you point me to a single experiment that confirms Bell's inequalities? If you can't, then how can you claim that they have been validated? If Bell's inequalities have never been validated experimentally, how can you claim that Bell's theorem, which is based on the inequalities, has been validated?

The argument is like saying:
All real spiders must have 6 legs; any spiders with more than 6 legs are not real. Then, when somebody finds a spider with 8 legs, instead of evaluating the first premise, you conclude that the 8-legged spider is not real.
 
  • #58


mn4j said:
Do you agree that to be consistent, you MUST include the possibility that magnets are also governed by local hidden variables so that a_i and b_i represent not only the subset of settings that the experimenter freely chose, but the COMPLETE state of the magnet at the time of the measurement?
No, as I said the different a's and b's are defined to simply represent the distinct orientations of the spin-measuring device; if you think there are other properties of the measuring devices which vary on different trials and are relevant to determining the measurement outcome, these properties should be included in the \lambda's.
mn4j said:
I already gave you the example of the measuring device being like a pendulum hidden in a black box, where the experimenter freely changes the length of the string but has no other control over the inner workings of the box. I also showed you how in fact this is a possible scenario for a local-hidden-variable governed Stern-Gerlach magnet where, even though the experimenter can freely choose the angle, they have no control over the harmonic motion of the individual particles making up the magnet. I need a simple yes or no from you: do you think this is a possible local-hidden-variable description of the behaviour of the magnet?
Yes, I already said it was possible, and I already said it should be included in \lambda; the a's and b's are defined to refer just to the single property of the measuring device that the experimenters vary.
mn4j said:
Are you aware that any two objects exhibiting harmonic motion are correlated, by virtue of circular symmetry, irrespective of differences of frequency or phase, and that such correlation is not necessarily due to spooky action at a distance? If you disagree, consider two harmonic oscillators which obey the following wave equation,

y(t) = A \sin(\omega t + \theta)
Pick any two combinations (1,2) of (A, \omega and \theta) and plot y1 vs y2 for the same t for a given time range and see if you change your mind.
Is this equation derived from Newtonian equations where it's assumed that forces are transmitted instantaneously? If so it's not relevant to the question of how things work in a local realist universe with a speed-of-light limit on physical effects. Maybe an equation like that could also apply to something like charged particles being bobbed along by an electromagnetic plane wave, I don't know (though in this case the charged particles would not be influencing one another, they'd both just be passively influenced by electromagnetic waves which must have been generated by other charges in the overlap of their past light cones). It should be obvious that in a relativistic universe, any correlation between events with a spacelike separation must be explainable in terms of other events in the overlap of their past light cones. If you disagree, please give a detailed physical model of a situation in electromagnetism (the only non-quantum relativistic theory of forces I know of) where this would not be true. Or just give a simpler situation compatible with relativity, like two balls being drawn from an urn and shipped off in boxes at sublight speeds to Alice and Bob, where it wouldn't be true.
mn4j said:
Do you believe that the experimenters can control the harmonic behaviour of the atoms and subatomic particles within their magnets? If you don't then you must agree as I said above that a_i and b_i MUST represent not only the subset of settings that the experimenter freely chose, but the COMPLETE state of the magnet at the time of the measurement, including all local-hidden variables of the magnets.
Why "must" it? Again, the a's and b's are defined to mean just the settings that the experimenters control. If there are other physical variables associated with the measuring devices, and we choose to define \lambda to include these variables as well as variables associated with the particles being measured, what problem do you see with this? Can't we define symbols to mean whatever we want them to, and isn't it still true that in this case the combination of the a-setting and the \lambda value will determine the probability of the physical outcome A?
mn4j said:
For the two oscillations which you plotted above and saw that they correlated, can you explain how it is possible to design an experiment in which such correlation will not be observed, without using any information about the HIDDEN behaviour?
As always, the "information about the hidden behavior" is assumed to be included in the value of \lambda. \lambda can be understood to give the value of all local physical variables in the immediate spacetime region of one measurement which are relevant to determining the outcome of that measurement.
JesseM said:
We do have to consider variations in the nature of the hidden variables associated with the particles, since we don't control those--so, it would be appropriate to imagine if the hidden variables associated with each particle might vary over time. But remember that according to QM, if both experimenters measure along the same axis they'll always get opposite spins (or the same spins, depending on what particles are used and how they are entangled), even if they measure at different times.
mn4j said:
This is circular reasoning. Bell did not use QM to derive his inequalities. So what QM predicts should happen is irrelevant to the derivation of Bell's inequalities.
No, but the fact that we always see opposite results on trials where the settings are the same is an observed experimental fact, and a variant of Bell's theorem can be used to show that if we observe this experimental fact and if the experiment is set up in the way Bell describes (with each experimenter making a random choice among three distinct detector angles) and if the universe is a local realist one (with the no-conspiracy assumption), then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings. Since this Bell inequality is violated in real life, that means at least one of the "if" statements must fail to be true as well, and since we can verify directly that the first two were true, it must be the third one about the universe being local realist that's false (see my next post for an elaboration of this logic).
mn4j said:
Bell believed (and apparently you do too), that the only possible way to have any correlation between
a_i and b_i is by psychokinesis (spooky action at a distance).
I assume you are still incorrectly defining the a's and b's to refer to all physical aspects of the measuring devices, and that if you used the correct definitions, what you really mean here is that Bell believed any correlation between the values of variables in \lambda associated with one spacetime region and the values of variables in \lambda associated with another spacetime region at a spacelike separation from the first would be spooky action at a distance. But of course this isn't true either; the whole point of a hidden variables explanation for correlations in measurement outcomes is that there can be correlations in the values of local hidden variables in different regions with a spacelike separation, as long as these correlations were determined by events in the overlap of the past light cones of the two regions. I've repeated this over and over so there's really no excuse for your continued mischaracterization of the argument.
mn4j said:
I have just given you above a situation in which there can be correlation between any two harmonic oscillators without psychokinesis, and if you are consistent in not only assigning local-hidden variables to the particles but also to the measuring devices, and the local variables can exhibit harmonic time-dependent motion, there will be a correlation without any psychokinesis.
And as I said, in any relativistic model of a harmonic oscillator (which I don't think your equation is, though as I said it might be possible to find a situation in electromagnetism where the equation applies), correlations in the values of physical variables in different regions with a spacelike separation would be explained by physical causes in the overlap of the past light cones of these two regions.
JesseM said:
So, if we make the assumption that there's no correlation between the hidden variable functions assigned to each particle when they were in the same location and the experimenters' later choices about how/when to measure them, the only way to explain this perfect correlation when they are measured on the same axis (regardless of when the measurements are made) is if the hidden variables predetermine a single answer each particle will give to being measured on any given axis, and there's no time variation in this answer (though there could be time variation in other aspects of the hidden variables as long as they don't change what answer a given particle would give when measured on a particular axis at different times). Do you agree?
mn4j said:
No! I disagree, because the assumption of no correlation excludes other valid local hidden-variable theories explained above, and if indeed this was the assumption Bell made, his theorem is only valid within the confines of that assumption.
Well, you're simply confused about the physical meaning of a "local realist" universe then. The statement I give above is a general truth about perfect correlations in regions with a spacelike separation in any universe with local realist laws--the only way to explain perfect correlations between events with a spacelike separation is to assume that the events were totally predetermined by other events in the overlap of the past light cones of the two regions. Again, if you disagree, please think up a situation compatible with relativistic physics (no instantaneous Newtonian forces) where this wouldn't be true.
JesseM said:
The different "settings" like a1 or a2 don't contain information about all the physical details of the measuring device; they only refer to the single visible aspect of the measurement that's being varied
mn4j said:
Why should it matter, if some of these settings are part of the natural dynamics of the measuring device? Why is it inappropriate to also describe the electrons in the devices with local hidden variables in addition to the 'settings'?
Where did you get the idea I said it was inappropriate? I explicitly said you could include these local hidden variables, but they should be included in \lambda, not in the a's and b's.
JesseM said:
You can include any hidden variables associated with the measuring device in \lambda if you like
mn4j said:
No. It has to be associated with a_i and b_i, not \lambda, because \lambda represents the hidden variables shared between the particles, and to avoid conspiracy, those variables have to be separate from those of the measuring devices.
No, there is no rule that \lambda cannot include hidden variables not directly associated with the particles; it can include any physical variables that are local to the spacetime regions of the two measurements. You misunderstand the "no-conspiracy" condition if you think there can't be a correlation between the values of hidden variables associated with the particle and hidden variables associated with the measuring device--it's only a correlation between such hidden variables and the experimenter's free choice of how to set the angle that would be called a "conspiracy".
JesseM said:
it doesn't only have to refer to hidden variables associated with the particles being measured. All that matters is that in local realism, any correlation between physical variables (hidden or otherwise) in the local neighborhood
mn4j said:
Wrong. Then it would be a global variable, not a local one. Read Bell's article. Global variables don't come in at all. It is very easy to explain spooky action at a distance using global variables!
Of course it wouldn't be global, I just said I was talking about variables in the local neighborhood of each measurement. If it makes it more clear, you can use the symbol \lambda to refer to the value of local physical variables in the spacetime region of one experimenter's measurement, and some other symbol like \phi to refer to local physical variables in the spacetime region of the other experimenter's measurement. In this case we can say that if the experimenters always get opposite outcomes when they both pick identical detector angles (call these identical settings a1 and b1), then it must be true that the result for experimenter #1 is fully determined by the combination of a1 and \lambda, while the result for experimenter #2 is fully determined by the combination of b1 and \phi, and that events in the overlap of the past light cones of these two regions cause \lambda and \phi to be correlated in such a way that the predetermined outcome given a1 + \lambda is guaranteed to be the opposite of the predetermined outcome given b1 + \phi.
mn4j said:
No! Give me a good reason why each entity should not get its own local variables, with the only variables in common being the ones shared by the particles from their source?
Each spacetime region can get its own separate local variables as above if you want to write it that way. But once the particle is in the same region as the measuring-device, there's no reason it couldn't have a physical influence on the hidden variables associated with that measuring-device. Of course it would still be true that any correlations in the hidden variables associated with the two measuring-devices in different regions would still be explained by causal influences from the overlap of the past light cones of the regions (in this case, the causal influences would be the hidden variables carried by the two particles which influenced the hidden variables of their respective measuring devices, with the value of each particle's hidden variables having been determined when they both came from the source, an event which was indeed in the overlap of the two past light cones).
mn4j said:
Again, Bell did not use QM to derive the inequalities, so this statement is completely out of place. The result in one orientation says nothing about the mechanism by which the results are obtained!
See above: the fact that it's an experimental observation that you get opposite results on the same setting is part of the derivation of the conclusion that in a local realist universe, you should get opposite results at least 1/3 of the time when the experimenters choose different settings. Of course, this particular inequality was not actually the one Bell derived in his original paper, though it is a valid Bell inequality--for the inequality he derived in the original paper, see post #8 of this thread, which I linked to back in post #3 here. However, that inequality also includes in its derivation the fact that a perfect correlation (or anticorrelation) is seen when the experimenters choose the same detector setting.
mn4j said:
Give me a good reason why it should not describe the complete state of the measuring device, just like in any real experiment which will ever be performed?
There is obviously no cosmic force that compels you to assign particular variables a particular physical meaning. Symbols can mean whatever we define them to mean. But by the same token, there is obviously nothing stopping us from using the a's and b's to refer only to the settings chosen by the experimenters, and to include any other physical variables associated with the detectors in the variable representing all the physical hidden variables \lambda (and as I said you could make a minor tweak to the proof to have two different variables for the two distinct spacetime regions if you prefer). Bell's proof definitely depends on the assumption that the a's and b's refer only to the choices made by the experimenters, so if you want to follow Bell's proof you should adopt this convention, which is as good as any other convention.

(continued in next post)
 
  • #59


(continued from previous post)
mn4j said:
No. I'm not. Both Bell's theorem and Bell's inequality are only valid within the narrow set of conditions he imposed while deriving Bell's inequalities. Can you point me to a single experiment that confirms Bell's inequalities? If you can't, then how can you claim that it has been validated? If Bell's inequalities have never been validated experimentally, how can you claim that Bell's theorem, which is based on the inequalities, has been validated?

The argument is like saying:
All real spiders must have 6 legs. Any spiders with more than 6 legs are not real. And then when somebody finds a spider with 8 legs, instead of re-evaluating the first premise, you instead conclude that the 8-legged spider is not real.
You really have the logic totally confused. Your analogy has nothing to do with deriving conclusions from theoretical assumptions about the laws of physics: saying "in a universe where the laws of physics take form X, under experimental conditions Y we should be guaranteed to see results Z" is a theoretical deduction, nothing like the arbitrary definition "all real spiders must have 6 legs".

Your comment "Can you point me to a single experiment that confirms Bell's inequalities" also shows confusion about the logic of what Bell was trying to do--the whole point is that Bell's inequalities are violated, thus demonstrating that the assumption of local realism (along with the no-conspiracy assumption) must be false! Are you familiar with the idea of "the contrapositive" or "contraposition" in logic? (see here and here). The idea here is that if you can prove that A logically implies B, that is logically equivalent to the statement that if B is false, then logically A must be false as well. As it says at the bottom of the second article from wikipedia, this can be a good way of doing proofs by contradiction:
Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself, it can be a powerful tool for proving mathematical theorems via proof by contradiction, as in the proof of the irrationality of the square root of 2. By the definition of a rational number, the statement can be made that "If \sqrt{2} is rational, then it can be expressed as an irreducible fraction". This statement is true because it is a restatement of a true definition. The contrapositive of this statement is "If \sqrt{2} cannot be expressed as an irreducible fraction, then it is not rational". This contrapositive, like the original statement, is also true. Therefore, if it can be proven that \sqrt{2} cannot be expressed as an irreducible fraction, then it must be the case that \sqrt{2} is not a rational number.
The logic of Bell's theorem, and how when combined with the observed confirmation of quantum predictions it can be used to show that QM is incompatible with local hidden variables, is essentially the same. Here, the A in "A implies B" can be divided into three conditions:

A1: A universe where the laws of physics respect local realism with the no-conspiracy assumption.
A2: An experimental setup where two experimenters make measurements at a spacelike separation, and each is choosing from three possible detector settings which can be labeled a1-a3 for the first experimenter and b1-b3 for the second. The experimenters are making free random choices of which setting to use on each trial.
A3: It is observed to be the case that whenever the experimenters both choose the same setting, they always get opposite results.

Now for B, one version of Bell's theorem proves that these conditions lead logically to the following conclusion:

B: On the subset of trials where experimenters choose different settings, the probability that they get opposite results should be greater than or equal to 1/3.

Now, this experiment can be done, and we can verify that conditions A2 and A3 both apply, yet B is false. So using contraposition, we know some part of A must be false, and since A2 and A3 can be directly verified to be true, the false part must be the theoretical assumption A1.
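To make the step from A1-A3 to B concrete, here is a minimal sketch in Python (my own illustration with made-up variable names, not anything from Bell's paper): if A1-A3 force each particle pair to carry predetermined answers for all three settings, with particle 2's answers always opposite to particle 1's, then simply enumerating every possible assignment shows the fraction of different-setting pairs giving opposite results can never fall below 1/3.

Code:
from itertools import product, permutations

# Particle 1 carries predetermined answers (+1 or -1) for settings 1, 2, 3;
# particle 2 always answers the opposite on the same setting (condition A3).
fractions = []
for answers1 in product([+1, -1], repeat=3):
    answers2 = tuple(-x for x in answers1)       # perfect anticorrelation on matching settings
    pairs = list(permutations(range(3), 2))      # the 6 ways to pick two *different* settings
    opposite = sum(answers1[i] != answers2[j] for i, j in pairs)
    fractions.append(opposite / len(pairs))

print(min(fractions))  # 0.333..., i.e. the fraction is either 1/3 or 1 for every assignment

Since any local hidden-variable theory just puts some probability distribution over these eight assignments, averaging over them can never push the probability of opposite results below the worst case of 1/3--which is exactly the statement B that the experiments violate.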
 
Last edited:
  • #60


JesseM said:
Yes, I already said it was possible, and I already said it should be included in \lambda; the a's and b's are defined to refer just to the single property of the measuring device that the experimenters vary.
You still have not said why it should not be separate. It would seem that if Bell's proof were robust, it should be able to accommodate hidden variables at the stations in addition to source parameters. It should tell you a lot that the hidden variables must be defined a specific way in order for the proof to work. Since you are the one claiming that Bell's proof eliminates all possible local hidden-variable theories, the onus is on you to explain why the stations should not be able to get separate local hidden variables. Probably because you already know that Bell's theorem cannot be formulated in that context. Therefore, you cannot claim with a straight face that all real local hidden-variable theories are ruled out.

Is this equation derived from Newtonian equations where it's assumed that forces are transmitted instantaneously?
You don't recognize a simple wave equation? This has nothing to do with physics; it is mathematics. If you did the plot as I explained, you will see that any two sinusoidal waves are correlated irrespective of phase, amplitude, and frequency. In other words, there is no way you can design an experiment that will eliminate the correlation if you do not know the specific parameters for each wave.

If so it's not relevant to the question of how things work in a local realist universe with a speed-of-light limit on physical effects.
It is relevant. Especially since we know about wave-particle duality. It should tell you that we do not need psychokinesis to explain correlations between distant objects.

It should be obvious that in a relativistic universe, any correlation between events with a spacelike separation must be explainable in terms of other events in the overlap of their past light cones. If you disagree, please give a detailed physical model of a situation in electromagnetism (the only non-quantum relativistic theory of forces I know of) where this would not be true.
You obviously have not thought it through well enough. Two objects can be correlated because they are governed by the same physical laws, whether or not they share a common past. This is obvious. Pendulum clocks on opposite sides of the globe, made by different local manufacturers, are correlated by virtue of the fact that they exhibit harmonic motion. What do you claim is the common event in their past that is the source of their correlations?

Why "must" it? Again, the a's and b's are defined to mean just the settings that the experimenters control. Can't we define symbols to mean whatever we want them to, and isn't it still true that in this case the combination of the a-setting and the \lambda value will determine the probability of the physical outcome A?

It must, because you claim that Bell's theorem eliminates ALL hidden-variable theories. It is telling that the terms were so narrowly defined that other possible local hidden-variable theories do not fit.
No, you can't define the terms to mean whatever you want them to. You have to define them so that they include all possible hidden-variable theories. Therefore the conclusion of Bell's theorem is handicapped.

No, but the fact that we always see opposite results on trials where the settings are the same is an observed experimental fact, and a variant of Bell's theorem can be used to show that if we observe this experimental fact, and if the experiment is set up in the way Bell describes (with each experimenter making a random choice among three distinct detector angles), and if the universe is a local realist one (with the no-conspiracy assumption), then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings. Since this Bell inequality is violated in real life, at least one of the "if" statements must be false, and since we can verify directly that the first two were true, it must be the third one, about the universe being local realist, that's false (see my next post for an elaboration of this logic).

You forgot a very important "if" that is the very topic of this thread, i.e.:

  1. "if we observe this experimental fact"
  2. "if a local realist universe behaves only as described by Bell's assumptions"
  3. "if the universe is a local realist one (with the no-conspiracy assumption)"
  4. "if the experiment is set up in the way Bell describes"
  5. then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings.
As you can see, violation of (5) can imply that (2), (3) or (4), or combinations of them, are wrong. For some probably religious reason, proponents of Bell's theorem jump right to (3) and claim that it must be (3) that is wrong.

I have already given you two examples of hidden-variable theories that point to the falsity of (2). In fact, (2) is the proverbial "a spider must have 6 legs". Do you deny that the validity of Bell's theorem rests as much on (2) as on (3) or (4)? It remains to be seen whether any experiment has ever been performed which exactly reproduced Bell's assumptions. But that is a different topic.

For other more rigorous proofs why (2) is wrong, see:
  • Brans, C.H. (1988). "Bell's theorem does not eliminate fully causal hidden variables." International Journal of Theoretical Physics, 27(2), pp. 219-226
  • Joy Christian, "Can Bell's Prescription for Physical Reality Be Considered Complete?"
    http://arxiv.org/pdf/0806.3078v1
  • See Hess, K., and Philipp, W. (2000), PNAS, December 4, 2000, vol. 98, no. 25, pp. 14228-14233, for a proof that Bell's theorem cannot be derived for time-like correlated parameters, and that such variables produce the QM result.
  • See also Hess, K., and Philipp, W. (2004), "Breakdown of Bell's theorem for certain objective local parameter spaces,"
    PNAS, February 17, 2004, vol. 101, no. 7, pp. 1799-1805
Well, you're simply confused about the physical meaning of a "local realist" universe then. The statement I give above is a general truth about perfect correlations in regions with a spacelike separation in any universe with local realist laws--the only way to explain perfect correlations between events with a spacelike separation is to assume that the events were totally predetermined by other events in the overlap of the past light cones of the two regions. Again, if you disagree, please think up a situation compatible with relativistic physics (no instantaneous Newtonian forces) where this wouldn't be true.
I suppose all those people cited above are also confused, as is Jaynes. Yet you have not shown me a single reason why my descriptions of the two scenarios in post #55 are not valid realist local hidden-variable theories. For some reason you ignored the second scenario completely and did not even bother to say whether a "deterministic learning machine" is local or not.

You can see the following articles, for proof that a local deterministic learning hidden variable model reproduces the quantum result:

  • Raedt, K.D., et al.,
    "A local realist model for correlations of the singlet state,"
    The European Physical Journal B - Condensed Matter and Complex Systems, Volume 53, Number 2, September 2006, pp. 139-142
  • Raedt, H.D., et al.,
    "Event-Based Computer Simulation Model of Aspect-Type Experiments Strictly Satisfying Einstein's Locality Conditions,"
    J. Phys. Soc. Jpn. 76 (2007) 104005
  • Peter Morgan,
    "Violation of Bell inequalities through the coincidence-time loophole,"
    http://arxiv.org/pdf/0801.1776
  • More about the coincidence-time loophole here:
    Larsson, J.A., Gill, R.D., Europhys. Lett. 67, 707 (2004)
 
Last edited:
  • #61


Hi mn4j, apologies for not replying to your last post before now, I started it a while ago but realized it would require a somewhat involved response, so I kept putting off writing it for weeks. Anyway, I've finally finished it up:
mn4j said:
You still have not said why it should not be separate. It would seem that if Bell's proof were robust, it should be able to accommodate hidden variables at the stations in addition to source parameters.
It is able to do so. I already said "the hidden variables can be included in \lambda"--did you miss that, or are you not understanding it somehow?
mn4j said:
It should tell you a lot that the hidden variables must be defined a specific way in order for the proof to work.
Physically the hidden variables can be absolutely anything, but for the proof to work you do need to assign them separate variables from the experimental choices. This is like just about any proof where you can't redefine terms willy-nilly and expect it to still make sense. If the proof is mathematically and logically valid, then you have to accept that the conclusions follow from the premises; you can't somehow object to it on the basis that you wish the symbols meant different things than what they are defined to mean.
mn4j said:
Since you are the one claiming that Bell's proof eliminates all possible local hidden-variable theories, the onus is on you to explain why the stations should not be able to get separate local hidden variables.
Do you understand the difference between "the stations should not be able to get separate local hidden variables" and "there can be hidden variables associated with the stations, but the symbols used to refer to them should be separate from the symbols used to refer to the experimenters' choice of measurements angles"? Remember, each value of \lambda is supposed to stand for an array of values for all the hidden variables--we are supposed to have some function that maps values of \lambda to a (possibly very long) list of values for all the different physical variables which may be in play, like "\lambda=3.8 corresponds to hidden variable #1 having value x=7.2 nanometers, hidden variable #2 having value 0.03 meters/second, hidden variable #3 having value 34 cycles/second, ... , hidden variable #17,062,948,811 having value 17 m/s^2", something along those lines. There's no reason at all why the long list of values included in a given value of \lambda can't be values of hidden variables associated with the measuring-device.
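Purely as an illustration of that bookkeeping (the field and function names below are my own invention, not Bell's notation): \lambda is just a container for every relevant local physical quantity, including hidden variables of the detector itself, while the setting a enters as a separate argument.

Code:
from dataclasses import dataclass

@dataclass
class Lam:
    # Complete set of local physical variables in the neighborhood of ONE measurement.
    # Illustrative fields only; a real assignment could run to billions of entries.
    particle_hidden: tuple       # hidden variables carried by the particle
    detector_hidden: tuple       # hidden variables of the measuring device itself
    other_local: tuple = ()      # anything else physically present in that region

def outcome(a_setting: int, lam: Lam) -> int:
    # Toy stand-in for "the result is fixed by the setting plus lambda": the
    # experimenter's choice enters separately, but the detector's own hidden
    # variables are free to influence the result through lam.
    return +1 if hash((a_setting, lam.particle_hidden, lam.detector_hidden)) % 2 == 0 else -1

Nothing in this labeling rules out any physics; it just keeps the experimenter's choice and the rest of the local state in separate slots, which is all the proof needs.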
mn4j said:
You don't recognize a simple wave equation?
Of course I recognize a wave equation--you weren't tipped off by the fact that I immediately suggested the idea of particles being bobbed along by an electromagnetic plane wave? My point was that I wanted to see a well-defined physical scenario, compatible with local realism, in which the equation would actually apply to physical elements with a spacelike separation, but no common cause in their mutual past light cone to explain why they were both obeying this equation (for example, in the example of two particles at different locations being bobbed up and down by an electromagnetic plane wave, the oscillations of the charges which generated this wave would lie in the overlap of the past light cones). However, I've since realized that they might both be synchronized just because of coincidental similarity in their initial conditions, so I've modified my comments about the relevance to Bell's theorem accordingly--see below.
mn4j said:
It is relevant. Especially since we know about wave-particle duality. It should tell you that we do not need psychokinesis to explain correlations between distant objects.
Of course wave-particle duality is part of QM, and you can't treat it as a foregone conclusion that QM itself is compatible with local realism.
JesseM said:
It should be obvious that in a relativistic universe, any correlation between events with a spacelike separation must be explainable in terms of other events in the overlap of their past light cones. If you disagree, please give a detailed physical model of a situation in electromagnetism (the only non-quantum relativistic theory of forces I know of) where this would not be true.
mn4j said:
You obviously have not thought it through well enough. Two objects can be correlated because they are governed by the same physical laws, whether or not they share a common past. This is obvious.
If two experimenters at a spacelike separation happen to choose to do the same experiment, then since the same laws of physics govern them they'll get correlated results--but this is a correlation due to the coincidence of their happening to independently replicate the same experiment, not the type of correlation where seeing the results of both experimenters' measurements tells us something more about the system being measured than we'd learn from just looking at the results that either experimenter gets on their own. This does show that my statement above is too vague though, and needs modification. One way to sharpen things a little would be to specify we're talking about experiments where even with the same settings the experimenters can get different results on different trials, with the results being seemingly random and unpredictable; if we find that the results of the two experimenters are nevertheless consistently correlated, with a spacelike separation between pairs of measurements, this is at least strongly suggestive of the idea that each result was conditioned by events in the past light cones of the two measurements. But this is still not really satisfactory, because in principle there might actually be some hidden deterministic pattern behind the seemingly random results, and it might be that the two systems they were studying coincidentally had identical and synchronized deterministic patterns (for example, they might both be looking at a series of numbers generated by a pseudorandom deterministic computer program, with the programmers at different locations coincidentally having written exactly the same program without having been influenced to do so by a common cause in their mutual past light cone). So, back to the drawing board!

Let me try a different tack. Consider the claim I was making about correlations in a local realist universe earlier, which you were disputing for a while but then stopped after my post #51, so I'm not really sure if I managed to convince you with that post...here's the statement from post #51:
In a universe with local realist laws, the results of a physical experiment on any system are assumed to be determined by some set of variables specific to the region of spacetime where the experiment was performed. There can be a statistical correlation (logical dependence) between outcomes A and B of experiments performed at different locations in spacetime with a spacelike separation, but the only possible explanation for this correlation is that the variables associated with each system being measured were already correlated before the experiment was done ... Do you disagree? If so, try to think of a counterexample that we can be sure is possible in a local realist universe (no explicitly quantum examples).

If you don't disagree, then the point is that if the only reason for the correlation between A and B is that the local variables \lambda associated with system #1 are correlated with the local variables associated with system #2, then if you could somehow know the full set of variables \lambda associated with system #1, knowing the outcome B when system #2 is measured would tell you nothing additional about the likelihood of getting A when system #1 is measured. In other words, while P(A|B) may be different than P(B), P(A|B\lambda ) = P(A | \lambda ). If you disagree with this, then I think you just haven't thought through carefully enough what "local realist" means.
Note that I put some ellipses in the quote above, the statement I removed was "that the systems had 'inherited' correlated internal variables from some event or events in the overlap of their past light cones". I want to retract that part of the post since it does have some problems as you've pointed out, but I stand by the rest. The statement about "variables specific to the region of spacetime where the experiment was performed" could stand to be made a little more clear, though. To that end, I'd like to define the term "past light cone cross-section" (PLCCS for short), which stands for the idea of taking a spacelike cross-section through the past light cone of some point in spacetime M where a measurement is made; in SR this spacelike cross-section could just be the intersection of the past light cone with a surface of constant t in some inertial reference frame (which would be a 3D sphere containing all the events at that instant which can have a causal influence on M at a later time). Now, let \lambda stand for the complete set of values of all local physical variables, hidden or non-hidden, which lie within some particular PLCCS of M. Would you agree that in a local realist universe, if we want to know whether the measurement M yielded result A, and B represents some event at a spacelike separation from M, then although knowing B occurred may change our evaluation of the probability A occurred so that P(A|B) is not equal to P(A), if we know the full set of physical facts \lambda about a PLCCS of M, then knowing B can tell us nothing additional about the probability A occurred at M, so that P(A|\lambda) = P(A|\lambda B)?
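Here is a toy numerical check of that claim (my own construction with made-up probabilities, not part of the argument itself): a common source fixes a shared hidden variable, each outcome then depends only on its own local copy plus independent local noise, and you can see that conditioning on B shifts P(A) but leaves P(A|\lambda) untouched.

Code:
import random

random.seed(0)
N = 200_000
A_tot = B_tot = AB_tot = 0            # unconditional counts
n_lam = A_lam = B_lam = AB_lam = 0    # counts restricted to trials with lambda = 1

for _ in range(N):
    lam = random.choice([0, 1])       # shared hidden variable fixed at the common source
    # Each outcome depends only on its own local copy of lam plus independent local noise
    A = 1 if random.random() < (0.9 if lam else 0.2) else 0
    B = 1 if random.random() < (0.9 if lam else 0.2) else 0
    A_tot += A; B_tot += B; AB_tot += A * B
    if lam:
        n_lam += 1; A_lam += A; B_lam += B; AB_lam += A * B

print("P(A)          =", A_tot / N)        # about 0.55
print("P(A|B)        =", AB_tot / B_tot)   # about 0.77: B is informative while lambda is unknown
print("P(A|lambda)   =", A_lam / n_lam)    # about 0.90
print("P(A|lambda,B) =", AB_lam / B_lam)   # about 0.90 again: B adds nothing once lambda is known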

If so, consider two measurements of entangled particles which occur at spacelike-separated points M1 and M2 in spacetime. For each of these points, pick a PLCCS from a time which is prior to the measurements, and which is also prior to the moment that the experimenter chose (randomly) which of the three detector settings under his control to use (as before, this does not imply the experimenter has complete control over all physical variables associated with the detector). Assume also that we have picked the two PLCCS's in such a way that every event in the PLCCS of M1 lies at a spacelike separation from every event in the PLCCS of M2. Use the symbol \lambda_1 to label the complete set of physical variables in the PLCCS of M1, and the symbol \lambda_2 to label the complete set of physical variables in the PLCCS of M2. In this case, if we find that whenever the experimenters chose the same setting they always got the same results at M1 and M2, I'd assert that in a local realist universe this must mean the results each of them got on any such trial were already predetermined by \lambda_1 and \lambda_2; would you agree? The reasoning here is just that if there were any random factors between the PLCCS and the time of the measurement which were capable of affecting the outcome, then it could no longer be true that the two measurements would be guaranteed to give identical results on every trial.

Now, keep in mind that each PLCCS was chosen to be prior to the moment each experimenter chose what detector setting to use. So, if we assume that the experimenters' choices were uncorrelated with the values of physical variables \lambda_1 and \lambda_2, either because the choice involved genuine randomness (using the decay of a radioactive isotope and assuming this is a truly random process, for example), or because the choice involved "free will" (whatever that means), then if it's true that \lambda_1 and \lambda_2 predetermine the result on every trial where they happen to make the choice, in a local realist universe we must assume that on each trial \lambda_1 and \lambda_2 predetermine what the results would be for any of the three choices each experimenter can make, not just the result for the choice they do actually make on that trial (since the values of physical variables in the PLCCS cannot 'anticipate' which choice will be made at a later time), the assumption known as counterfactual definiteness. And if at the time of the PLCCS there was already a predetermined answer for the result of any of the three choices the experimenter could make, then if they always get the same results when they make the same choice, we must assume that on every trial the two PLCCSs had the same predetermined answers for all three results, which is sufficient to show that the Bell inequalities should be respected (see my post #3). It would be simplest to assume that the reason for this perfect matchup between the PLCCSs on every trial was that they had "inherited" the same predetermined answers from some events in the overlap of the past light cones of the two measurements, but this assumption is not strictly necessary.

The deterministic case

If the experimenters' choices are not assumed to be truly random or a product of free will, but instead are pseudorandom events that do follow in some deterministic (but probably chaotic) way from the complete set of physical variables in the PLCCS, then showing that the results for each possible measurement must be predetermined by the PLCCS is trickier. I think we can probably come up with some variant of the "no-conspiracy" assumption discussed earlier that applies in this case, though. To see why it would seem to require a strange "conspiracy" to explain the perfect correlations in a local realist universe without the assumption that there was a predetermined answer for each possible choice (i.e. without assuming counterfactual definiteness), let's imagine we are trying to perform a computer simulation to replicate the results of these experiments. Suppose we have two computers A and B which will simulate the results of each measurement, and a middle computer M which can send signals to A and B for a while but then is disconnected, leaving A and B isolated and unable to communicate at some time t, after which they simulate both an experimenter making a choice and the results of the measurement with the chosen detector setting. Here the state of the information in each computer at time t represents the complete set of physical variables in the PLCCS of the measurement, while the fact that M was able to send each computer signals prior to t represents the fact that the state of each PLCCS may be influenced by events in the overlap of the past light cone of the measurement events.

Also, assume that in order to simulate the seemingly random choices of the experimenters on each trial, the computer uses some complicated pseudorandom algorithm to determine their choice, using the complete set of information in the computer at time t as a seed for its pseudorandom number generator, so that even in a deterministic universe, everything in the past light cone of the choice has the potential to influence the choice. Finally, assume the initial conditions at A and B are not identical, so the two experimenters are not just perfect duplicates of one another. Then the question becomes: is there any way to design the programs so that the simulated experimenters always get the same outcome when they make the same choice about detector settings, but counterfactual definiteness does not apply, meaning that each computer didn't just have a preset answer for each detector setting at time t, but only a preset answer for the setting the simulated experimenter would, in fact, choose on that trial? Well, if the computer simulations are deterministic over multiple trials so we just have to load some initial conditions at the beginning and then let them run over as many trials as we want, rather than having to load new initial conditions for each trial, then in principle we could imagine some godlike intelligence looking through all possible initial conditions (probably a mind-bogglingly vast number, if N bits were required to describe the state of the simulation at any given moment there'd be 2^N possible initial conditions), and simply picking the very rare initial conditions where it happened to be true that whenever the two experimenters made the same choice, they always got the same results. Then if we run the simulation forward from those initial conditions, it will indeed be guaranteed with probability 1 that they'll get the same results whenever they make the same choice, without the simulation needing to have had predetermined answers for what they would have gotten on these trials if they had made a different choice. But this preselecting of the complete initial conditions, including all the elements of the initial conditions that might influence the experimenters' choices, is exactly the sort of "conspiracy" that the no-conspiracy assumption is supposed to rule out.

So, let's make some slightly different assumptions about the degree to which we can control the initial conditions. Let's say we do have complete control over the data that M sends to A and B on each trial, corresponding to the notion that we want to allow the source to attach hidden variables to the particles it sends to the experimenters in any fiendishly complicated way we can imagine. If you like, we are also free to assume we have complete control over any variables, hidden or otherwise, associated with the measuring-devices being simulated in the A and B computers initially at time t (after M has already sent its information to A and B but before the simulated experimenters have made their choice), to fit with your idea that hidden variables associated with the measuring device may be important too. But assume there are other aspects of the initial conditions at A and B that we don't control--perhaps we can only decide what the "macrostate" of the neighborhood of the two experimenters looks like, but the detailed "microstate" is chosen randomly, or perhaps we can decide the values of all non-hidden variables in their neighborhood but not the hidden ones (aside from the ones associated with the particles sent by the source and the measuring devices, as noted above). Since the pseudorandom algorithm that determines each experimenter's choice takes the entire initial state as a seed, this means that without knowing every single precise detail of the initial state, we can't predict what choices the experimenters will make on each trial. So, for all practical purposes this is just like the situation I discussed earlier where the experimenters' choices were truly random and unpredictable, which means that if we only control some of the initial data at time t (the variables sent from M and the variables associated with the measuring-device) but after that must let the simulation run without any further ability to intervene, the only way to guarantee that the experimenters always get the same result when they make the same choice is to make sure that the data we control at time t guarantees with 100% certainty what results the experimenters would get for any of the three possible choices, in such a way that the predetermined answers match up for computer A and computer B.
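To make the computer setup concrete, here is a bare-bones sketch of the protocol just described (the structure and names are mine, not any published model): M may load whatever data it likes onto A and B before time t; after t each station, in isolation, pseudorandomly picks a setting and must compute its result from its local data alone.

Code:
import random

SETTINGS = (1, 2, 3)

def computer_M(trial):
    # The middle computer: free to encode arbitrarily elaborate hidden variables,
    # but after time t it can no longer talk to A or B.
    rng = random.Random(trial)
    # Illustrative loading: identical predetermined answers for all three settings
    shared = {s: rng.choice([+1, -1]) for s in SETTINGS}
    return dict(shared), dict(shared)        # one copy for A, one for B

def station(local_data, seed):
    # Isolated after time t: the setting choice and the result use only local information.
    rng = random.Random(seed)
    setting = rng.choice(SETTINGS)           # stands in for the experimenter's (pseudo)random choice
    return setting, local_data[setting]      # result fixed by the data present at time t

same_n = same_match = diff_n = diff_match = 0
for trial in range(100_000):
    data_A, data_B = computer_M(trial)
    sA, rA = station(data_A, f"A-{trial}")
    sB, rB = station(data_B, f"B-{trial}")
    if sA == sB:
        same_n += 1; same_match += (rA == rB)
    else:
        diff_n += 1; diff_match += (rA == rB)

print("same-setting agreement:", same_match / same_n)   # 1.0 by construction
print("diff-setting agreement:", diff_match / diff_n)   # about 0.5 with this particular loading

The point of the argument above is that however elaborate computer_M and station are allowed to be (so long as each station uses only its own local data after t), the only way to make the first number exactly 1.0 is to predetermine all three answers identically on both sides, and that automatically pins the second number at 1/3 or more.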
 
Last edited by a moderator:
  • #62


Part 2 of response

Simulations as a test of proposed hidden-variables theories

That was a somewhat long discussion of the case where the experimenters' brains make their choice in a deterministic way, and given that most people discussing Bell's theorem are willing to grant for the sake of the argument that the choice can be treated as random, perhaps unnecessary. But I think the idea I introduced of trying to simulate EPR-type experiments on computers is a very useful one regardless. If anyone proposes that a local hidden variables theory can explain the results of these experiments, there's no reason that such a theory could not be simulated in the setup I described, where a middle computer M can send signals to two different computers A and B until some time t when the computers are disconnected, and some time after t the experimenters (real or simulated) make choices about which orientation to use for the simulated detector (if the experimenters are real people interacting with the simulation they could make this choice by deciding whether to type 1, 2, or 3 on the keyboard, for example), and each computer A and B must return a measurement result. On p. 15 of the Jaynes paper you linked to, Jaynes seemed to acknowledge that if there was a local realist theory which could replicate the violations of Bell inequalities, then it should be possible to simulate it on independent computers:
The Aspect experiment may show that such theories are untenable, but without further analysis it leaves open the status of other local causal theories more to Einstein's liking.

That future analysis is, in fact, already underway. An important part of it has been provided by Steve Gull's "You can't program two independently running computers to emulate the EPR experiment" theorem, which we learned about at this meeting. It seems, at first glance, to be just what we have needed because it could lead to more cogent tests of these issues than did the Bell argument. The suggestion is that some of the QM predictions can be duplicated by local causal theories only by invoking teleological elements as in the Wheeler-Feynman electrodynamics. If so, then a crucial experiment would be to verify the QM predictions in such cases. It is not obvious whether the Aspect experiment serves this purpose.

The implication seems to be that, if the QM predictions continue to be confirmed, we exorcise Bell's superluminal spook only to face Gull's teleological spook. However, we shall not rush to premature judgments. Recalling that it required some 30 years to locate von Neumann's hidden assumptions, and then over 20 years to locate Bell's, it seems reasonable to ask for a little time to search for Gull's, before drawing conclusions and possibly suggesting new experiments.
So, do you agree with the idea that this is a good way to test claims that someone has thought up a way to reproduce the EPR results with a local realist theory? Earlier you seemed to suggest that they could be reproduced by a theory in which the hidden variables associated with the particle interacted with hidden variables associated with the measuring apparatus in some way--can you explain in a schematic way how this could be simulated? Do you disagree with my statement earlier that in order to explain how experimenters always get the same result when they make the same choice about how to set the simulated detector orientation (which is not to imply there couldn't be other variables associated with the simulated detector that are out of their control), we must assume that at the time t the two computers are disconnected, the state of each computer at that time already predetermines what final result the simulation will give for each possible choice made by the experimenter?
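If it helps to frame the question, the test can be phrased as an interface that any proposed model would have to implement (again just my sketch of the idea, not an existing piece of software):

Code:
from typing import Any, Protocol, Tuple

class CandidateModel(Protocol):
    # What a proposed local hidden-variable model is allowed to provide:
    def prepare_pair(self, trial: int) -> Tuple[Any, Any]:
        """Runs on computer M before time t: returns the local data handed to A and to B."""
        ...
    def measure(self, local_data: Any, setting: int) -> int:
        """Runs on A or B after time t, with no access to the other station: must return
        +1 or -1 using only local_data and the setting chosen at that station."""
        ...

Scoring any such model against the two observed facts--perfect agreement whenever the settings match, and agreement less than 1/3 of the time when they differ--is exactly the test that, by the argument above, no model of this form can pass.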
JesseM said:
Why "must" it? Again, the a's and b's are defined to mean just the settings that the experimenters control. Can't we define symbols to mean whatever we want them to, and isn't it still true that in this case the combination of the a-setting and the \lambda value will determine the probability of the physical outcome A?
mn4j said:
It must, because you claim that Bell's theorem eliminates ALL hidden-variable theories. It is telling that the terms were so narrowly defined that other possible local hidden-variable theories do not fit.
No, you can't define the terms to mean whatever you want them to. You have to define them so that they include all possible hidden-variable theories. Therefore the conclusion of Bell's theorem is handicapped.
See the first part of my response--you are simply confused here. Adopting a particular labeling convention for which physical facts are labeled with which symbols has no physical implications whatsoever; I cannot possibly be ruling out any local hidden-variables theories by choosing to let the letter "a" stand for the choice made by the experimenter. Nothing about this convention rules out the idea that there could be other physical variables associated with the measuring device that the experimenter does not control; it's just that they must be denoted by some symbol other than "a" (I suggested that these other variables could be folded into \lambda, although if you wished you could define a separate symbol for physical variables associated with the measuring-device).
JesseM said:
No, but the fact that we always see opposite results on trials where the settings are the same is an observed experimental fact, and a variant of Bell's theorem can be used to show that if we observe this experimental fact, and if the experiment is set up in the way Bell describes (with each experimenter making a random choice among three distinct detector angles), and if the universe is a local realist one (with the no-conspiracy assumption), then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings. Since this Bell inequality is violated in real life, at least one of the "if" statements must be false, and since we can verify directly that the first two were true, it must be the third one, about the universe being local realist, that's false (see my next post for an elaboration of this logic).
mn4j said:
You forgot a very important "if" that is the very topic of this thread, i.e.:
  1. "if we observe this experimental fact"
  2. "if a local realist universe behaves only as described by Bell's assumptions"
  3. "if the universe is a local realist one (with the no-conspiracy assumption)"
  4. "if the experiment is set up in the way Bell describes"
  5. then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings.
As you can see, violation of (5) can imply that (2), (3) or (4), or combinations of them, are wrong. For some probably religious reason, proponents of Bell's theorem jump right to (3) and claim that it must be (3) that is wrong. I have already given you two examples of hidden-variable theories that point to the falsity of (2). In fact, (2) is the proverbial "a spider must have 6 legs". Do you deny that the validity of Bell's theorem rests as much on (2) as on (3) or (4)? It remains to be seen whether any experiment has ever been performed which exactly reproduced Bell's assumptions. But that is a different topic.
I disagree that #2 is necessary there; no assumptions about the type of hidden-variable theory are needed aside from the fact that it is a local realist one. I confused the issue a bit by making a statement about statistical correlations between spacelike separated events in a local hidden variables theory that you correctly pointed out could be violated in certain cases, but see my revised statements above. Do you agree that in a local realist universe, if \lambda is taken to mean the complete set of local variables in a PLCCS of some point in spacetime S, and we want to know the probability that an event A will take place at S given the knowledge of some other event B at a spacelike separation from S, then P(A|\lambda B) = P(A|\lambda), i.e. knowing that B occurred gives us no additional information about the likelihood of A if we already know the complete set of information about \lambda?
mn4j said:
For other more rigorous proofs why (2) is wrong, see:
  • Brans, C.H. (1988). "Bell's theorem does not eliminate fully causal hidden variables." International Journal of Theoretical Physics, 27(2), pp. 219-226
  • Joy Christian, "Can Bell's Prescription for Physical Reality Be Considered Complete?"
    http://arxiv.org/pdf/0806.3078v1
  • See Hess, K., and Philipp, W. (2000), PNAS, December 4, 2000, vol. 98, no. 25, pp. 14228-14233, for a proof that Bell's theorem cannot be derived for time-like correlated parameters, and that such variables produce the QM result.
  • See also Hess, K., and Philipp, W. (2004), "Breakdown of Bell's theorem for certain objective local parameter spaces,"
    PNAS, February 17, 2004, vol. 101, no. 7, pp. 1799-1805
I suppose all those people cited above are also confused, as is Jaynes.
Most likely there are some confusions in any papers that claim to show a local realist theory with the no-conspiracy assumption can reproduce QM results, yes (I don't know if this is what all the papers above are claiming since I don't have access to any but Joy Christian's paper)--if any such demonstration was valid it would have won widespread acceptance in the physics community and this would be very big news, but that hasn't happened. On the subject of Joy Christian's paper, I remember it being discussed earlier on this forum and it being mentioned that other physicists had claimed to find flaws in the argument, see for example ZapperZ's post #18 here which links to responses here and here. Wikipedia refers to Christian's work as "controversial" here, and says "The controversy around his work concerns his noncommutative averaging procedure, in which the averages of products of variables at distant sites depend on the order in which they appear in an averaging integral. To many, this looks like nonlocal correlations, although Christian defines locality so that this type of thing is allowed". Once again, I think the best way to cut through the fog is just to ask if Christian's proposal, whatever the details, could allow us to create computer programs which would correctly simulate QM statistics on pairs of computers which have been separated from connections to any other computers prior to the time the experimenters make random choices as to how to orient their simulated detectors on each trial. If you've read and understood Christian's proposal (I was not able to follow it myself because I'm not familiar with Clifford algebra), do you think this could be done?
mn4j said:
Yet you have not shown me a single reason why my descriptions of the two scenarios in post #55 are not valid realist local hidden-variable theories. For some reason you ignored the second scenario completely and did not even bother to say whether a "deterministic learning machine" is local or not.
You didn't give enough details there for me to be able to tell what you're proposing, or how it would reproduce violations of Bell inequalities. Any "deterministic learning machine" is certainly local if you could simulate it with a program running on a computer, but there's no way that loading this program on the two computers A and B in the setup I described would allow you to reproduce both the fact that the experimenters always get the same result when they choose the same setting on a given trial and the fact that on trials where they choose different settings they get the same result less than 1/3 of the time. Again, the basic point is that if the computers have been disconnected from communication with other computers at time t prior to the moment each experimenter makes their choice, then the only way you can guarantee a 100% chance that they'll return identical results if the experimenters make the same choice is to have the state of each computer at time t predetermine what answer they'll give for each of the three choices the experimenters can make (with both computers having the same predetermined answers), and this predetermination is enough to guarantee that if the experimenters make different choices they'll get the same answer at least 1/3 of the time.
mn4j said:
You can see the following articles, for proof that a local deterministic learning hidden variable model reproduces the quantum result:
  • Raedt, K.D., et al.,
    "A local realist model for correlations of the singlet state,"
    The European Physical Journal B - Condensed Matter and Complex Systems, Volume 53, Number 2, September 2006, pp. 139-142
  • Raedt, H.D., et al.,
    "Event-Based Computer Simulation Model of Aspect-Type Experiments Strictly Satisfying Einstein's Locality Conditions,"
    J. Phys. Soc. Jpn. 76 (2007) 104005
  • Peter Morgan,
    "Violation of Bell inequalities through the coincidence-time loophole,"
    http://arxiv.org/pdf/0801.1776
  • More about the coincidence-time loophole here:
    Larsson, J.A., Gill, R.D., Europhys. Lett. 67, 707 (2004)
Are any of these other than the Morgan paper available online? Also, it's important to distinguish between two fundamentally different types of claims of "loopholes" in discussions of Bell's theorem. The first category says that there might be types of local hidden variables theories that fully reproduce the predictions of orthodox QM--for example, a theory involving a conspiracy in the initial conditions of the universe would fall in this category. This is the category I've been discussing so far on this thread. But there's a second category which doesn't actually dispute the basic idea of Bell's theorem that orthodox QM is incompatible with local realism, but instead suggests that existing tests of orthodox QM's predictions about EPR-type experiments have not adequately reproduced the conditions assumed by Bell, so that there might be a local realist theory which makes the correct predictions about experiments that have actually been performed but which would not actually violate Bell inequalities if better tests were performed that sealed off certain experimental loopholes seen in tests that have been done so far (meaning in these cases the theory would disagree with the predictions of orthodox QM). For example, one experimental loophole in some previous tests is that there may not actually have been a spacelike separation between the events of the two detector settings being chosen and the events of the two particles' spins being measured, so in principle the choice of detector settings could have had a causal influence on hidden variables associated with the particle before the particle was detected. This is known as the "communication loophole", and as discussed here the latest experiments have managed to seal it off. Another is the detection loophole, which apparently has not yet been fully dealt with by existing experiments.

I haven't really read over the Morgan paper you link to in detail, but it sounds to me like he's talking about an experimental loophole rather than a theoretical loophole--on p. 1 he specifically compares it to the detection loophole, saying that the computer model under discussion "is a local model that can be said to exploit the 'coincidence-time' loophole, which was identified by Larsson and Gill as 'significantly more damaging than the well-studied detection problem'". If you have followed the details of Morgan's discussion, can you tell me if he's talking about an experimental loophole akin to the communication loophole and the detection loophole, or if he's proposing a genuine theoretical loophole involving a local hidden variables model that he thinks can precisely reproduce the predictions of QM in every possible experiment?
 
  • #63


DrChinese said:
This is plain wrong, and on a lot of levels. Besides, you are basically hijacking the OP's thread to push a minority personal opinion which has been previously discussed ad nauseum here. Start your own thread on "Where Bell Went Wrong" (and here's a reference as a freebee) and see how far your argument lasts. These kind of arguments are a dime a dozen.

For the OP: You should try my example with the 3 coins. Simply try your manipulations, but then randomly compare 2 of the 3. You will see that the correlated result is never less than 1/3. The quantum prediction is 1/4, which matches experiments which are done on pretty much a daily basis.
Lol, Dr. Chinese says that these "experiments" are done routinely, as if on a "daily basis".
Please, Dr. Chinese, tell us these experiments you are talking about that are carried out on a daily basis, using no special crystals, no specific radiation wavelengths, and no unorthodox equipage! [If you are unable to do so then you fail.]

Lol. Direct the author to the thread all you want, but he, like you, will never explain the basis of it. Certainly not under local environments! Einstein was once fond of saying that it should be simple. That it should always be kept simple.
 
  • #64


Glenns said:
Please, Dr. Chinese, tell us these experiments you are talking about that are carried out on a daily basis, using no special crystals, no specific radiation wavelengths, and no unorthodox equipage! [If you are unable to do so then you fail.]

I really have no idea what you are saying. Bell tests are done in undergrad classrooms these days. They do require special PDC crystals and the appropriate laser source to create entangled photon pairs.

JesseM: Nice detailed response to mn4j. Raedt's work does involve the so-called "coincidence time loophole" also referenced by Morgan. See here for a related article. (There are 2 authors named Raedt and I assume they are related as they sometimes write together.)

These types of attacks on Bell tests attempt to explain the results as a form of biased sampling, and as such they always come back to the fair sampling assumption. Of course, as technology improves these attacks get weaker and weaker and the results NEVER get any closer to the local realistic requirements. And note that IF THEY DID, then the QM prediction would be wrong. And now we are back to Bell's result anyway, that no local realistic theory can reproduce the predictions of QM. So ultimately, the local realist must state: either QM is wrong, or they are wrong. Both can't be right!
 
  • #65


DrChinese said:
I really have no idea what you are saying. Bell tests are done in undergrad classrooms these days. They do require special PDC crystals and the appropriate laser source to create entangled photon pairs.

To back up DrChinese's claim that these experiments are now routinely done in the undergraduate curriculum, please see this link:

http://people.whitman.edu/~beckmk/QM/

I too am puzzled by the requirement of not using any PDC crystal, etc. What's wrong with using those to get the entangled photons?

Zz.
 
  • #66


DrChinese said:
JesseM: Nice detailed response to mn4j. Raedt's work does involve the so-called "coincidence time loophole" also referenced by Morgan. See here for a related article. (There are 2 authors named Raedt and I assume they are related as they sometimes write together.)

These types of attacks on Bell tests attempt to explain the results as a form of biased sampling, and as such they always come back to the fair sampling assumption. Of course, as technology improves these attacks get weaker and weaker and the results NEVER get any closer to the local realistic requirements. And note that IF THEY DID, then the QM prediction would be wrong. And now we are back to Bell's result anyway, that no local realistic theory can reproduce the predictions of QM. So ultimately, the local realist must state: either QM is wrong, or they are wrong. Both can't be right!

This seems rather dismissive. De Raedt's work is not an attack on QM. They have developed a local realistic hidden-variable model which gives the same result as QM in EPR-type experiments and explains double-slit diffraction, among other phenomena.
The matter is very simple: do you claim their model is not local realistic? If it is local realistic, then you should be alarmed that it reproduces the quantum result, contrary to the claims of Bell. If it is not, then you must explain why it is not.

The model is described in the following articles:

http://arxiv.org/abs/0712.3781
http://arxiv.org/abs/0809.0616
http://arxiv.org/abs/0712.3693

The essence of the model is that the measurement devices are modeled as Deterministic Learning Machines (DLMs). Using this model, they are able to simulate EPR experiments, delayed-choice experiments, and double-slit experiments event by event in a local realist manner. You can't just brush this off.
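
To give a flavour of what "deterministic learning machine" means for readers who haven't opened the papers: the processors in these simulations are deterministic units that carry an internal state from one event to the next, and their long-run output frequencies are claimed to reproduce the quantum statistics without any randomness in the decision itself. The sketch below is only my own minimal illustration of that general idea (a simple error-diffusion style machine that reproduces Malus' law event by event); it is not de Raedt's actual DLM update rule, which is more elaborate.

Code:
import math

class ToyDeterministicMachine:
    """Minimal illustration of an event-by-event processor with memory.
    NOT de Raedt's DLM algorithm -- just a deterministic machine whose
    running output frequency converges to a target (here Malus' law)."""

    def __init__(self):
        self.residue = 0.0  # internal state carried from one event to the next

    def process(self, photon_angle, polarizer_angle):
        target = math.cos(photon_angle - polarizer_angle) ** 2  # Malus' law
        self.residue += target
        if self.residue >= 1.0:      # purely deterministic decision
            self.residue -= 1.0
            return 1                 # "click": photon transmitted
        return 0                     # no click

machine = ToyDeterministicMachine()
clicks = sum(machine.process(0.0, math.radians(30)) for _ in range(100000))
print(clicks / 100000)               # -> 0.75, i.e. cos^2(30 degrees)

Whether machinery of this kind, once time tags and coincidence windows are added, really reproduces the full EPRB correlations is of course exactly what is in dispute in their EPR papers.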
 
Last edited:
  • #67


mn4j said:
This seems rather dismissive. De Raedt's work is not an attack on QM. They have developed a local realistic hidden-variable model which gives the same result as QM in EPR-type experiments and explains double-slit diffraction, among other phenomena.
The matter is very simple: do you claim their model is not local realistic? If it is local realistic, then you should be alarmed that it reproduces the quantum result, contrary to the claims of Bell. If it is not, then you must explain why it is not.

Well, actually they say that the sample is not representative, due to the choice of the time window for coincidence counting. Their conclusion (quote): "In general, these results support the idea that the idealized EPRB gedanken experiment that agrees with quantum theory cannot be performed". In other words, they claim: a) the experimental results of their purported local realistic theory will be biased to agree with the predictions of QM; b) on the other hand, no suitable Bell test that supports QM can ever be performed; and finally c) QM is wrong and their local realistic theory is correct.

Why do these attacks get dismissed? Because they are not actually proof of anything. Can you imagine saying that experimental proof supporting X is actually proof of not-X? That is what is being asserted.

Let me put it a different way: there is NO alternative theory presented in these papers. Period. They try to say they have a simulation. OK, fine. Show me the THEORY that matches the scope of QM. Then we can get to the meat and potatoes. The evidence from Bell tests supports the predictions of QM. When we see their theory (which we never will, of course) - let's call it LR - then we can say:

Experimental evidence => QM
True theory => LR (different predictions)
QM - LR = Delta (the difference they purport to explain)

Now we have the problem of why - regardless of approach - every Bell test shows a growing Delta and not a shrinking Delta. Delta should decrease to zero as test sampling improves. Instead, the measured violation is now at about 150+ standard deviations, way up from about 10 SD a few decades ago.

So please, get serious.
 
  • #68


DrChinese said:
Let me put it a different way: there is NO alternative theory presented in these papers. Period.
So what? You still did not answer the following:
1. Do you deny that they presented event-by-event simulation of EPRB?
2. Do you claim that the model of their simulations is not local realistic?

These are the only two important questions. If you agree that they have indeed presented an event-by-event simulation of EPRB, then you end up with only two options

a) Their model is not local realistic or
b) Their model is local realistic contrary to the claims of Bell.

They don't need to have a complete theory which matches QM. All they need to demonstrate is that a local realistic model can reproduce the QM result, to refute Bell.

You probably have seen the following as well, although I can guess your response will be to ask them to get serious:

http://arxiv.org/abs/0901.2546

Maybe what you need is to spell out what evidence it would take for you to see the problem with Bell's theorem. Surely, if your belief in it is rational, it must be falsifiable. What would it take to falsify it? Seriously, have you even considered this question?
 
  • #69


Hello,
Sorry, I didn't read the whole thread. I just studied the http://arxiv.org/abs/0712.3781 article that you linked.

It seems to me that they do have a point.

They simulate measurements and associate a time t with each of them. Then they count only coincidences within a given time window.

Their model violates Bell's inequality in the following way: they make the time t depend (locally) on the spin of the particle and on the orientation of the detector. The delay between the two detections associated with a pair thus depends on the spins and orientations of both particles and detectors. The coincidence count that violates Bell's inequality is then a subset of the total coincidence count, which respects Bell's inequality. The selection of this subset depends on the delay between the events, and thus on the spins and the orientations of both detectors. This is, in effect, a non-local hidden variable, and this is why it can violate Bell's inequality.

The most interesting point in their simulation, in my eyes, is that a real electronic coincidence counter, in a real laboratory, can do exactly the same thing! It can count a subset of results that violates Bell's inequality, drawn from a total set of physical results that respects it, as long as a physical dependence exists between the extra correlations and the delay between the signals from the twin particles.
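
To make this selection mechanism concrete, here is a small numerical sketch in the spirit of the time-tag model (my own simplified toy, not their exact algorithm: the Malus-law outcomes, the delay law, the exponent d and the window W are just choices I made for illustration). Each station produces its outcome and its time tag strictly locally; the correlation over all pairs stays well inside the local bound, while the subset kept by a narrow coincidence window lands much closer to the quantum cosine.

Code:
import numpy as np

rng = np.random.default_rng(0)

def station(xi, setting, T0=1.0, d=4):
    """One detector station: the outcome and the time tag depend only on the
    local setting and on the hidden polarization xi shared by the pair."""
    p_plus = np.cos(xi - setting) ** 2                       # Malus-law probability
    outcome = np.where(rng.random(xi.size) < p_plus, 1, -1)
    delay = rng.random(xi.size) * T0 * np.abs(np.sin(2 * (xi - setting))) ** d
    return outcome, delay

def correlation(a, b, n=2_000_000, window=5e-4):
    xi = rng.uniform(0.0, np.pi, n)        # hidden variable fixed at the source
    x1, t1 = station(xi, a)
    x2, t2 = station(xi + np.pi / 2, b)    # orthogonally polarized partner photon
    keep = np.abs(t1 - t2) < window        # what a narrow coincidence window keeps
    return np.mean(x1 * x2), np.mean(x1[keep] * x2[keep])

for deg in (0, 22.5, 45, 67.5, 90):
    E_all, E_win = correlation(0.0, np.radians(deg))
    print(f"{deg:5.1f} deg: all pairs E={E_all:+.3f}  windowed E={E_win:+.3f}  "
          f"QM-like target={-np.cos(2 * np.radians(deg)):+.3f}")

With these (untuned) parameters the "all pairs" column follows approximately -cos(2(a-b))/2, which respects Bell's inequality, while the windowed column should track the full cosine fairly closely; exactly how closely depends on W, d and statistics, which is what the papers tune.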
 
  • #70


Actually, this loophole is testable experimentally: we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
This way, we can count all detections, whatever the delay between them.

If the idea in the paper is right, Bell's inequality should become respected.
If the idea in the paper is wrong, Bell's inequality should still be violated.

Maybe this has already been done.
 
  • #71


Pio2001 said:
Actually, this loophole is testable experimentally: we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
This way, we can count all detections, whatever the delay between them.

If the idea in the paper is right, Bell's inequality should become respected.
If the idea in the paper is wrong, Bell's inequality should still be violated.

Maybe this has already been done.
What do you mean by "emit the pairs of photons one by one"?
Aren't two entangled photons emitted at the same time, by definition?
 
  • #72


I mean decreasing the emission rate until pairs of photons are emitted less often (both photons of a pair still being emitted at the same time, of course).
We can then set the window of the coincidence counter very large, so that it counts the detection of the two photons whatever the small time delay introduced by the authors in order to violate Bell's inequality.

If I have understood their simulation correctly, De Raedt et al. show that the violation of Bell's inequality in Aspect-like experiments is not necessarily caused by quantum non-local effects, but may come from an artefact of the coincidence counter setup.

Quantum theory predicts that as long as the two photons are from the same entangled pair, Bell's inequality will be violated.
In my understanding, De Raedt et al.'s simulation violates Bell's inequality by introducing a delay between the photons AND setting the coincidence window narrower than this delay. So it predicts that if the coincidence window is widened enough to count all coincidences, whatever the delay between the recorded events, Bell's inequality will be respected.

In practice, that's exactly what happens in the real data set that they took as an example, BUT it seems logical to assume that this is because, by widening the time window, we count more and more false coincidences, thus decreasing the correlations.
By decreasing the physical emission rate at the source, we should be able to widen the coincidence window without increasing the false-coincidence rate at all.

This way, if Bell's inequality is still violated, the hypothesis of an artefact in the coincidence counter setup will be rejected, and the quantum non-local correlations will remain the only explanation.
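
For the record, the false-coincidence trade-off I'm invoking is just the standard estimate that uncorrelated singles produce roughly R_Alice x R_Bob x W accidental coincidences per second (up to a factor that depends on how the window is defined), while true coincidences scale only with the pair rate. So cutting the source rate wins quadratically on accidentals and lets the window be opened up. A back-of-the-envelope sketch, with made-up numbers:

Code:
# Accidental (false) coincidences between two independent detectors:
#   R_acc ~ R_alice * R_bob * W    (standard estimate for uncorrelated singles)
# True coincidences scale ~ R, so lowering the rate improves true/accidental.
for rate in (1e6, 1e5, 1e4):           # singles rate per detector, counts/s (made up)
    for window in (1e-9, 1e-7, 1e-5):  # coincidence window in seconds (made up)
        accidentals = rate * rate * window
        print(f"singles {rate:.0e}/s, window {window:.0e} s -> "
              f"~{accidentals:.1e} accidentals/s")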
 
  • #73


Pio2001 said:
we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
There is something that can be done without decreasing the emission rate.
You can use two coincidence windows of different widths. Say that coincidences that fall inside the wide window but outside the narrow one are "poorly synchronized" coincidences, and coincidences inside the narrow window are "decently synchronized" coincidences. Now, if you calculate the ratio "poorly synchronized coincidences"/"decently synchronized coincidences" for different relative polarization angles, you should see no correlation between the relative angle and this ratio if the fair sampling assumption is to hold (see the sketch below).
And if there is no such correlation, there will be far fewer possible models for the coincidence loophole, if any.
The good thing is that an analysis like that can be done without performing any new experiments, based only on existing data from an experiment where all detections are recorded with timestamps (and coincidences are found later from the recorded data).
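
Roughly, the analysis on timestamped data would look like this (a sketch only: the pairing of the two raw timestamp streams into candidate pairs, the window widths and the array names are my own assumptions, not any experiment's actual format):

Code:
import numpy as np

def two_window_check(t_alice, t_bob, setting_alice, setting_bob,
                     narrow=1e-9, wide=1e-8):
    """Sketch of the two-window test described above. Inputs are arrays over
    candidate pairs: detection times (s) and analyzer settings (radians)."""
    dt = np.abs(t_alice - t_bob)
    well = dt < narrow                       # "decently synchronized"
    poor = (dt >= narrow) & (dt < wide)      # "poorly synchronized"
    rel = np.round(np.degrees(np.abs(setting_alice - setting_bob)) % 180)
    ratios = {}
    for angle in np.unique(rel):
        sel = rel == angle
        n_well = np.count_nonzero(well & sel)
        n_poor = np.count_nonzero(poor & sel)
        if n_well:
            ratios[angle] = n_poor / n_well
    return ratios  # fair sampling predicts no systematic trend with angle

Any clear dependence of that ratio on the relative angle would be a warning sign for the fair sampling assumption, which is the point of the suggestion above.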
 
  • #74


mn4j said:
Maybe what you need is to spell out what evidence it would take for you to see the problem with Bell's theorem. Surely, if your belief in it is rational, it must be falsifiable. What would it take to falsify it? Seriously, have you even considered this question?

Let's see if I get this right. The experimental evidence is X, and we are supposed to use that evidence to conclude not-X.

The thing about Bell is that it is more or less independent of whether local reality or QM is correct. It says they both cannot be correct, which was not obvious at the time. So let's get specific.

QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.

Now, once again, how are we supposed to conclude there is anything wrong with Bell? Clearly, the entire issue here is Raedt trying to explain why a LR theory, which makes predictions incompatible with QM, actually provides an experimental result compatible with QM. So clearly, this is not about Bell at all. You may as well say that all experiments supporting General Relativity are actually evidence of Newtonian gravity.

Now, get serious. Even Raedt ought to be able to see why the argument falls flat. It is going to take experimental evidence IN FAVOR of a local realistic theory to convince anyone of their result. If they really had a bead on anything, they would be proposing an experiment to test their ideas, rather than writing a paper saying they are correct in the face of evidence to the contrary.
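
To be explicit about where the "at least 33%" figure comes from, here is a brute-force check of the standard instruction-set argument (my own quick illustration, using the usual three-settings setup for which QM predicts a 25% match rate): whatever predetermined answers a pair carries for the three settings, comparing two randomly chosen settings can never give a match rate below 1/3.

Code:
from itertools import product, combinations

pairs = list(combinations(range(3), 2))        # the 3 ways to pick 2 of the 3 settings
rates = []
for outcomes in product((+1, -1), repeat=3):   # every possible "instruction set"
    matches = sum(outcomes[i] == outcomes[j] for i, j in pairs)
    rates.append(matches / len(pairs))
print("possible match rates:", sorted(set(rates)))   # [1/3, 1.0]
print("local realist minimum:", min(rates))           # 0.333...
print("QM prediction at these angles: 0.25")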
 
  • #75


I spent some time reading de Raedt's articles and talking with him about them. Let me answer some of mn4j's questions from the last posts ("the only two important questions", as you write).

mn4j said:
So what? You still did not answer the following:
1. Do you deny that they presented event-by-event simulation of EPRB?
2. Do you claim that the model of their simulations is not local realistic?

These are the only two important questions. If you agree that they have indeed presented an event-by-event simulation of EPRB, then you end up with only two options

a) Their model is not local realistic or
b) Their model is local realistic contrary to the claims of Bell.

1. Yes, they did present event-by-event simulations of certain experiments (like Aspect's and Weihs'), which are often thought to be conclusive evidence that Bell's inequality is violated.

2. Yes, their models are local realistic.

But you are totally wrong about the two options that I'm supposedly left with. The important thing to realize is that no conclusive test of EPRB has ever been done. Every experiment that has been conducted has certain loopholes (there's even a special Wikipedia article about those). This means that all those experiments are not ideal, and it's possible to explain their results with a local realist theory. This is well known to everybody who is interested in QM foundations, and has been known for ages (Philip Pearle showed back in 1970 how this can be done using one of the loopholes).

de Raedt presents yet another model of how these loopholes can be used to still "give some chance" to local realism. This is certainly not a big deal, and it has no consequences for Bell's theorem. The usual hope is that in some years the conclusive experiment will be performed (I've heard people hoping that it will occur within 10-20 years).

What I find particularly confusing in de Raedt's articles is that they are totally out of context: he never mentions the word "loophole", let alone the existing body of knowledge about loopholes. This is misleading, to say the least.

See http://arxiv.org/abs/quant-ph/0703120 for this critique (yes, I do know that there's a reply by de Raedt; I think that his reply misses the point).


mn4j said:
This seems rather dismissive. De Raedt's work is not an attack on QM. They have developed a local realistic hidden-variable model which gives the same result as QM in EPR-type experiments and explains double-slit diffraction, among other phenomena.
The matter is very simple: do you claim their model is not local realistic?

This paper about the double slit (http://arxiv.org/abs/0809.0616) is a different story, but also very telling. Have you read it? Did you realize that this model works only because many photons "get lost"? If we imagine a perfect emitter that emits 1 photon per second and we let it emit 1000 photons, and then we count how many photons hit the screen and how many photons hit the double-slit screen, and then we add those two numbers together, then the result according to this model will be a lot less than 1000.

This is clearly a prediction different from that of QM. This model can be tested and falsified. I'm absolutely sure that it's just wrong.

Of course such an experiment is tremendously difficult to perform, but there's an easier test. This model works because the detectors have memory and are "learning". Now if we start to jiggle the screen back and forth (parallel to itself) sufficiently fast, then de Raedt's model predicts that the interference image will get smeared (I think it's stated in the paper). Now, here's the question for you: what is the prediction of QM?

I think that QM predicts that the interference picture will stay the same. I asked de Raedt this question, and he replied that in his opinion QM predicts nothing, because it requires the experimental apparatus to be completely fixed during the experiment and not "jiggled". Well, then I asked him what he would say if such an experiment were performed and the interference picture did not change.

He said that in that case he would (I quote here) retire.

The bottom line is that what de Raedt proposes are local realistic explanations of certain experiments. All his models are in principle distinguishable from QM (as Bell always told us). And personally I'm quite sure that when the tests are done, these models will be proven false.
 
  • #76


DrChinese said:
QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.

Yes, it follows from figure 6 in the first paper that, according to their simulation, the true rate in your example may actually be 25%, while being measured at 33%, because the least correlated photons are registered by the two detectors at a time interval bigger than the time window of the counter, which leads to discarding the least correlated pairs.
 
  • #77


kobak said:
I spent some time reading de Raedt's articles and talking with him about them. Let me answer some of the mn4j's questions in the last postings ("the only two important questions", as you write).
1. Yes, they did present event-by-event simulations of certain experiments (like Aspect's and Weihs'), which are often thought to be conclusive evidence that Bell's inequality is violated.

2. Yes, their models are local realistic.

But you are totally wrong about the two options that I'm supposedly left with. The important thing to realize is that no conclusive test of EPRB has ever been done. Every experiment that has been conducted has certain loopholes (there's even a special Wikipedia article about those). This means that all those experiments are not ideal, and it's possible to explain their results with a local realist theory. This is well known to everybody who is interested in QM foundations, and has been known for ages (Philip Pearle showed back in 1970 how this can be done using one of the loopholes).

de Raedt presents yet another model of how these loopholes can be used to still "give some chance" to local realism. This is certainly not a big deal, and it has no consequences for Bell's theorem.

...

The bottom line is that what de Raedt proposes are local realistic explanations of certain experiments. All his models are in principle distinguishable from QM (as Bell always told us). And personally I'm quite sure that when the tests are done, these models will be proven false.

Welcome to PhysicsForums, kobak! And thank you very much for this insight on de Raedt.

I was just looking at the papers in a bit more detail. I have been disappointed by the approach, as it obscures what is being asserted in favor of trying to prove Bell wrong (which I think is overreaching). I do not personally consider these to be counter-examples to Bell, and I seriously doubt they will sway others either.

1. Their model (at least in one paper) relies on unfair sampling (assuming I read it correctly) to deliver an explicitly biased sample. As such, it exploits the loopholes you mention and doesn't really provide anything new (as you also mention). Quote:

"The mathematical structure of Eq. (18) is the same as the one that is used in the derivation of Bell’s results and if we would go ahead in the same way, our model also cannot produce the correlation of the singlet state. However, the real factual situation in the experiment [8] is different: The events are selected using a time window W that the experimenters try to make as small as possible. ...

"In our simulation model, the time delays ti are distributed uniformly over the interval [0, Ti] where T1 = [not random]."

In other words, there is tinkering with the time window: by their choice of how the time window is chosen, combined with the choice of the time delay parameter, they bias the sample. They have to, because otherwise the raw source data would run afoul of Bell.

2. The other paper (also Dec 2007/Feb 2008) relies on so-called DLMs (Deterministic Learning Machines). These purport to satisfy local causality and involve a form of memory from trial to trial:

"A DLM learns by processing successive events but does not store the data contained in the individual events. Connecting the input of a DLM to the output of another DLM yields a locally connected network of DLMs. A DLM within the network locally processes the data contained in an event and responds by sending a message that may be used as input for another DLM. Networks of DLMs process messages in a sequential manner and only communicate with each other by message passing: They satisfy Einstein’s criterion of local causality. For the present purpose, we only need the simplest version of the DLM [11]. The DLM that we use to simulate the operation of the Stern-Gerlach magnet is defined as follows. The internal state of the ith DLM, after the nth event, is described by one real variable un,i. Although irrelevant for what follows, this variable may be thought of as describing the fluctuations of the applied field due to the passage of an uncharged particle that carries a magnetic moment."

and

"A key ingredient of these models, not present in the textbook treatments of the EPRB gedanken experiment, is the time window W that is used to detect coincidences. We have demonstrated (see Section IIG) the importance of the choice of the time window by analyzing a data set of a real EPRB experiment with photons [32].

3. With both of these, the critique is really the same: why not point to the specific difference? They do everything humanly possible to obscure what should be a simple point: what is the difference between QM and their LR? Clearly, they could show how their data points satisfy the Inequality if all trials are considered and are fully independent, while the sub-sample within the time window is biased to yield a result consistent with QM but violating the Inequality.

Specifically: the QM prediction of entangled photon coincidences is .250 at 60 degrees. So we know their adjusted result must therefore also be .250. The LR value must be .333 or greater, so the delta is .0833. Which data items were excluded to get this result? Or why would the results be biased specifically towards that of a wrong theory (QM)? These are the lines in the sand, and they really are not addressed. I can see the hand waving in the equations, but without this simple explanation I don't see where they have anything. Quoting again:

"Extensive tests (data not shown) lead to the conclusion that for d = 3 and to first order in W, our simulation model reproduces the results of quantum theory of two S = 1/2 objects, for both Case I and Case II."

Clearly, for the algorithm to work, the delta must be .0833 at 60 degrees; delta=0 at 0 and 45 degrees; and so on. That delta function, in my opinion, should jump off the page. In reality, I don't think they have identified such a function. They should be the ones to point out the source of the delta. I have tried, but can't really follow their algorithm far enough to generate values.
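
For reference, that delta is easy to tabulate if one takes cos^2 of the angle as the QM match rate and the familiar straight-line local model, the one that just saturates the 1/3 bound at 60 degrees, as the LR reference curve (my own quick illustration, not something extracted from their papers):

Code:
import numpy as np

for deg in (0, 30, 45, 60, 90):
    qm = np.cos(np.radians(deg)) ** 2    # QM match rate for the photon pair
    lr = 1 - deg / 90                    # straight-line local model (1/3 at 60 deg)
    print(f"{deg:4.1f} deg: QM={qm:.4f}  LR={lr:.4f}  delta=LR-QM={lr - qm:+.4f}")

The two curves agree at 0, 45 and 90 degrees; at 60 degrees the difference is the .0833 quoted above, and the sign of the difference flips below 45 degrees. That gives a feel for the angle-dependent correction the time-window selection would need to produce (under this particular choice of LR reference curve).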

My point is basically: why not make a testable prediction showing how using the algorithm, the experimental results vary in good agreement with the model but NOT according to any quantum mechanical prediction? I.e. if I change the time window and delay parameters in an actual experiment, the results match the LR model but are not explained by QM. After all, according to the LR model, it is strictly an accident of chance that QM happens to be correct in its predictions regarding how entangled photons behave (since there are no such things as entangled photons in LR, by definition).
 
  • #78


DrChinese said:
I have tried, but can't really follow their algorithm far enough to generate values.

I only read the December 2007 paper with the Deterministic Learning Machines. They give two algorithms, the one with DLMs (page 16: deterministic model), and a pseudo-random one, much simpler (page 16: pseudorandom model). The way the results are sorted follows (page 17: time tags / data analysis).

They suggest a possible physical meaning for this bias: "experimental evidence that the time-of-flight of single photons passing through an electro-optic modulator fluctuates considerably can be found in ref 56".

I find this idea interesting, because it is more realistic to suppose that the time-of-flight of a photon can depend on its polarisation in an environment sensitive to polarisation than to suppose that the detector purposely discards detections that would comply with Bell's inequality.
This relation is explicitly proposed on page 17, in the last paragraph before "5. Data Analysis" (the formula has no number). They later set d=3 in this formula (for spin-1/2 particles).

Zonde's idea to test coincidence efficiency vs relative angle seems good. We could try it on available data (after checking that it works for all possible scenarios of this kind).
 
  • #79


Pio2001 said:
They suggest a possible physical meaning for this bias: "experimental evidence that the time-of-flight of single photons passing through an electro-optic modulator fluctuates considerably can be found in ref 56".

I find this idea interesting, because it is more realistic to suppose that the time-of-flight of a photon can depend on its polarisation in an environment sensitive to polarisation than to suppose that the detector purposely discards detections that would comply with Bell's inequality.
This relation is explicitly proposed on page 17, in the last paragraph before "5. Data Analysis" (the formula has no number). They later set d=3 in this formula (for spin-1/2 particles).

Thanks, I'll look again. There must be a connection between the polarization and detection probability (which is here related to the window size and delay factors) in order to get the desired results. I just couldn't figure out where, and I couldn't figure out why that wasn't highlighted.
 
  • #80


DrChinese said:
Let's see if I get this right. The experimental evidence is X, and we are supposed to use that evidence to conclude not-X.

The thing about Bell is that it is more or less independent of whether local reality or QM is correct. It says they both cannot be correct, which was not obvious at the time. So let's get specific.

You do realize that Bell has a definition of local reality which has not been verified experimentally, don't you? If you think it has been verified, show me experimental evidence that proves Bell's definition of local reality. Have you not been reading this thread at all? The bulk of the discussion was about this point.

QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.
NO! Bell's local realist theories say the true coincidence rate is 33%. If you disagree, point me to a reference about a local realist theory which makes that claim. Again, you will notice that only Bell makes that claim, which, it turns out, is a straw man, because there is no experimental validation of it. Do you know of any local realist theory for which that claim is valid? If not, why do you state it as though it were dogmatically accepted to be the case?

So then we have:
1. What QM predicts
2. What Bell claims (and this is crucial) local realist theories should result in
3. What experiments observe

It turns out that (1) agrees with (3) but disagrees with (2). If you are being intellectually honest, you must realize that the failure of (3) to agree with (2) can mean that Bell's claim about local realist theories is dubious. Yet, for some reason, you'd rather think Bell was a god and every claim he made was dogma, which leads you to conclude that both (1) and (3) are results of non-local realist theories. Why is that, I ask? This is not rocket science.

Now, once again, how are we supposed to conclude there is anything wrong with Bell? Clearly, the entire issue here is Raedt trying to explain why a LR theory, which makes predictions incompatible with QM, actually provides an experimental result compatible with QM.
NO! Your bias is clouding your judgement of Raedt's work. Raedt has developed a model which is unmistakably and convincingly LR, and he shows that it agrees with (1) and (3). Again, if you are thinking intellectually honestly, you must realize that according to Bell's definition (and this is crucial) of what LR means, this is impossible.

If you want to criticize Raedt, you have to show that either:
1) The model he has developed is not LR
2) The model he has developed does not reproduce the results of QM and real experiments
You have done neither.
Now, get serious.
No. YOU get serious!
 
  • #81


kobak said:
The important thing to realize is that no conclusive test of EPRB has ever been done.
EXACTLY! So on what basis do followers of Bell purport to have proven that local realist theories should produce a certain result?

Here is an experiment to try:
- use a separate set of apparatus for each pair of photons emitted. If you still obtain the QM result, then Raedt's model is wrong.
 
  • #82


mn4j said:
> The important thing to realize is that
> no conclusive test of EPRB has ever been done.

EXACTLY! So on what basis do followers of Bell purport to have proven that local realist theories should produce a certain result?

Well, I'm glad that we agree on something. However, your question doesn't relate to my statement that you quote. Let me try to clarify things a bit.

There are two things: Bell's theorem as an abstract theorem, and its experimental tests. Bell's theorem states that local realist theories can't reproduce all the predictions of QM. It doesn't need to be proven by experiment, because the proof is given on a piece of paper. The experiment has to show what is correct: QM or local realism. What I said means that no conclusive proof that QM is right and LR is wrong (i.e. no conclusive violation of Bell's inequalities) has ever been done. This has no relation to the validity of the theorem itself.

Now, you seem to claim that Bell's theorem is wrong. But even if it were wrong, de Raedt's articles about EPR wouldn't prove it wrong (because as I explained, those articles only show that the experiments done so far were not perfect).

Finally, one more point. What exactly "local realism" means is a philosophical question. For his proof Bell used a particular equation (P(A|aB\lambda) = P(A|a\lambda), or something like that) and he gave certain "physical" arguments for why this should be true if we assume local realism.

Famous ET Jaynes (and de Raedt follows Jaynes here) wrote a paper that you cited in this discussion, where he claimed that Bell made a stupid mistake when applying rules of probability (this is not a quote, but that's how it sounds). This is absurd. Bell certainly understood the rules of probability perfectly well and he actually did give the physical arguments for his assumption (that Jaynes seemed to fail to either notice or understand).

You may still say that local realism does not necessarily entail this assumption of Bell. Since "local realism" isn't something defined by a formula, this is in principle a meaningful claim. However I never saw any local realist model that would violate Bell's assumption (in the very particular example that Bell is discussing). de Raedt's models of simulations of Aspect-Weihs experiments have no relation to this issue.
 
  • #83


The experiment has to show what is correct: QM or local realism. What I said means that no conclusive proof that QM is right and LR is wrong (i.e. no conclusive violation of Bell's inequalities) has ever been done. This has no relation to the validity of the theorem itself.

Why MUST LR and QM contradict each other? Just because Bell says they must? This is what you fail to realize. The issue here is not whether QM is wrong and LR is right! The issue is whether Bell's understanding of LR is correct.

Now, you seem to claim that Bell's theorem is wrong. But even if it were wrong, de Raedt's articles about EPR wouldn't prove it wrong (because as I explained, those articles only show that the experiments done so far were not perfect).
If you agree that the experiments were not perfect, then how come those same experiments are still presented as proof of Bell's theorem? Bell's theorem is a negative theorem.

Bell says, "NO LR can reproduce the QM results". Now, Bell had better be sure that his definition of LR accounts for EVERY possible LR theory. If even one is found that cannot be modeled by Bell's equations, Bell's theorem has to be thrown out. Do you agree with this? De Raedt's articles present just one such model.

Finally, one more point. What exactly "local realism" means is a philosophical question. For his proof Bell used a particular equation (P(A|aB\lambda) = P(A|a\lambda), or something like that) and he gave certain "physical" arguments for why this should be true if we assume local realism.

Famous ET Jaynes (and de Raedt follows Jaynes here) wrote a paper that you cited in this discussion, where he claimed that Bell made a stupid mistake when applying rules of probability (this is not a quote, but that's how it sounds). This is absurd. Bell certainly understood the rules of probability perfectly well and he actually did give the physical arguments for his assumption (that Jaynes seemed to fail to either notice or understand).
I'll take Jaynes over Bell any day when it comes to who understands probability better. Take a look at De Raedt's recent article together with Hess (http://arxiv.org/abs/0901.2546) for a succinct explanation of Bell's error.

You may still say that local realism does not necessarily entail this assumption of Bell. Since "local realism" isn't something defined by a formula, this is in principle a meaningful claim. However I never saw any local realist model that would violate Bell's assumption (in the very particular example that Bell is discussing).
Don't you realize that the type of claim Bell is making about LR models requires that he MUST be absolutely sure that he has presented an exhaustive representation of ALL POSSIBLE LR models? I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions.

de Raedt's models of simulations of Aspect-Weihs experiments have no relation to this issue.
Don't forget that you already agreed that de Raedt's model is LR. So it is relevant. Bell's equations do not apply to a deterministic learning machine model like de Raedt's. How then can Bell claim that NO LR can reproduce the QM results?

You see, the problem with Bell's theorem is not that his conclusions cannot be drawn from his assumptions. The problem is that those conclusions are interpreted, by those who don't know better, beyond the scope of the assumptions on which they are based. For someone purporting to characterize all LR models, he chose a severely narrow and handicapped subset of LR to base his calculations on.
 
  • #84


What about the GHZ proof, then?
 
  • #85


Dear mn4j,
we are already going in circles. I will try to summarize my points as clearly as possible, and I would like to ask you to comment on each of them, whether you agree or not. If you still don't listen to what I'm saying, then it's better to stop this discussion.

1. All the experiments that have been done so far to test Bell's inequalities ARE. NOT. PERFECT. This is not something to agree or disagree with, it's just a fact, and it's admitted by everybody, including of course the experimenters themselves. Agreed?

2. These experiments definitely can't be presented "as proof of Bell's theorem", because they are not. Please stop asking me why they are presented in such a way! If anybody does so, he or she just doesn't understand anything here. Bell's theorem is a theoretical construct; it doesn't need to be proven by experiment. Experiment has to show whether Bell's inequalities are violated or not. Agreed?

3. I quote you: "I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions". OK, let's call all theories that Bell's theorem applies to "Bell local realistic" (BLR). Now, Bell's theorem says and proves that BLR theories should obey Bell's inequalities while QM violates them. Agreed?

4. Your main point seems to be that BLR is only a narrow subclass of LR theories. Well, I repeat: what "local realism" is, is a philosophical question. I'm personally quite happy to include in my notion of local realism the assumption that the outcomes of Bob's experiments are statistically independent of Alice's choice of experimental settings (this is Bell's assumption and the precise definition of BLR). You are not, right?

5. Since scientific consensus is that BLR and LR are the same thing, and you disagree, the only meaningful way to disagree is to give an example of a LR theory that is not BLR. Agreed?

6. In case you want to say that de Raedt's models are such kind of example, I repeat once again: NO, they ARE NOT. de Raedt showed (as was already known) that the experiments to test Bell's inequalities were not perfect (there are loopholes), and because of these experimental flaws their results can be explained in LR way. Agreed?

7. But (the crucial point!) de Raedt's model is obviously not only LR, but BLR as well! IF a loophole-free test of Bell's inequalities is ever done and IF Bell's inequalities are still found to be violated, then de Raedt won't be able to explain this with his model (why? because of Bell's theorem). Agreed? Please think a bit before answering.

I'm asking you to think, because you wrote that "Bell's equations do not apply to a deterministic learning machine model like de Raedt's". This is just plain wrong. Of course they do apply! De Raedt's model is perfectly BLR.

8. You mentioned de Raedt's recent article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown to be wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).

If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.

9. For some reason you completely ignored my point about de Raedt's double-slit paper. You were the first to mention it! Did you read it? If you did, could you please comment on what I said earlier? If you didn't, how come you use it in your arguments?


dk
 
Last edited by a moderator:
  • #86


kobak said:
8. You mentioned de Raedt's recent article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown to be wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).

If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.

Good post. I tried to find the meat in the argument and couldn't either. If anyone had a good counter-argument, they would put their proof up front rather than hide it. Meanwhile, we have the following history of experimental teams seeing violation of Bell Inequalities:

Aspect, 1982: 5 standard deviations.
Kwiat, 1995: 102 standard deviations.
Kurtsiefer, 2002: 204 standard deviations.
Barbieri, 2003: 213 standard deviations.

And since entanglement doesn't even exist in any local realistic theory (by definition), it is interesting to note that last year Vallone et al. were observing hyper-entanglement of photons in 3 independent degrees of freedom. Further, there have been numerous experiments involving time-bin entanglement, including with photons that have never interacted in the past - just the loophole de Raedt thought to exploit in his later paper (you would think experiments like this would finally end the search for an LR theory).

None of this could be predicted by any local realistic theory. On the other hand, all are predicted by QM. This is why looking for LR theories is a waste of time. It made sense up until the 1970's or so, but not since.
 
Last edited by a moderator:
  • #87


DrChinese said:
Meanwhile, we have the following history of experimental teams seeing violation of Bell Inequalities

Sorry, DrChinese, are you saying that some of these experiments were loophole-free? As far as I know, this has so far never been achieved.

If that is true, then exactly how many standard deviations are observed doesn't really matter. A strict believer in local realism can still say: there are this and that loopholes, and so the results can be explained in a sophisticated enough local realistic way. It could be 100000 standard deviations, or whatever. What is important (in the sense of putting a full stop to this discussion) is to conduct an experiment completely free of any loopholes.
 
  • #88


kobak said:
Sorry, DrChinese, are you saying that some of these experiments were loophole-free? As far as I know, this has so far never been achieved.

If that is true, then exactly how many standard deviations are observed doesn't really matter. A strict believer in local realism can still say: there are this and that loopholes, and so the results can be explained in a sophisticated enough local realistic way. It could be 100000 standard deviations, or whatever.

I disagree. What is being asserted is anti-scientific because in a sense, no experiment is loophole free. As you are undoubtedly aware, there are still experiments going on to test General Relativity. At least there, the competing theories (or versions of GR as you may call them) have key elements in common.

On the other hand, there is no existing candidate LR theory on the table to compare to QM at this time. Stochastic Mechanics (Marshall, Santos) is an example of a field of research in that regard, but every candidate SM model is found to have problems and is quickly modified again. And since such models do not predict anything useful, there is no incentive to study them further. We already have a very useful model - QM - and the experiments supporting it are in the thousands. Something useful from the field of study would go a long way towards convincing the scientific community.

So yes, I think quantity does matter, and I think utility matters. And I think the history of the area does matter as well, including when a theory (QM) is supported by improving technology. That doesn't mean that conventional thinking is always right. I just mean to say that science evolves towards ever more useful theories. I do not see how LR theories can ever hope to fall into that category (useful), since they deny the known phenomenon of entanglement. I mean, we are now at the point of entangling particles with no common history. Why don't the local realists acknowledge the obvious hurdle such experiments place on LR theories?

And as a practical matter, I disagree that loophole-free experiments have not been performed. In my opinion, the fair-sampling loophole has been closed (Rowe et al, 2001). In my opinion, the strict locality loophole has been closed (Weihs et al, 1998). Etc. Why should you need to close every loophole simultaneously if you can close each separately? If a prisoner cannot escape from the first lock by itself, and cannot escape from the second lock by itself, how can he escape when both locks are present? I don't disagree with a desire to close all loopholes simultaneously; but I think that is a standard that is being applied to Bell tests which is applied nowhere else in science. Surely you must have noticed this as well.
 
  • #89


DrChinese said:
I disagree. What is being asserted is anti-scientific because in a sense, no experiment is loophole free. As you are undoubtedly aware, there are still experiments going on to test General Relativity. ... And as a practical matter, I disagree that loophole-free experiments have not been performed. In my opinion, the fair-sampling loophole has been closed (Rowe et al, 2001). In my opinion, the strict locality loophole has been closed (Weihs et al, 1998). Etc. Why should you need to close every loophole simultaneously if you can close each separately? ... I don't disagree with a desire to close all loopholes simultaneously; but I think that is a standard that is being applied to Bell tests which is applied nowhere else in science. Surely you must have noticed this as well.

Three points. First: I'm not an expert on the Bell tests and loopholes issue, so I can't really comment on that at a detailed level. I know that there is, for example, the "time-coincidence" loophole (http://arxiv.org/abs/quant-ph/0312035), which is apparently exactly the loophole de Raedt is exploiting (http://arxiv.org/abs/quant-ph/0703120, the link I already gave). I'm not sure that all known loopholes have already been closed even separately, though this might be true. In particular, I just don't know any details about these "entanglement" studies that you cite (and don't have time at the moment to start reading them). Do they test Bell inequalities after this entanglement "swapping"? Or how else do these findings prove LR false?

Second: I guess that I slightly disagree with you about different standards of tests. Of course there are super-precise tests of GR still being done. But to test GR you need to observe something that is predicted by GR, like light deflection or whatnot. When this is observed, nobody claims that there's a "loophole" in the experiment and that the results can be interpreted such that light is not deflected. It's evident: nobody has heard of any loopholes in GR tests. On the other hand, to test QM versus LR one needs to show that Bell's inequalities are violated. And all the attempts to show this still have some loopholes that allow alternative explanations.

Third. Nobody in his right mind claims that QM is "wrong". For de Raedt, QM is a correct mathematical model working well on the ensemble level only, without saying anything about single events. He is not trying to show that QM is wrong, he is trying to show that it can be completed in a LR way. Well, we know that it's impossible due to Bell. But de Raedt obviously disagrees. And it doesn't make a lot of sense for me to defend de Raedt, but he is most definitely not a crackpot (he has done a huge amount of "real" work in computer simulations of different physical models, including decoherence etc.). I believe (as you do) that his reasoning about Bell is flawed, but he certainly does not try to obscure anything on purpose: I'm quite sure that he is honest.
 
Last edited by a moderator:
  • #90


kobak said:
1. All the experiments that have been done so far to test Bell's inequalities ARE. NOT. PERFECT. This is not something to agree or disagree with, it's just a fact, and it's admitted by everybody, including of course the experimenters themselves. Agreed?
Agreed! This is a fact. Not a single loophole-free experiment has ever been performed.
2. These experiments definitely can't be presented "as proof of Bell's theorem", because they are not. Please stop asking me why they are presented in such a way! If anybody does so, he or she just doesn't understand anything here.
Agreed!

Bell's theorem is a theoretical construct; it doesn't need to be proven by experiment. Experiment has to show whether Bell's inequalities are violated or not. Agreed?
No. I disagree. So long as Bell's inequalities purport to make claims about reality, the correspondence between those inequalities and reality MUST be independently validated by experiments before any claims they make about reality can be said to be proven.

3. I quote you: "I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions". OK, let's call all theories that Bell's theorem applies to "Bell local realistic" (BLR). Now, Bell's theorem says and proves that BLR theories should obey Bell's inequalities while QM violates them. Agreed?
Agreed without prejudice. Note that every loop-hole found to date is a hidden assumption in Bell's proof. I do not claim by agreeing to the above that all loop-holes have been found.

4. Your main point seems to be that BLR is only a narrow subclass of LR theories. Well, I repeat: what "local realism" is, is a philosophical question. I'm personally quite happy to include in my notion of local realism the assumption that the outcomes of Bob's experiments are statistically independent of Alice's choice of experimental settings (this is Bell's assumption and the precise definition of BLR). You are not, right?
Again, remember that every loophole is a hidden assumption of Bell's proof. The fact that there are loopholes tells you that BLR is not exhaustive of all LR.

5. Since scientific consensus is that BLR and LR are the same thing, and you disagree, the only meaningful way to disagree is to give an example of a LR theory that is not BLR. Agreed?
No. If you think the scientific consensus is that BLR and LR are the same thing, then you have not been paying attention; if they really were the same, this thread would not exist and the loopholes would not exist.

6. In case you want to say that de Raedt's models are such kind of example, I repeat once again: NO, they ARE NOT.
If you say de Raedt's models are not examples of LR which are not accounted for by Bell's LR, I repeat once again: YES THEY ARE. You see, this kind of discussion takes us nowhere. Explain why they are not.

de Raedt showed (as was already known) that the experiments to test Bell's inequalities were not perfect (there are loopholes), and because of these experimental flaws their results can be explained in LR way. Agreed?
That is a very narrow reading of de Raedt's work. Did you completely fail to understand the importance of the Deterministic Learning Machine model of de Raedt's?

7. But (the crucial point!) de Raedt's model is obviously not only LR, but BLR as well!
If de Raedt's model is BLR, then how do you explain the fact that the model violates the inequality, when according to Bell that is impossible? Think before you answer. If you want to say that it will violate the inequality only under certain conditions, then you still face the question of how come some BLR model violates the inequality under certain conditions. There is no escaping here.

IF a loophole-free test of Bell's inequalities is ever done and IF Bell's inequalities are still found to be violated, then de Raedt won't be able to explain this with his model (why? because of Bell's theorem). Agreed? Please think a bit before answering.
This is circular reasoning. A loophole-free test of Bell's inequality is required to validate the inequality in the first place. Violation of Bell's inequality in any experiment has two possible explanations, not just one:
1) That Bell's inequality is a correct representation of local reality and the experiment is either not real or not local or both
2) That Bell's inequality is not a correct representation of local reality.

Now for some reason, Bell's followers ALWAYS gravitate towards (1). Do you agree that (2) is also a possibility and MUST be considered together with (1) when interpreting the results of these experiments? Please, I need a specific answer to this question.

I'm asking you to think, because you wrote that "Bell's equations do not apply to a deterministic learning machine model like de Raedt's". This is just plain wrong. Of course they do apply! De Raedt's model is perfectly BLR.
You have no idea what you are talking about. Even ardent Bell believers have shown that not all LR are accounted for in BLR. See http://arxiv.org/abs/quant-ph/0205016 for one example. Bell's starting equation is the following:
P(AB) = \sum_i P(A|a\lambda_i) P(B|b\lambda_i) P(\lambda_i)
This equation does not apply in situations in which \lambda_{i+1} is dependent on \lambda_i, as is the case in de Raedt's model. The reason is simple: if case (i) and case (i+1) are not mutually exclusive, you cannot integrate or, as in this case, perform the sum the way Bell did.

8. You mentioned de Raedt's recent article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown to be wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).
This explains why you will never understand him. Apparently, as soon as you see Hess or de Raedt, you put on green goggles. The article you posted as disproving Hess is nothing short of a joke. (See http://arxiv.org/abs/quant-ph/0307092).
If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.
If you are interested in understanding the opposing position, you will make the effort to read and understand their arguments before purporting to refute it. Since it appears you have access to de Raedt personally, why don't you ask him to explain to you concisely what the article is talking about. That will be much better than any of my efforts to explain his work to a hostile audience.
9. For some reason you completely ignored my point about de Raedt's double-slit paper. You were the first to mention it! Did you read it? If you did, could you please comment on what I said earlier? If you didn't, how come you use it in your arguments?
You claimed that some photons were lost in their double slit simulation. This is wrong! All photons reach the detector and affect the outcome of the experiment. Maybe what you were trying to say is that in their model, not all photons result in a click. In any case, do you have experimental evidence proving that all photons leaving the source MUST result in a click at the detector in a double slit experiment?
 
Last edited by a moderator:
  • #91


DrChinese said:
On the other hand, there is no existing candidate LR theory on the table to compare to QM at this time.
You are mischaracterizing the debate as one between LR and QM. It is NOT.

Stochastic Mechanics (Marshall, Santos) is an example of a field of research in that regard, but every candidate SM model is found to have problems and is quickly modified again. And since such models do not predict anything useful, there is no incentive to study them further. We already have a very useful model - QM - and the experiments supporting it are in the thousands. Something useful from the field of study would go a long way towards convincing the scientific community.

So yes, I think quantity does matter, and I think utility matters. And I think the history of the area does matter as well, including when a theory (QM) is supported by improving technology.
The utility of a theory says nothing about its correctness. The system of epicycles was very useful in the dark ages, but you wouldn't claim it is a correct theory. Technology always precedes theoretical understanding.

Why should you need to close every loophole simultaneously if you can close each separately? If a prisoner cannot escape from the first lock by itself, and cannot escape from the second lock by itself, how can he escape when both locks are present?
You answer your own question. The papers you mentioned are just prisoners claiming to have escaped because they were able to open one of seven locks. Unless you can open all seven locks you can't reasonably claim to have escaped, even if you change your name to Houdini.

I don't disagree with a desire to close all loopholes simultaneously; but I think that is a standard that is being applied to Bell tests which is applied nowhere else in science. Surely you must have noticed this as well.
It is common sense. The standard is demanded by the claims made by Bell. Extraordinary claims require extraordinary evidence. If you claim there is no green stone on Jupiter, you had better get your ducks in a row and be sure you have combed every micrometer of the planet before you can say your experiment proves the claim. Yet if you claim there is a white stone in Alabama, all you have to do is find one white stone anywhere in Alabama to prove your claim. Bell says NO LR can violate his inequality.
 
  • #92


kobak said:
Three points. First. I'm not an expert in Bell tests and the loopholes issue, so I can't really comment on that at a detailed level. I know that there is, for example, the "time-coincidence" loophole (http://arxiv.org/abs/quant-ph/0312035), which is apparently exactly the loophole de Raedt is exploiting (http://arxiv.org/abs/quant-ph/0703120, the link I already gave). I'm not sure that all known loopholes have already been closed even separately, though this might be true. In particular, I just don't know any details about these "entanglement" studies that you cite (and don't have time at the moment to start reading them). Do they test Bell inequalities after this entanglement "swapping"? Or how else do these findings prove LR false?

Second. I guess that I slightly disagree with you about different standards of tests. Of course there are super-precise tests of GR still being done. But to test GR you need to observe something that is predicted by GR, like light deflection or whatnot. When this is observed, nobody claims that there's a "loophole" in the experiment and that the results can be interpreted such that light is not deflected. It's evident: nobody has heard of any loopholes in GR tests. On the other hand, to test QM versus LR one needs to show that Bell's inequalities are violated. And all the attempts to show this still have some loopholes that allow alternative explanations.

Third. Nobody in his right mind claims that QM is "wrong". For de Raedt, QM is a correct mathematical model working well on the ensemble level only, without saying anything about single events. He is not trying to show that QM is wrong, he is trying to show that it can be completed in a LR way. Well, we know that it's impossible due to Bell. But de Raedt obviously disagrees. And it doesn't make a lot of sense for me to defend de Raedt, but he is most definitely not a crackpot (he has done a huge amount of "real" work in computer simulations of different physical models, including decoherence etc.). I believe (as you do) that his reasoning about Bell is flawed, but he certainly does not try to obscure anything on purpose: I'm quite sure that he is honest.

A couple of comments, and by the way I doubt our positions are very different overall.

Any theory, including GR, can be attacked as lacking loophole free experimental support by a sufficiently motivated scientist. The concept would be to deny an essential element of the theory, and then try to show that somehow the experiment "could" be wrong even if the evidence is convincing by normal scientific standards. What if GR readings are not a fair sample? Maybe Newtonian physics is correct instead because there is a built-in sample bias. (I am only kidding of course.)

The entanglement swapping issue is really just another of the hurdles any LR theory must explain. Two photon pairs A1/A2 and B1/B2 are created independently. By performing a suitable partial Bell State Measurement (BSM) on one photon from each pair (A1 & B1), their partners A2 and B2 become partially entangled (as to time bin). (So in this particular case polarization is not swapped, but that has been done as well by Pan, Zeilinger et al.) So the question is: how do the local hidden variables guiding A2 and B2 - which have never been in causal contact - manage to be correlated? That's not even a process that a non-local Bohmian (dBB) type theory has an easy time with.

As to intellectual honesty: no assertion is being made to the negative. I just ask why someone in that position wouldn't make the source of the delta between the LR model and QM obvious. That is the first thing we all look for. And yet I always find myself reading a lecture on the wrongs of Bell while looking for that little detail I know is there somewhere. The author, I would think, would know what that detail is.

I always ask myself: what would Einstein have thought about Bell or Aspect? If he were alive today, I think he would be well convinced and would cede the essential point.
 
  • #93


mn4j said:
The utility of a theory says nothing about its correctness. The system of epicycles was very useful in the dark ages, but you wouldn't claim it is a correct theory. Technology always precedes theoretical understanding.

That's wrong: how are theories judged correct? There is no such standard. Theories can have experimental support, and theories can make predictions that can be tested. And that is how they are judged. Correctness implies black and white, right or wrong. Theories can be better or worse depending on their application. But I cannot meaningfully say a theory is correct.

As to technology preceding theory: that makes no sense at all. Sometimes it does, sometimes it doesn't. There is no historical absolute on this. So again, meaningless.
 
  • #94


Thanks for replying. In the beginning I started answering and addressing all the points where we disagree, but this is getting too huge and difficult to handle (so I'll concentrate on the main issue). But let me first say one thing. I'm not a "hostile audience", because I sincerely try to understand what de Raedt is saying. But it's extremely difficult to talk with you because you're constantly being very sloppy. Here's an example:

If you say de Raedt's models are not examples of LR, I repeat once again: YES THEY ARE. You see, this kind of discussion takes us nowhere. Explain why they are not.

Are you joking? I have been saying all along that I think that de Raedt's models are LR. Well, I think that you just miswrote something here, but it's quite difficult to decipher you sometimes. Another example:

2) That Bell's inequality is not a correct representation of local reality.
Now for some reason, Bell's followers ALWAYS gravitate towards (1). Do you agree that (2) is also a possibility

What on Earth is this second option supposed to mean at all? "Bell's inequality is not a representation of local reality"? Eh? I guess that what you mean here is that BLR is not LR. But it's really a pain to guess what you meant all the time.

Now, it's clear that the *MAIN POINT* is that you think there are LR theories that are not BLR. And you think that de Raedt's model is an example. I don't see why it's not BLR, I think it's absolutely BLR, and I think that I never saw any LR-but-not-BLR suggestion. And without an example I won't believe that that's possible. Here's your objection:

If de Raedt's model is BLR, then how do you explain the fact that the model violates the inequality, when according to Bell that is impossible?

Well, my answer is simple: it does not violate the inequality. The inequality only "seems" to be violated in this particular experimental setup because a certain post-selection procedure is applied. It's possible to create correlation by post-selecting; that's what this whole coincidence loophole is about!
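As a purely illustrative sketch of that last point, here is a toy local model in Python: outcomes and detection times on each side are computed only from that side's setting and the shared hidden variable, and pairs are kept only when the two times fall within a coincidence window W. The delay rule is hypothetical (loosely in the spirit of such models, not de Raedt's actual equations); the point is only that the post-selected correlation differs from the full-sample one -- for these settings the narrow-window value comes out around 0.7 in magnitude while the full-sample value is around 0.5.

```python
import math
import random

def correlations(a, b, n=200_000, W=0.01, T0=1.0):
    """Toy local-realist run: each outcome and each detection time is computed
    only from the local setting and the shared hidden variable lambda.
    Returns E(a,b) on the full sample and on the coincidence-selected subsample."""
    full, selected = [], []
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)            # shared hidden variable
        A = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        B = 1 if math.cos(2 * (b - lam)) >= 0 else -1
        # hypothetical local delay rule -- NOT de Raedt's actual equations
        t1 = T0 * math.sin(2 * (a - lam)) ** 4
        t2 = T0 * math.sin(2 * (b - lam)) ** 4
        full.append(A * B)
        if abs(t1 - t2) < W:                          # coincidence window
            selected.append(A * B)
    E_full = sum(full) / len(full)
    E_sel = sum(selected) / len(selected) if selected else float("nan")
    return E_full, E_sel

if __name__ == "__main__":
    for a, b in [(0.0, math.pi / 8), (0.0, 3 * math.pi / 8)]:
        E_full, E_sel = correlations(a, b)
        print(f"|a-b| = {abs(a - b):.3f}: full-sample E = {E_full:+.3f}, "
              f"post-selected E = {E_sel:+.3f}")
```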

You have no idea what you are talking about. Even ardent Bell believers have shown that not all LR are accounted for in BLR. See http://arxiv.org/abs/quant-ph/0205016 for one example.

Thanks for giving this link, it's actually interesting. I don't see, though, how it proves your point. The authors clearly write that the "memory loophole" they're describing can be avoided in experiment. If it's avoided along with the other loopholes -- goodbye LR.

You claimed that some photons were lost in their double slit simulation. This is wrong! All photons reach the detector and affect the outcome of the experiment. Maybe what you were trying to say is that in their model, not all photons result in a click. In any case, do you have experimental evidence proving that all photons leaving the source MUST result in a click at the detector in a double slit experiment?

Yes, I meant exactly this: not all photons result in a click. I don't have evidence, but it could be obtained. I described two experiments that could check this model. Take a look at the second. De Raedt says that if the screen is moved back and forth, the interference picture will get smeared. If this experiment is done and the interference is NOT smeared, then de Raedt himself said that he would "retire", which I guess means that he will admit that his models are totally wrong and give up. And what would you say in this case?
 
  • #95


DrChinese, yes, I think that our positions regarding the main points here are the same. And I must say that I myself have also wondered many times about what "would Einstein have thought about Bell or Aspect"...
 
  • #96


mn4j said:
No. I disagree. So long as Bell's inequalities purport to make claims about reality, the correspondence between those inequalities and reality MUST be independently validated by experiments before any claims they make about reality can be said to be proven.

In photon polarisation experiments, the correspondences between Bell's inequality and reality are that a detection is recorded as 1 and an absence of detection as 0, and that nothing done outside the past light-cone of an event has any observable consequence on that event, which corresponds to the fact that in Bell's theorem A does not depend on beta and B does not depend on alpha.

The second correspondence is validated by experiments that show that nothing can go faster than light.
The first correspondence does not have to be experimentally validated. You don't have to prove that you record 1 for a detection and 0 otherwise. We believe you!

mn4j said:
Agreed without prejudice. Note that every loophole found to date is a hidden assumption in Bell's proof. I do not claim, by agreeing to the above, that all loopholes have been found.

Action of the detector on the source, disproven by Aspect with an ultra-fast switch, was not a hidden assumption; it was the explicit assumption that A does not depend on beta and conversely.
The fair-sampling loophole was not one either: Bell's theorem applies to the means of all measurements, not only some of them.
The statistics loophole has been closed by the GHZ evidence.

I've not studied all of this, but not all loopholes were hidden assumptions in Bell's theorem. Actually, it seems to me that most loopholes claimed to be found in Bell's theorem, rather than in the experiments, were unfounded. The CHSH generalisation of Bell's theorem makes things clearer: it takes into account anything that can happen around the measurement as a hidden variable.

mn4j said:
If you say de Raedt's models are not examples of LR which are not accounted for by Bell's LR, I repeat once again: YES THEY ARE. You see, this kind of discussion takes us nowhere. Explain why they are not.

They violate Bell's inequality because Cxy depends on both t(n,1) and t(n,2) (equation 3), which is not the case in Bell's theorem. In Bell's theorem, Cxy depends only on the product of the measurement results (the Kronecker deltas in equation 3).

The role of t(n,1) and t(n,2) is to introduce a measurable individual dependence on the measurement angles, while they have no effect on the individual spin results.

Technically, this means Cxy is no longer Bell's coincidence rate. It has more to do with "what we measure" than with "what locality is".
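Schematically (this is just a transcription of the description above into symbols, not de Raedt's equation 3 verbatim), the two coincidence counts being contrasted are

C_{xy}^{\mathrm{Bell}} \;\propto\; \sum_{n} \delta_{x,A_n}\,\delta_{y,B_n}

versus

C_{xy}^{\mathrm{dR}} \;\propto\; \sum_{n} \delta_{x,A_n}\,\delta_{y,B_n}\,\Theta\!\left(W - |t_{n,1} - t_{n,2}|\right),

where \Theta is the Heaviside step function and W the coincidence window. The extra \Theta factor is what makes the second count depend on t(n,1) and t(n,2).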

mn4j said:
That is a very narrow reading of de Raedt's work. Did you completely fail to understand the importance of the Deterministic Learning Machine model of de Raedt's?

De Raedt's pseudorandom model works without any Deterministic Learning Machine, and perfectly predicts the violation of Bell's inequality! DLMs are not involved in this step.
DLMs are there to restore determinism, after the previous step has restored locality.

Moreover, I'm not sure of it, but it seems to me that DLMs would be accounted for as hidden variables in the general CHSH proof of Bell's theorem of 1969:
This generalisation attributes hidden variables not only to the particles, but also to the measurement devices. For this purpose the result A, a function of the hidden variable lambda and of the angle alpha with value -1 or +1, is replaced by the average value of A, a function of alpha and lambda, over all hidden variables of the measurement device, and we start with
|\bar{A}(\alpha,\lambda)| \leq 1 (and respectively for B...).
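If that reading is right, the CHSH step is compact (my notation; \mu stands for the hidden variables of the measurement device, so any learning-machine state would simply be part of \mu):

\bar{A}(\alpha,\lambda) = \int A(\alpha,\lambda,\mu)\,\rho(\mu)\,d\mu, \qquad |\bar{A}(\alpha,\lambda)| \leq 1,

and likewise for \bar{B}(\beta,\lambda). Then

S = \int \left[\bar{A}(\alpha,\lambda)\bar{B}(\beta,\lambda) + \bar{A}(\alpha,\lambda)\bar{B}(\beta',\lambda) + \bar{A}(\alpha',\lambda)\bar{B}(\beta,\lambda) - \bar{A}(\alpha',\lambda)\bar{B}(\beta',\lambda)\right]\rho(\lambda)\,d\lambda

satisfies |S| \leq 2, using only |\bar{A}| \leq 1 and |\bar{B}| \leq 1.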

mn4j said:
Violation of Bell's inequality in any experiment has two possible explanations, not just one.
1) That Bell's inequality is a correct representation of local reality and the experiment is either not real or not local or both
2) That Bell's inequality is not a correct representation of local reality.

Now for some reason, Bell's followers ALWAYS gravitate towards (1). Do you agree that (2) is also a possibility and MUST be considered together with (1) when interpreting the results of these experiments? Please, I need a specific answer to this question.

I myself agree, but case 2 deals with what we do, while case 1 deals with what we get.

In de Raedt's simulation, Cxy is not the coincidence rate defined in Bell's theorem. That is why Bell's inequality does not represent what's going on in the simulation.
If the simulation is a good representation of reality, then the experiment can be modified so as to make W big enough compared to |t(n,1) - t(n,2)| in equation 3, so that the Heaviside function is always equal to 1 and Cxy tends to Bell's definition of the product of measurements.
This way, we get the experiment back in agreement with Bell's theorem (case 2 is discarded), and we can test local determinism.
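That limiting behaviour is easy to check numerically with the same kind of toy model used in the sketch earlier in the thread (a hypothetical local delay rule, not de Raedt's actual one): once W exceeds the largest possible |t(n,1) - t(n,2)|, the Heaviside factor equals 1 for every pair and the windowed correlation reduces to the plain average of the products over all pairs, i.e. the Bell-type count.

```python
import math
import random

def E_windowed(a, b, W, n=200_000, T0=1.0):
    """Correlation of a toy local model when only pairs inside the
    coincidence window W are counted (the Heaviside factor in Cxy)."""
    num = den = 0
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)               # shared hidden variable
        A = 1 if math.cos(2 * (a - lam)) >= 0 else -1    # local outcome, side 1
        B = 1 if math.cos(2 * (b - lam)) >= 0 else -1    # local outcome, side 2
        t1 = T0 * math.sin(2 * (a - lam)) ** 4           # hypothetical local delays,
        t2 = T0 * math.sin(2 * (b - lam)) ** 4           # NOT de Raedt's equation 3
        if abs(t1 - t2) < W:                             # Theta(W - |t1 - t2|)
            num += A * B
            den += 1
    return num / den

if __name__ == "__main__":
    a, b = 0.0, math.pi / 8
    for W in (0.01, 0.1, 1.0, 10.0):   # W >> T0 accepts every single pair
        print(f"W = {W:>5}: E = {E_windowed(a, b, W):+.3f}")
```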

Another, sad, example: Joy Christian's use of Clifford algebra to prove Bell wrong (http://arxiv.org/abs/quant-ph/0703179). Christian uses the spin-1/2 model, where Bell's theorem is applied by setting spin down = -1 and spin up = +1.
He starts from the hypothesis that spin down and spin up are not real numbers, but elements of a Clifford algebra. He then shows that S can exceed 2.

Since Bell's theorem says nothing more than that if the possible results are -1 or +1 then S <= 2, Christian's result is trivial and useless!
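For reference, the bound being called trivial really is that short: for each value of the hidden variables, with A, A', B, B' \in \{-1,+1\},

AB + AB' + A'B - A'B' = A(B + B') + A'(B - B') = \pm 2,

since one of B+B' and B-B' is 0 and the other is \pm 2; averaging over the hidden variables then gives |S| \leq 2. The derivation uses nothing except that the outcomes are the real numbers \pm 1, which is exactly the premise Christian replaces.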
 
  • #97


DrChinese said:
That's wrong: how are theories judged correct? There is no such standard. Theories can have experimental support, and theories can make predictions that can be tested. And that is how they are judged. Correctness implies black and white, right or wrong. Theories can be better or worse depending on their application. But I cannot meaningfully say a theory is correct.

As to technology preceding theory: that makes no sense at all. Sometimes it does, sometimes it doesn't. There is no historical absolute on this. So again, meaningless.
I guess then you believe the system of epicycles is an accurate representation of the solar system and the motion of the planets!
 
  • #98


kobak said:
What on Earth is this second option supposed to mean at all? "Bell's inequality is not a representation of local reality"? Eh? I guess that what you mean here is that BLR is not LR. But it's really a pain to guess what you meant all the time.
If you have thought it through clearly enough, you will know what the second option means. Let me put it to you in layman's terms.

A man was depressed to the point that he believed he was dead. No matter the efforts of his family, he kept saying he was dead. A smart doctor tried to convince him that dead men do not bleed. After significant effort, he accepted this. But at that moment the doctor pierced him with a needle and he started bleeding. Can you guess what his next statement was? The doctor had hoped he would say "I am alive". Instead he said, "Oops, I guess dead men bleed after all".

The violation of Bell's inequality only proves that the assumptions used in deriving the inequality do not apply to the experiment in question. Do you agree? Please give me a specific answer to this.

Those assumptions include assumptions about the way probabilities of local realist variables are supposed to be calculated. So in effect, Bell has an untested presentation of how local realist theories are supposed to behave. Yet when the inequalities are violated, instead of re-evaluating those assumptions, Bell proponents scream "I guess dead men bleed after all".

Now, it's clear that the *MAIN POINT* is that you think there are LR theories that are not BLR. And you think that de Raedt's model is an example. I don't see why it's not BLR, I think it's absolutely BLR, and I think that I never saw any LR-but-not-BLR suggestion. And without an example I won't believe that that's possible.
I already mentioned why this is not a constructive criticism. I also explained in my previous post to you why de Raedt's model is not accounted for by Bell. The article I presented by a pro-Bellist clearly states that Bell's model does not account for models like de Raedt's which have memory effects, notwithstanding the conclusion of that paper. Also, your claim that you have never seen any suggestion that Bell's representation of LR is not exhaustive is surprising, because such suggestions are numerous in the literature. This thread was started by one such suggestion, Hess has presented a few, Joy Christian has presented a few. Are you serious?

Well, my answer is simple: it does not violate the inequality. The inequality "seems" to be violated in the particular experimental setup because the certain post-selection procedure is applied. It's possible to create correlation by post-selecting, that's what this whole coincidence loophole is about!
What experimental setup? It is a simulation. What aspect of the simulation do you claim deviates from how real experiments are actually performed?
Thanks for giving this link, it's actually interesting. I don't see though how it proves your point.
Do you agree that, according to this article, Bell's theory does not account for models with memory effects? The authors state as much, even though they end up dismissing its importance. At least they were honest enough to admit that Bell's theory does not account for such local realist theories, which you apparently are still unwilling to do.
The authors clearly write that the "memory loophole" they're describing can be avoided in experiment. If it's avoided along with the other loopholes -- goodbye LR.
That is beside the point. Do they or do they not state that Bell's model of LR does not apply to situations in which there are memory effects? Please answer this question.

Yes, I meant exactly this: not all photons result in a click. I don't have an evidence, but it could be obtained. I described two experiments that could check this model.
You don't have evidence, yet you are ready and willing to proclaim proudly that de Raedt's model is wrong on this basis? Isn't it more prudent to wait until you have obtained such evidence before you make such claims?

Take a look at the second. De Raedt says that if the screen is moved back and forth, the interference picture will get smeared. If this experiment is done and interference is NOT smeared, then de Raedt himself said that he would "retire", which I guess means that he will admit that his models are totally wrong and give up. And what would you say in this case?
Do you believe the interference will get smeared if the slits are moved back and forth? What about the source?
 
  • #99


mn4j said:
I guess then you believe the system of epicycles is an accurate representation of the solar system and the motion of the planets!

The map is not the territory, my friend. Theory is always a model (map). And some are better than others.
 
  • #100


Hello again, mn4j,
I suggest that if we continue this discussion at all, we try to concentrate on the most important points only and also try to avoid nitpicking each other (like what "scientific consensus" really means, etc.). I'm quite sure that we won't reach an agreement, but we can at least pinpoint our disagreements.

mn4j said:
The violation of Bell's inequality only proves that the assumptions used in deriving the inequality do not apply to the experiment in question. Do you agree? Please give me a specific answer to this.

Yes, certainly.

Those assumptions include assumptions about the way probabilities of local realist variables are supposed to be calculated. So in effect, Bell has an untested presentation of how local realist theories are supposed to behave.

Well, let me put it a bit differently. I've been always repeating here that "local realism" is not a well-defined term. So strictly speaking you're right: it might not be fully correct to say (without any additional clarifications) that Bell's theorem proves that all LR theories should obey Bell's inequality. I hope you'll be happy that I agree with you here.

However, here's my main point: Bell derives his technical assumption about the probability distributions by providing a particular *physical* intuition. This technical assumption that he uses is certainly always true and absolutely uncontroversial in all areas of classical (meaning non-quantum) physics. It's just a plain fact that in all of classical physics the outcome of Bob's experiment can never be statistically dependent on Alice's choice of experimental setup etc., so Bell's assumption holds. So let me drop the issue of "local realism" and make the following claim instead: Bell's theorem shows that, *assuming "classicality"*, his inequality has to be true.

Do you agree with such a statement? Note that even if de Raedt's model turns out to (a) not be BLR, (b) violate Bell's inequalities, and (c) be just right (I still strongly disagree that this is possible, but even if it is), then this is certainly not a "classical" model. In classical physics apparatuses do not learn. The same goes for Christian's models: if they are right, then ok, spin measurements form a Clifford algebra (whatever this means), and this is again certainly not a "classical" model.

So, to recapitulate: do you agree that Bell's technical assumptions about correlations are completely well motivated, if we change the assumption of "local realism" to an assumption of "classicality"? I very much hope that you will agree to that.
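For concreteness, the technical assumption I have in mind is the standard factorization condition,

P(A, B \mid \alpha, \beta, \lambda) = P(A \mid \alpha, \lambda)\,P(B \mid \beta, \lambda),

i.e. once the hidden variables \lambda are fixed, each outcome depends only on its own local setting, and the two outcomes are statistically independent of each other. That is the sense in which I mean "classicality" above.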

---------

Now, apart from this, I see two main issues. First: I claim that de Raedt's model is BLR (and hence has to obey Bell's inequalities, and hence will no longer be tenable once a loophole-free violation of the inequalities is observed). You're saying that his model is not BLR, and your argument is that this is even stated in the Barrett-Popescu article. Well, I have to read it more carefully to answer you here. I will try to find the time for that and get back then; that's important.

Second point: you also say that de Raedt's model is not BLR because it violates Bell's inequalities. Here I disagree strongly, and I think this shows that you don't really understand what the "coincidence loophole" is. Here, again, is how I see it:
1. De Raedt's simulation does not deviate from the real experiment of Weihs et al (you asked me how it deviates; well, it doesn't).
2. The coincidence loophole (which this experiment did not avoid) means that it's possible to explain the apparent violation of Bell's inequality by the fact that events are post-selected, and because of this post-selection the correlation is created out of nothing.
3. This is exactly what de Raedt is exploiting.
4. Bottom line of this analysis: de Raedt's model is BLR, it obeys Bell's inequalities, but in a non-perfect, loopholed experiment it can LOOK like it violates them.
This is the view expressed here: http://arxiv.org/abs/quant-ph/0703120 (see also http://arxiv.org/abs/quant-ph/0312035 about the coincidence loophole). Do you understand this argumentation? You may disagree (please tell me where exactly), but do you understand it?

Let me also ask for a clarification of your point of view. Do you think that Bell's inequalities are in reality NOT violated (and all experimental violations are only due to loopholes)? Or do you think that his inequalities in reality ARE violated (so that even an ideal, perfect experiment will find violations), but that these violations can be explained by some LR theory which is not accounted for by Bell's theorem? I guess yours is the latter opinion. Which means that even a perfect loophole-free experiment will not prove anything to you, right? Why are you then arguing about loopholes and these "prisoners who can get out in different ways" at all?

---------

Finally, about double-slit paper.

You don't have evidence, yet you are ready and willing to proclaim proudly that de Raedt's model is wrong on this basis? Isn't it more prudent to wait until you have obtained such evidence before you make such claims?

It is. I don't proclaim that his model is wrong on this basis; I'm proposing a bet (let's put it that way). Imagine this experiment is done exactly as de Raedt himself proposed it (the screen is jittered from left to right, parallel to itself). Question: what will happen? My bet: the interference pattern doesn't change. "De Raedt's" bet: the interference pattern gets smeared, because the "detectors" on the screen won't have enough time to "learn". Your bet?

And additionally: imagine that the experiment is made and I win. What would that mean? I think that de Raedt thinks that it would prove his model false, and he certainly does not believe that this outcome is possible AT ALL. What do you think?

Please don't ask questions about different experimental setups, I'm interested only in this one, that is defined absolutely precisely.
 