Bell's theorem and probability theory

The discussion centers on the validity of Bell's theorem and its assumptions regarding probability theory. It argues that Bell's conclusions are flawed due to misunderstandings of logical versus physical independence, particularly in local causal theories. The example of a monkey drawing balls from an urn illustrates that the conditional probabilities Bell proposes do not hold true under certain conditions. Furthermore, the conversation critiques the interpretation of correlations in quantum mechanics, suggesting that if local hidden variables existed, they would not align with the predictions of Bell's inequalities. Ultimately, the dialogue emphasizes the need for a clearer understanding of probability in the context of quantum mechanics and Bell's theorem.
  • #61


Hi mn4j, apologies for not replying to your last post before now, I started it a while ago but realized it would require a somewhat involved response, so I kept putting off writing it for weeks. Anyway, I've finally finished it up:
mn4j said:
You still have not said why it should not be separate. It would seem that if Bell's proof was robust, it should be able to accommodate hidden variables at the sources in addition to source parameters.
It is able to do so. I already said "the hidden variables can be included in \lambda"--did you miss that, or are you not understanding it somehow?
mn4j said:
It should tell you a lot that the hidden variables must be defined a specific way in order for the proof to work.
Physically the hidden variables can be absolutely anything, but for the proof to work you do need to assign them separate variables from the experimental choices. This is like just about any proof where you can't redefine terms willy-nilly and expect it to still make sense. If the proof is mathematically and logically valid, then you have to accept the conclusions follow from the premises, you can't somehow object to it on the basis that you wish the symbols meant different things than what they are defined to mean.
mn4j said:
Since you are the one claiming that Bell's proof eliminates all possible local-hidden variable theorems, the onus is on you to explain why the stations should not be able to get separate local hidden variables.
Do you understand the difference between "the stations should not be able to get separate local hidden variables" and "there can be hidden variables associated with the stations, but the symbols used to refer to them should be separate from the symbols used to refer to the experimenters' choice of measurement angles"? Remember, each value of \lambda is supposed to stand for an array of values for all the hidden variables--we are supposed to have some function that maps values of \lambda to a (possibly very long) list of values for all the different physical variables which may be in play, like "\lambda=3.8 corresponds to hidden variable #1 having value x=7.2 nanometers, hidden variable #2 having value 0.03 meters/second, hidden variable #3 having value 34 cycles/second, ... , hidden variable #17,062,948,811 having value 17 m/s^2", something along those lines. There's no reason at all why the long list of values included in a given value of \lambda can't be values of hidden variables associated with the measuring-device.
mn4j said:
You don't recognize a simple wave equation?
Of course I recognize a wave equation--you weren't tipped off by the fact that I immediately suggested the idea of particles being bobbed along by an electromagnetic plane wave? My point was that I wanted to see a well-defined physical scenario, compatible with local realism, in which the equation would actually apply to physical elements with a spacelike separation, but no common cause in their mutual past light cone to explain why they were both obeying this equation (for example, in the example of two particles at different locations being bobbed up and down by an electromagnetic plane wave, the oscillations of the charges which generated this wave would lie in the overlap of the past light cones). However, I've since realized that they might both be synchronized just because of coincidental similarity in their initial conditions, so I've modified my comments about the relevance to Bell's theorem accordingly--see below.
mn4j said:
It is relevant. Especially since we know about wave-particle duality. It should tell you that we do not need psychokinesis to explain correlations between distant objects.
Of course wave-particle duality is part of QM, and you can't treat it as a foregone conclusion that QM itself is compatible with local realism.
JesseM said:
It should be obvious that in a relativistic universe, any correlation between events with a spacelike separation must be explainable in terms of other events in the overlap of their past light cones. If you disagree, please give a detailed physical model of a situation in electromagnetism (the only non-quantum relativistic theory of forces I know of) where this would not be true.
mn4j said:
You obviously have not thought it through well enough. Two objects can be correlated because they are governed by the same physical laws, whether or not they share a common past or not. This is obvious.
If two experimenters at a spacelike separation happen to choose to do the same experiment, then since the same laws of physics govern them they'll get correlated results--but this is a correlation due to the coincidence of their happening to independently replicate the same experiment, not the type of correlation where seeing the results of both experimenters' measurements tells us something more about the system being measured than we'd learn from just looking at the results that either experimenter gets on their own. This does show that my statement above is too vague, though, and needs modification. One way to sharpen things a little would be to specify that we're talking about experiments where even with the same settings the experimenters can get different results on different trials, with the results being seemingly random and unpredictable; if we find that the results of the two experimenters are nevertheless consistently correlated, with a spacelike separation between pairs of measurements, this is at least strongly suggestive of the idea that each result was conditioned by events in the past light cones of the two measurements. But this is still not really satisfactory, because in principle there might actually be some hidden deterministic pattern behind the seemingly random results, and it might be that the two systems they were studying coincidentally had identical and synchronized deterministic patterns (for example, they might both be looking at a series of numbers generated by a pseudorandom deterministic computer program, with the programmers at different locations coincidentally having written exactly the same program without having been influenced to do so by a common cause in their mutual past light cone). So, back to the drawing board!

Let me try a different tack. Consider the claim I was making about correlations in a local realist universe earlier, which you were disputing for a while but then stopped after my post #51, so I'm not really sure if I managed to convince you with that post...here's the statement from post #51:
In a universe with local realist laws, the results of a physical experiment on any system are assumed to be determined by some set of variables specific to the region of spacetime where the experiment was performed. There can be a statistical correlation (logical dependence) between outcomes A and B of experiments performed at different locations in spacetime with a spacelike separation, but the only possible explanation for this correlation is that the variables associated with each system being measured were already correlated before the experiment was done ... Do you disagree? If so, try to think of a counterexample that we can be sure is possible in a local realist universe (no explicitly quantum examples).

If you don't disagree, then the point is that if the only reason for the correlation between A and B is that the local variables \lambda associated with system #1 are correlated with the local variables associated with system #2, then if you could somehow know the full set of variables \lambda associated with system #1, knowing the outcome B when system #2 is measured would tell you nothing additional about the likelihood of getting A when system #1 is measured. In other words, while P(A|B) may be different from P(A), P(A|B\lambda) = P(A|\lambda). If you disagree with this, then I think you just haven't thought through carefully enough what "local realist" means.
Note that I put some ellipses in the quote above; the statement I removed was "that the systems had 'inherited' correlated internal variables from some event or events in the overlap of their past light cones". I want to retract that part of the post since it does have some problems as you've pointed out, but I stand by the rest. The statement about "variables specific to the region of spacetime where the experiment was performed" could stand to be made a little clearer, though. To that end, I'd like to define the term "past light cone cross-section" (PLCCS for short), which stands for the idea of taking a spacelike cross-section through the past light cone of some point in spacetime M where a measurement is made; in SR this spacelike cross-section could just be the intersection of the past light cone with a surface of constant t in some inertial reference frame (which would be a solid 3D ball containing all the events at that instant which can have a causal influence on M at a later time). Now, let \lambda stand for the complete set of values of all local physical variables, hidden or non-hidden, which lie within some particular PLCCS of M. Would you agree that in a local realist universe, if we want to know whether the measurement M yielded result A, and B represents some event at a spacelike separation from M, then although knowing B occurred may change our evaluation of the probability A occurred so that P(A|B) is not equal to P(A), if we know the full set of physical facts \lambda about a PLCCS of M, then knowing B can tell us nothing additional about the probability A occurred at M, so that P(A|\lambda) = P(A|\lambda B)?
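To make the screening-off claim concrete, here is a small Monte Carlo sketch (a toy model of my own, purely illustrative and not any physical theory): outcomes A and B each depend only on a shared hidden variable \lambda plus independent local noise. A and B come out logically dependent, but once you condition on \lambda, knowing B adds nothing about A:

```python
import random

# Toy local-realist model (illustrative assumption): a source generates
# a shared hidden variable lam; outcomes A and B each depend only on
# lam plus independent local noise at each detector.
def run_trials(n=200_000, seed=0):
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        lam = rng.randint(0, 1)            # shared hidden variable
        a = lam ^ (rng.random() < 0.1)     # A: lam, flipped with prob 0.1
        b = lam ^ (rng.random() < 0.1)     # B: independent local noise
        trials.append((lam, int(a), int(b)))
    return trials

def cond_prob(trials, event, given):
    sub = [t for t in trials if given(t)]
    return sum(1 for t in sub if event(t)) / len(sub)

trials = run_trials()

# A and B are correlated: P(A=1|B=1) differs from P(A=1) ...
p_a       = cond_prob(trials, lambda t: t[1] == 1, lambda t: True)
p_a_b     = cond_prob(trials, lambda t: t[1] == 1, lambda t: t[2] == 1)
# ... but lam screens B off: P(A=1|lam=1, B=1) matches P(A=1|lam=1)
p_a_lam   = cond_prob(trials, lambda t: t[1] == 1, lambda t: t[0] == 1)
p_a_lam_b = cond_prob(trials, lambda t: t[1] == 1,
                      lambda t: t[0] == 1 and t[2] == 1)
```

Running this, p_a_b sits far above p_a (the correlation), while p_a_lam and p_a_lam_b agree to within sampling noise--exactly the P(A|\lambda) = P(A|\lambda B) condition above.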

If so, consider two measurements of entangled particles which occur at spacelike-separated points M1 and M2 in spacetime. For each of these points, pick a PLCCS from a time which is prior to the measurements, and which is also prior to the moment that the experimenter chose (randomly) which of the three detector settings under his control to use (as before, this does not imply the experimenter has complete control over all physical variables associated with the detector). Assume also that we have picked the two PLCCS's in such a way that every event in the PLCCS of M1 lies at a spacelike separation from every event in the PLCCS of M2. Use the symbol \lambda_1 to label the complete set of physical variables in the PLCCS of M1, and the symbol \lambda_2 to label the complete set of physical variables in the PLCCS of M2. In this case, if we find that whenever the experimenters chose the same setting they always got the same results at M1 and M2, I'd assert that in a local realist universe this must mean the results each of them got on any such trial were already predetermined by \lambda_1 and \lambda_2; would you agree? The reasoning here is just that if there were any random factors between the PLCCS and the time of the measurement which were capable of affecting the outcome, then it could no longer be true that the two measurements would be guaranteed to give identical results on every trial.

Now, keep in mind that each PLCCS was chosen to be prior to the moment each experimenter chose what detector setting to use. So, if we assume that the experimenters' choices were uncorrelated with the values of physical variables \lambda_1 and \lambda_2, either because the choice involved genuine randomness (using the decay of a radioactive isotope and assuming this is a truly random process, for example), or because the choice involved "free will" (whatever that means), then if it's true that \lambda_1 and \lambda_2 predetermine the result on every trial where they happen to make the choice, in a local realist universe we must assume that on each trial \lambda_1 and \lambda_2 predetermine what the results would be for any of the three choices each experimenter can make, not just the result for the choice they do actually make on that trial (since the values of physical variables in the PLCCS cannot 'anticipate' which choice will be made at a later time)--the assumption known as counterfactual definiteness. And if at the time of the PLCCS there was already a predetermined answer for the result of any of the three choices the experimenter could make, then if they always get the same results when they make the same choice, we must assume that on every trial the two PLCCSs had the same predetermined answers for all three results, which is sufficient to show that the Bell inequalities should be respected (see my post #3). It would be simplest to assume that the reason for this perfect matchup between the PLCCSs on every trial was that they had "inherited" the same predetermined answers from some events in the overlap of the past light cones of the two measurements, but this assumption is not strictly necessary.
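The counting argument behind that last step can be checked by brute force. If each trial carries one of the 2^3 = 8 possible predetermined "answer sheets" for the three settings (shared by both PLCCSs), a short script (my own sketch, not anyone's published proof) confirms that every sheet yields matching results on at least 1/3 of the ordered pairs of different settings:

```python
from itertools import product

# A "sheet" assigns a predetermined outcome (+1 or -1) to each of the
# three detector settings; perfect same-setting correlation forces both
# measurements to share one sheet on each trial.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

match_fractions = []
for sheet in product([+1, -1], repeat=3):
    same = sum(1 for i, j in pairs if sheet[i] == sheet[j])
    match_fractions.append(same / len(pairs))

# With three settings but only two outcomes, at least two settings must
# share an outcome, so no sheet can match on fewer than 2 of the 6
# ordered different-setting pairs; mixing sheets across trials cannot
# go below the worst single sheet either.
```

The minimum over all eight sheets comes out to exactly 1/3 (attained by sheets like (+1, +1, -1)), which is the bound that QM's predictions violate.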

The deterministic case

If the experimenters' choices are not assumed to be truly random or a product of free will, but instead are pseudorandom events that do follow in some deterministic (but probably chaotic) way from the complete set of physical variables in the PLCCS, then showing that the results for each possible measurement must be predetermined by the PLCCS is trickier. I think we can probably come up with some variant of the "no-conspiracy" assumption discussed earlier that applies in this case, though. To see why it would seem to require a strange "conspiracy" to explain the perfect correlations in a local realist universe without the assumption that there was a predetermined answer for each possible choice (i.e. without assuming counterfactual definiteness), let's imagine we are trying to perform a computer simulation to replicate the results of these experiments. Suppose we have two computers A and B which will simulate the results of each measurement, and a middle computer M which can send signals to A and B for a while but then is disconnected, leaving A and B isolated and unable to communicate at some time t, after which they simulate both an experimenter making a choice and the results of the measurement with the chosen detector setting. Here the state of the information in each computer at time t represents the complete set of physical variables in the PLCCS of the measurement, while the fact that M was able to send each computer signals prior to t represents the fact that the state of each PLCCS may be influenced by events in the overlap of the past light cone of the measurement events.

Also, assume that in order to simulate the seemingly random choices of the experimenters on each trial, the computer uses some complicated pseudorandom algorithm to determine their choice, using the complete set of information in the computer at time t as the seed of a pseudorandom number generator, so that even in a deterministic universe, everything in the past light cone of the choice has the potential to influence the choice. Finally, assume the initial conditions at A and B are not identical, so the two experimenters are not just perfect duplicates of one another. Then the question becomes: is there any way to design the programs so that the simulated experimenters always get the same outcome when they make the same choice about detector settings, but counterfactual definiteness does not apply--meaning that each computer didn't have a preset answer for each detector setting at time t, but only for the setting the simulated experimenter would, in fact, choose on that trial? Well, if the computer simulations are deterministic over multiple trials, so we just have to load some initial conditions at the beginning and then let them run over as many trials as we want rather than having to load new initial conditions for each trial, then in principle we could imagine some godlike intelligence looking through all possible initial conditions (probably a mind-bogglingly vast number: if N bits were required to describe the state of the simulation at any given moment, there'd be 2^N possible initial conditions) and simply picking the very rare initial conditions where it happened to be true that whenever the two experimenters made the same choice, they always got the same results.
Then if we run the simulation forward from those initial conditions, it will indeed be guaranteed with probability 1 that they'll get the same results whenever they make the same choice, without the simulation needing to have had predetermined answers for what they would have gotten on these trials if they had made a different choice. But this preselecting of the complete initial conditions, including all the elements of the initial conditions that might influence the experimenters' choices, is exactly the sort of "conspiracy" that the no-conspiracy assumption is supposed to rule out.

So, let's make some slightly different assumptions about the degree to which we can control the initial conditions. Let's say we do have complete control over the data that M sends to A and B on each trial, corresponding to the notion that we want to allow the source to attach hidden variables to the particles it sends to the experimenters in any fiendishly complicated way we can imagine. If you like, we can also assume we have complete control over any variables, hidden or otherwise, associated with the measuring-devices being simulated in the A and B computers initially at time t (after M has already sent its information to A and B but before the simulated experimenters have made their choice), to fit with your idea that hidden variables associated with the measuring device may be important too. But assume there are other aspects of the initial conditions at A and B that we don't control--perhaps we can only decide what the "macrostate" of the neighborhood of the two experimenters looks like, while the detailed "microstate" is chosen randomly, or perhaps we can decide the values of all non-hidden variables in their neighborhood but not the hidden ones (aside from the ones associated with the particles sent by the source and the measuring devices, as noted above). Since the pseudorandom algorithm that determines each experimenter's choice takes the entire initial state as a seed, this means that without knowing every single precise detail of the initial state, we can't predict what choices the experimenters will make on each trial.
So, for all practical purposes this is just like the situation I discussed earlier where the experimenters' choices were truly random and unpredictable, which means that if we only control some of the initial data at time t (the variables sent from M and the variables associated with the measuring-device) but after that must let the simulation run without any further ability to intervene, the only way to guarantee that the experimenters always get the same result when they make the same choice is to make sure that the data we control at time t guarantees with 100% certainty what results the experimenters would get for any of the three possible choices, in such a way that the predetermined answers match up for computer A and computer B.
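As a sanity check on that conclusion, here is a minimal simulation of the two-computer setup (again my own sketch; the variable names and the uniform distribution over answer sheets are assumptions for illustration, not part of any proposed theory): M hands both stations an identical predetermined answer sheet before disconnection, each station then applies its locally chosen setting, and the resulting statistics respect the Bell bound:

```python
import random

rng = random.Random(42)

def source_M():
    # One predetermined outcome per setting, copied to both stations
    # before the disconnection at time t (uniform sheets are an
    # illustrative assumption).
    return tuple(rng.choice([+1, -1]) for _ in range(3))

def station(sheet, setting):
    # After time t a station sees only its local sheet and local setting.
    return sheet[setting]

same_ok, diff_match, diff_total = True, 0, 0
for _ in range(100_000):
    sheet = source_M()
    set_a, set_b = rng.randrange(3), rng.randrange(3)  # local choices
    out_a, out_b = station(sheet, set_a), station(sheet, set_b)
    if set_a == set_b:
        same_ok = same_ok and (out_a == out_b)
    else:
        diff_total += 1
        diff_match += (out_a == out_b)

diff_rate = diff_match / diff_total
# Perfect agreement whenever the settings match, and the match rate on
# different settings never drops below 1/3 (for uniform sheets it comes
# out near 1/2)--whereas the QM statistics demand less than 1/3.
```

Any local program loaded into A and B fits this template once you read off, at time t, which outcome it would produce for each setting, which is why reproducing the QM statistics on disconnected computers is impossible.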
 
  • #62


Part 2 of response

Simulations as a test of proposed hidden-variables theories

That was a somewhat long discussion of the case where the experimenters' brains make their choice in a deterministic way, and given that most people discussing Bell's theorem are willing to grant for the sake of argument that the choice can be treated as random, perhaps an unnecessary one. But I think the idea I introduced of trying to simulate EPR-type experiments on computers is a very useful one regardless. If anyone proposes that a local hidden variables theory can explain the results of these experiments, there's no reason that such a theory could not be simulated in the setup I described, where a middle computer M can send signals to two different computers A and B until some time t when the computers are disconnected; some time after t the experimenters (real or simulated) make choices about which orientation to use for the simulated detector (if the experimenters are real people interacting with the simulation they could make this choice by deciding whether to type 1, 2, or 3 on the keyboard, for example), and each computer A and B must return a measurement result. On p. 15 of the Jaynes paper you linked to, Jaynes seemed to acknowledge that if there were a local realist theory which could replicate the violations of Bell inequalities, then it should be possible to simulate it on independent computers:
The Aspect experiment may show that such theories are untenable, but without further analysis it leaves open the status of other local causal theories more to Einstein's liking.

That future analysis is, in fact, already underway. An important part of it has been provided by Steve Gull's "You can't program two independently running computers to emulate the EPR experiment" theorem, which we learned about at this meeting. It seems, at first glance, to be just what we have needed because it could lead to more cogent tests of these issues than did the Bell argument. The suggestion is that some of the QM predictions can be duplicated by local causal theories only by invoking teleological elements as in the Wheeler-Feynman electrodynamics. If so, then a crucial experiment would be to verify the QM predictions in such cases. It is not obvious whether the Aspect experiment serves this purpose.

The implication seems to be that, if the QM predictions continue to be confirmed, we exorcise Bell's superluminal spook only to face Gull's teleological spook. However, we shall not rush to premature judgments. Recalling that it required some 30 years to locate von Neumann's hidden assumptions, and then over 20 years to locate Bell's, it seems reasonable to ask for a little time to search for Gull's, before drawing conclusions and possibly suggesting new experiments.
So, do you agree with the idea that this is a good way to test claims that someone has thought up a way to reproduce the EPR results with a local realist theory? Earlier you seemed to suggest that they could be reproduced by a theory in which the hidden variables associated with the particle interacted with hidden variables associated with the measuring apparatus in some way--can you explain in a schematic way how this could be simulated? Do you disagree with my statement earlier that in order to explain how experimenters always get the same result when they make the same choice about how to set the simulated detector orientation (which is not to imply there couldn't be other variables associated with the simulated detector that are out of their control), we must assume that at the time t the two computers are disconnected, the state of each computer at that time already predetermines what final result the simulation will give for each possible choice made by the experimenter?
JesseM said:
Why "must" it? Again, the a's and b's are defined to mean just the settings that the experimenters control. Can't we define symbols to mean whatever we want them to, and isn't it still true that in this case the combination of the a-setting and the \lambda value will determine the probability of the physical outcome A?
mn4j said:
It must, because you claim that Bell's theorem eliminates ALL hidden variable theorems. It is telling that the terms were so narrowly defined that other possible local hidden variable theorems do not fit.
No you can't define the terms to mean whatever you want them to. You have to define them so that they include all possible hidden variable theorems. Therefore the conclusion of Bell's theorem is handicapped.
See the first part of my response--you are simply confused here. Adopting a particular labeling convention for which physical facts are labeled with which symbols has no physical implications whatsoever; I cannot possibly be ruling out any local hidden variables theories by choosing to let the letter "a" stand for the choice made by the experimenter. Nothing about this convention rules out the idea that there could be other physical variables associated with the measuring device that the experimenter does not control, it's just that they must be denoted by some symbol other than "a" (I suggested that these other variables could be folded into \lambda, although if you wished you could define a separate symbol for physical variables associated with the measuring-device).
JesseM said:
No, but the fact that we always see opposite results on trials where the settings are the same is an observed experimental fact, and a variant of Bell's theorem can be used to show that if we observe this experimental fact and if the experiment is set up in the way Bell describes (with each experimenter making a random choice among three distinct detector angles) and if the universe is a local realist one (with the no-conspiracy assumption), then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings. Since this Bell inequality is violated in real life, at least one of the "if" statements must be false, and since we can verify directly that the first two were true, it must be the third one about the universe being local realist that's false (see my next post for an elaboration of this logic).
mn4j said:
You forgot a very important "if" that is the very topic of this thread, ie
  1. "if we observe this experimental fact"
  2. "if a local realist universe behaves only as described by Bell's assumptions"
  3. "if the universe is a local realist one (with the no-conspiracy assumption)"
  4. "if the experiment is set up in the way Bell describes"
  5. then we should expect to see opposite results at least 1/3 of the time on the subset of trials where the experimenters chose different measurement settings.
As you can see, violation of 5 can imply that either (2), (3) or (4) or combinations of them are wrong. For some probably religious reason, proponents of Bell's theorem jump right to (3) and claim that it must be (3) that is wrong. I have given you already two examples of hidden variable theorems that point to the falsity of (2). In fact, (2) is the proverbial "a spider must have 6 legs". Do you deny that the validity of Bell's theorem rests as much on (2) as on (3) or (4)? It remains to be seen whether any experiment has ever been performed which exactly reproduced Bell's assumptions. But that is a different topic.
I disagree that #2 is necessary there; no assumptions about the type of hidden-variable theory are needed aside from the fact that it is a local realist one. I confused the issue a bit by making a statement about statistical correlations between spacelike separated events in a local hidden variables theory that you correctly pointed out could be violated in certain cases, but see my revised statements above. Do you agree that in a local realist universe, if \lambda is taken to mean the complete set of local variables in a PLCCS of some point in spacetime S, and we want to know the probability that an event A will take place at S given the knowledge of some other event B at a spacelike separation from S, then P(A|\lambda B) = P(A|\lambda), i.e. knowing that B occurred gives us no additional information about the likelihood of A if we already know the complete set of information about \lambda?
mn4j said:
For other more rigorous proofs why (2) is wrong, see:
  • Brans, C.H. (1988). "Bell's theorem does not eliminate fully causal hidden variables." International Journal of Theoretical Physics, vol. 27, no. 2, 1988, pp. 219-226
  • Joy Christian, "Can Bell's Prescription for Physical Reality Be Considered Complete?"
    http://arxiv.org/pdf/0806.3078v1
  • See Hess, K. and Philipp, W. (2000), PNAS, December 4, 2000, vol. 98, no. 25, pp. 14228-14233, for a proof that Bell's theorem cannot be derived for time-like correlated parameters, and that such variables produce the QM result.
  • See also Hess, K. and Philipp, W. (2003), "Breakdown of Bell's theorem for certain objective local parameter spaces"
    PNAS, February 17, 2004, vol. 101, no. 7, pp. 1799-1805
I suppose all those people cited above are also confused, as is Jaynes.
Most likely there are some confusions in any papers that claim to show a local realist theory with the no-conspiracy assumption can reproduce QM results, yes (I don't know if this is what all the papers above are claiming since I don't have access to any but Joy Christian's paper)--if any such demonstration was valid it would have won widespread acceptance in the physics community and this would be very big news, but that hasn't happened. On the subject of Joy Christian's paper, I remember it being discussed earlier on this forum and it being mentioned that other physicists had claimed to find flaws in the argument, see for example ZapperZ's post #18 here which links to responses here and here. Wikipedia refers to Christian's work as "controversial" here, and says "The controversy around his work concerns his noncommutative averaging procedure, in which the averages of products of variables at distant sites depend on the order in which they appear in an averaging integral. To many, this looks like nonlocal correlations, although Christian defines locality so that this type of thing is allowed". Once again, I think the best way to cut through the fog is just to ask if Christian's proposal, whatever the details, could allow us to create computer programs which would correctly simulate QM statistics on pairs of computers which have been separated from connections to any other computers prior to the time the experimenters make random choices as to how to orient their simulated detectors on each trial. If you've read and understood Christian's proposal (I was not able to follow it myself because I'm not familiar with Clifford algebra), do you think this could be done?
mn4j said:
Yet you have not shown me a single reason why my descriptions of the two scenarios in post #55 are not valid realist local hidden variable theorems. For some reason you ignored the second scenario completely and did not even bother to say whether a "deterministic learning machine" is local or not.
You didn't give enough details there for me to be able to tell what you're proposing, or how it would reproduce violations of Bell inequalities. Any "deterministic learning machine" is certainly local if you could simulate it with a program running on a computer, but there's no way that loading this program on the two computers A and B in the setup I described would allow you to reproduce both the fact that the experimenters always get the same result when they choose the same setting on a given trial and the fact that on trials where they choose different settings they get the same result less than 1/3 of the time. Again, the basic point is that if the computers have been disconnected from communication with other computers at time t prior to the moment each experimenter makes their choice, then the only way you can guarantee a 100% chance that they'll return identical results if the experimenters make the same choice is to have the state of each computer at time t predetermine what answer they'll give for each of the three choices the experimenters can make (with both computers having the same predetermined answers), and this predetermination is enough to guarantee that if the experimenters make different choices they'll get the same answer at least 1/3 of the time.
mn4j said:
You can see the following articles, for proof that a local deterministic learning hidden variable model reproduces the quantum result:
  • Raedt, KD, et al.
    A local realist model for correlations of the singlet state
    The European Physical Journal B - Condensed Matter and Complex Systems, Volume 53, Number 2 / September, 2006, pp 139-142
  • Raedt, HD, et al.
    Event-Based Computer Simulation Model of Aspect-Type Experiments Strictly Satisfying Einstein's Locality Conditions
    J. Phys. Soc. Jpn. 76 (2007) 104005
  • Peter Morgan,
    Violation of Bell inequalities through the coincidence-time loophole
    http://arxiv.org/pdf/0801.1776
  • More about the coincidence time loophole here:
    Larsson, JA, Gill, RD, Europhys. Lett. 67, 707 (2004)
Are any of these other than the Morgan paper available online? Also, it's important to distinguish between two fundamentally different types of claims of "loopholes" in discussions of Bell's theorem. The first category says that there might be types of local hidden variables theories that fully reproduce the predictions of orthodox QM--for example, a theory involving a conspiracy in the initial conditions of the universe would fall in this category. This is the category I've been discussing so far on this thread. But there's a second category which doesn't actually dispute the basic idea of Bell's theorem that orthodox QM is incompatible with local realism, but instead suggests that existing tests of orthodox QM's predictions about EPR-type experiments have not adequately reproduced the conditions assumed by Bell, so that there might be a local realist theory which makes the correct predictions about experiments that have actually been performed but which would not actually violate Bell inequalities if better tests were performed that sealed off certain experimental loopholes seen in tests that have been done so far (meaning in these cases the theory would disagree with the predictions of orthodox QM). For example, one experimental loophole in some previous tests is that there may not actually have been a spacelike separation between the events of the two detector settings being chosen and the events of the two particles' spins being measured, so in principle the choice of detector settings could have had a causal influence on hidden variables associated with the particle before the particle was detected. This is known as the "communication loophole", and as discussed here the latest experiments have managed to seal it off. Another is the detection loophole, which apparently has not yet been fully dealt with by existing experiments.

I haven't really read over the Morgan paper you link to in detail, but it sounds to me like he's talking about an experimental loophole rather than a theoretical loophole--on p. 1 he specifically compares it to the detection loophole, saying that the computer model under discussion "is a local model that can be said to exploit the 'coincidence-time' loophole, which was identified by Larsson and Gill as 'significantly more damaging than the well-studied detection problem'". If you have followed the details of Morgan's discussion, can you tell me if he's talking about an experimental loophole akin to the communication loophole and the detection loophole, or if he's proposing a genuine theoretical loophole involving a local hidden variables model that he thinks can precisely reproduce the predictions of QM in every possible experiment?
 
  • #63


DrChinese said:
This is plain wrong, and on a lot of levels. Besides, you are basically hijacking the OP's thread to push a minority personal opinion which has been previously discussed ad nauseam here. Start your own thread on "Where Bell Went Wrong" (and here's a reference as a freebie) and see how far your argument lasts. These kinds of arguments are a dime a dozen.

For the OP: You should try my example with the 3 coins. Simply try your manipulations, but then randomly compare 2 of the 3. You will see that the correlated result is never less than 1/3. The quantum prediction is 1/4, which matches experiments which are done on pretty much a daily basis.
Lol, DrChinese says that these "experiments" are done routinely, as if on a "daily basis".
Please, Dr. Chinese, tell us about these experiments that you claim are carried out on a daily basis, using no special crystals, no specific radiation wavelengths, and no unorthodox equipment! [If you are unable to do so, then you fail.]

Lol. Direct the author to the thread all you want, but he, like you, will never explain the basis of it. Certainly not under local environments! Einstein was once fond of saying that it should be simple. That it should always be kept simple.
 
  • #64


Glenns said:
Please, Dr. Chinese, tell us about these experiments that you claim are carried out on a daily basis, using no special crystals, no specific radiation wavelengths, and no unorthodox equipment! [If you are unable to do so, then you fail.]

I really have no idea what you are saying. Bell tests are done in undergrad classrooms these days. They do require special PDC crystals and the appropriate laser source to create entangled photon pairs.

JesseM: Nice detailed response to mn4j. Raedt's work does involve the so-called "coincidence time loophole" also referenced by Morgan. See here for a related article. (There are 2 authors named Raedt and I assume they are related as they sometimes write together.)

These types of attacks on Bell tests attempt to explain the results as being a form of a biased sample, and as such always come back to the fair sampling assumption. Of course, as technology improves these attacks always get weaker and weaker and the results NEVER get any closer to the local realistic requirements. And note that IF THEY DID, then the QM prediction would be wrong. And now we are back to Bell's result anyway, that no local realistic theory can reproduce the predictions of QM. So ultimately, the local realist must state: QM is wrong, or they are wrong. Can't both be right!
 
  • #65


DrChinese said:
I really have no idea what you are saying. Bell tests are done in undergrad classrooms these days. They do require special PDC crystals and the appropriate laser source to create entangled photon pairs.

To back up DrChinese's claim that these experiments are now routinely done in the undergraduate curriculum, please see this link:

http://people.whitman.edu/~beckmk/QM/

I too am puzzled by the requirement of not using any PDC crystal, etc. What's wrong with using those to get the entangled photons?

Zz.
 
  • #66


DrChinese said:
JesseM: Nice detailed response to mn4j. Raedt's work does involve the so-called "coincidence time loophole" also referenced by Morgan. See here for a related article. (There are 2 authors named Raedt and I assume they are related as they sometimes write together.)

These types of attacks on Bell tests attempt to explain the results as being a form of a biased sample, and as such always come back to the fair sampling assumption. Of course, as technology improves these attacks always get weaker and weaker and the results NEVER get any closer to the local realistic requirements. And note that IF THEY DID, then the QM prediction would be wrong. And now we are back to Bell's result anyway, that no local realistic theory can reproduce the predictions of QM. So ultimately, the local realist must state: QM is wrong, or they are wrong. Can't both be right!

This seems rather dismissive. Raedt's work is not an attack on QM. They have developed a local realistic hidden variable model which gives the same result as QM in EPR-type experiments and explains double-slit diffraction among other phenomena.
The matter is very simple: do you claim their model is not local realistic? If it is, then you must be alarmed that it reproduces the quantum result, contrary to the claims of Bell. If it is not, then you must explain why it is not.

The model is described in the following articles:

http://arxiv.org/abs/0712.3781
http://arxiv.org/abs/0809.0616
http://arxiv.org/abs/0712.3693

The essence of the model is that quantum particles are Deterministic Learning Machines. Using this model, they are able to simulate EPR experiments, delayed-choice experiments, and double-slit experiments event-by-event in a local realist manner. You can't just brush this off.
 
Last edited:
  • #67


mn4j said:
This seems rather dismissive. Raedt's work is not an attack on QM. They have developed a local realistic hidden variable model which gives the same result as QM in EPR-type experiments and explains double-slit diffraction among other phenomena.
The matter is very simple: do you claim their model is not local realistic? If it is, then you must be alarmed that it reproduces the quantum result, contrary to the claims of Bell. If it is not, then you must explain why it is not.

Well, actually they say that the sample is not representative due to the choice of the time window for coincidence counting. Their conclusion (quote): "In general, these results support the idea that the idealized EPRB gedanken experiment that agrees with quantum theory cannot be performed". In other words, they claim: a) The experimental results of their purported local realistic theory will be biased to agree with the predictions of QM; b) On the other hand, no suitable Bell test that supports QM can be performed - ever; And finally c) QM is wrong and their local realistic theory is correct.

Why do these attacks get dismissed? Because it is not actually a proof of anything. Can you imagine saying experimental proof supporting X is actually proof of not-X? That is what is being asserted.

Let me put it a different way: there is NO alternative theory presented in these papers. Period. They try to say they have a simulation. OK, fine. Show me the THEORY that matches the scope of QM. Then we can get to the meat and potatoes. The evidence from Bell tests supports the predictions of QM. When we see their theory (which we never will of course) - let's call it LR - then we can say:

Experimental Evidence => QM
True Theory => LR (different predictions)
QM - LR = Delta (the difference they purport to explain)

Now we have the problem of why - regardless of approach - every Bell test has a growing Delta and not a shrinking Delta. Delta should decrease to zero as test sampling improves. Instead, Delta is now at about 150+ standard deviations. That is way up from about 10 SD a few decades ago.

So please, get serious.
 
  • #68


DrChinese said:
Let me put it a different way: there is NO alternative theory presented in these papers. Period.
So what? You still did not answer the following:
1. Do you deny that they presented an event-by-event simulation of EPRB?
2. Do you claim that the model of their simulations is not local realistic?

These are the only two important questions. If you agree that they have indeed presented an event-by-event simulation of EPRB, then you end up with only two options

a) Their model is not local realistic or
b) Their model is local realistic contrary to the claims of Bell.

They don't need to have a complete theory which matches QM. All they need to demonstrate is that a local realistic model can reproduce the QM result, to refute Bell.

You probably have seen the following as well, although I can guess your response will be to ask them to get serious:

http://arxiv.org/abs/0901.2546

Maybe what you need is to spell out what evidence it will take for you to see the problem with Bell's theorem. Surely if your belief in it is rational, it must be falsifiable. What will it take to falsify it? Seriously, have you ever even considered this question?
 
  • #69


Hello,
Sorry, I didn't read the whole thread. I just studied the http://arxiv.org/abs/0712.3781 article that you linked.

It seems to me that they do have a point.

They simulate measurements and associate a time t with each of them. Then they count coincidences only within a given time window.

Their model violates Bell's inequality in the following way: they make the time t depend (locally) on the spin of the particle and on the orientation of the detector. The delay between the two detections associated with a pair thus depends on the spins and orientations of both particles and detectors. The coincidence count that violates Bell's inequality is then a subset of the total coincidence count, which respects Bell's inequality. The selection of this subset depends on the delay between the events, and thus on the spins and orientations of the detectors. This is a non-local hidden variable, and this is why it can violate Bell's inequality.

The most interesting point in their simulation, in my eyes, is that a real electronic coincidence counter, in a real laboratory, can do exactly the same thing! It can count a subset of results that violates Bell's inequality, out of a total set of physical results that respects it, as long as a physical dependence exists between the extra correlations and the delay between the signals from the twin particles.
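To make the mechanism concrete, here is a minimal Monte Carlo sketch in the spirit of the de Raedt model (my paraphrase, not their exact algorithm; the delay law |sin 2(a - λ)|^d with d = 3 echoes the form used in their papers). The hidden polarization λ and both time tags are generated purely locally:

```python
# Sketch: local deterministic outcomes plus polarization-dependent time
# tags; coincidences are then selected with a narrow time window W.
import math
import random

random.seed(0)
n = 400_000
d, W = 3, 0.01                      # delay exponent and coincidence window
a, b = 0.0, math.pi / 3             # detector angles 60 degrees apart

same = coinc = coinc_same = 0
for _ in range(n):
    lam = random.uniform(0.0, math.pi)          # shared hidden polarization
    A = math.cos(2 * (a - lam)) > 0             # deterministic local outcomes
    B = math.cos(2 * (b - lam)) > 0
    t1 = random.random() * abs(math.sin(2 * (a - lam))) ** d  # local time tag
    t2 = random.random() * abs(math.sin(2 * (b - lam))) ** d
    same += (A == B)
    if abs(t1 - t2) < W:                        # the coincidence filter
        coinc += 1
        coinc_same += (A == B)

unfiltered = same / n
filtered = coinc_same / coinc
print(unfiltered, filtered)  # ~0.333 over all pairs; the windowed subset is lower
```

With a = b this model gives identical outcomes on every trial, and over all pairs the 60° same-result rate sits at the local realist value of 1/3; the narrow window selects a subset whose rate is pushed well below 1/3, toward the quantum value (the papers report that with d = 3 and a small window the full quantum correlation is recovered to first order in W). That is exactly the subset-selection effect described above.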
 
  • #70


Actually, this loophole is testable experimentally: we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
This way, we can count all detections, whatever the delay between them.

If the idea in the paper is right, Bell's inequality should become respected.
If the idea in the paper is wrong, Bell's inequality should still be violated.

Maybe this has already been done.
 
  • #71


Pio2001 said:
Actually, this loophole is testable experimentally: we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
This way, we can count all detections, whatever the delay between them.

If the idea in the paper is right, Bell's inequality should become respected.
If the idea in the paper is wrong, Bell's inequality should still be violated.

Maybe this has already been done.
What do you mean by "emit the pair of photons one by one"?
Aren't two entangled photons emitted at the same time, by definition?
 
  • #72


I mean decreasing the emission rate so that successive pairs are emitted further apart in time (both photons of a given pair still being emitted at the same time, of course).
We can then set the window of the coincidence counter very large, so that it counts the detection of both photons whatever small time delay the authors introduce in order to violate Bell's inequality.

If I have understood their simulation correctly, De Raedt et al. show that Bell's inequality violation in Aspect-like experiments is not necessarily caused by quantum non-local effects, but may come from an artefact caused by the coincidence counter setup.

Quantum theory predicts that as long as the two photons are from the same entangled pair, Bell's inequality will be violated.
In my understanding, De Raedt et al.'s simulation violates Bell's inequality by introducing a delay between the photons AND setting the coincidence window narrower than this delay. So it predicts that if the coincidence window is widened enough to count all coincidences, whatever the delay between the recording of the events, Bell's inequality will become respected.

In practice, that's exactly what happens in the real data set that they took as an example, BUT it seems logical to assume that this is because, by widening the time window, we count more and more false coincidences, thus decreasing the correlations.
By decreasing the physical emission rate at the source, we should be able to widen the coincidence window without increasing the false coincidence rate at all.

This way, if Bell's inequality is still violated, the hypothesis of an artifact in the coincidence counter setup will be rejected, and the quantum non-local correlations will remain the only explanation.
 
  • #73


Pio2001 said:
we just have to emit the pairs of particles one by one, so that the time window for coincidences can be extended far beyond the maximum processing time for the detection.
There is something that can be done without decreasing emission rate.
You can use two coincidence windows of different widths. Call coincidences that fall inside the wider window but outside the narrower one "poorly synchronized", and coincidences inside the narrower window "decently synchronized". Now if you calculate the ratio "poorly synchronized coincidences"/"decently synchronized coincidences" for different relative polarization angles, you should see no correlation between the relative angle and this ratio if the fair sampling assumption is to hold.
And if there is no such correlation, far fewer models (if any) can exploit the coincidence loophole.
The good thing is that such an analysis requires no new experiments: it can be done using only existing data from an experiment in which all detections are recorded with timestamps (coincidences being found later from the recorded data).
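zonde's check is easy to phrase as code. Below is a sketch (function and variable names are mine) run on the null case, synthetic timestamp differences that do not depend on the settings at all, where the ratio comes out angle-independent as fair sampling requires; run on real timestamped data, an angle-dependent ratio would flag the coincidence loophole:

```python
# Two-window analysis: ratio of poorly synchronized to decently
# synchronized coincidences, which should not depend on the relative
# polarization angle if the fair sampling assumption holds.
import random

random.seed(1)

def two_window_ratio(diffs, narrow=0.01, wide=0.05):
    decent = sum(1 for dt in diffs if dt < narrow)
    poor = sum(1 for dt in diffs if narrow <= dt < wide)
    return poor / decent

n = 200_000
ratios = []
for angle_deg in (0, 60):  # the angle deliberately never enters the delays
    diffs = [abs(random.random() - random.random()) for _ in range(n)]
    ratios.append(two_window_ratio(diffs))

print(ratios)  # the two ratios agree to within statistical noise
```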
 
  • #74


mn4j said:
Maybe what you need is to spell out what evidence it will take for you to see the problem with Bell's theorem. Surely if your belief in it is rational, it must be falsifiable. What will it take to falsify it? Seriously, have you ever even considered this question?

Let's see if I get this right. The experimental evidence is X, and we are supposed to use that evidence to conclude not-X.

The thing about Bell is that it is more or less independent of whether local reality or QM is correct. It says they cannot both be correct, which was not obvious at the time. So let's get specific.

QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.

Now, once again, how are we supposed to conclude there is anything wrong with Bell? Clearly, the entire issue here is Raedt trying to explain why a LR theory, which makes predictions incompatible with QM, actually provides an experimental result compatible with QM. So clearly, this is not about Bell at all. You may as well say that all experiments supporting General Relativity are actually evidence of Newtonian gravity.

Now, get serious. Even Raedt ought to be able to see why the argument falls flat. It is going to take experimental evidence IN FAVOR of a local realistic theory to convince anyone of their result. If they really had a bead on anything, they would be proposing an experiment to test their ideas. Rather than writing a paper saying they are correct in the face of evidence to the contrary.
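For reference, the numbers quoted above can be written out explicitly (standard textbook values, not anything specific to the de Raedt papers):

```latex
% QM prediction for the same-result rate of entangled photon pairs
% measured at relative angle \theta = 60^\circ:
P_{\mathrm{QM}} = \cos^{2}\theta = \cos^{2} 60^\circ = \tfrac{1}{4}
% Local realist bound: any predetermined assignment of two outcomes to
% three settings agrees on at least one of the three setting pairs, so
P_{\mathrm{LR}} \geq \tfrac{1}{3}
% hence the minimum gap that experiments must resolve:
\Delta = \tfrac{1}{3} - \tfrac{1}{4} = \tfrac{1}{12} \approx 0.0833
```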
 
  • #75


I spent some time reading de Raedt's articles and talking with him about them. Let me answer some of mn4j's questions from the last posts ("the only two important questions", as you write).

mn4j said:
So what? You still did not answer the following:
1. Do you deny that they presented an event-by-event simulation of EPRB?
2. Do you claim that the model of their simulations is not local realistic?

These are the only two important questions. If you agree that they have indeed presented an event-by-event simulation of EPRB, then you end up with only two options

a) Their model is not local realistic or
b) Their model is local realistic contrary to the claims of Bell.

1. Yes, they did present event-by-event simulations of certain experiments (like Aspect's and Weihs et al.'s) that are often thought to be conclusive evidence that Bell's inequality is violated.

2. Yes, their models are local realistic.

But you are totally wrong about the two options that I'm supposedly left with. The important thing to realize is that no conclusive test of EPRB has ever been done. Every experiment that has been conducted has certain loopholes (there's even a dedicated Wikipedia article about them). This means that all those experiments are not ideal, and it's possible to explain their results with a local realist theory. This is well known to everybody interested in QM foundations, and has been for ages (Philip Pearle showed in the late 1970s how this can be done using one of the loopholes).

de Raedt presents yet another model of how these loopholes can be used to still "give some chance" to local realism. This is certainly not a big deal, and has no consequences for Bell's theorem. The usual hope is that in some years the conclusive experiment will be performed (I've heard people hoping that it will occur in 10-20 years).

What I find particularly confusing in de Raedt's articles is that they are totally out of context: he never mentions the word "loophole", let alone the existing body of knowledge about them. This is misleading, to say the least.

See http://arxiv.org/abs/quant-ph/0703120 for this critique (yes, I do know that there's a reply by de Raedt; I think that his reply misses the point).


mn4j said:
This seems rather dismissive. Raedt's work is not an attack on QM. They have developed a local realistic hidden variable model which gives the same result as QM in EPR type experiments and explains double-slit diffraction among other phenomena.
The matter is very simple, do you claim their model is not local realistic?

This paper about the double-slit (http://arxiv.org/abs/0809.0616) is a different story, but also very telling. Have you read it? Did you realize that this model works only because many photons "get lost"? If we imagine a perfect emitter that emits 1 photon per second and we let it emit 1000 photons, and then we count how many photons hit the screen and how many photons hit the double-slit screen, and then we add those two numbers together, then the result according to this model will be a lot less than 1000.

This is clearly a prediction different from that of QM. This model can be tested and falsified. I'm absolutely sure that it's just wrong.

Of course such an experiment is tremendously difficult to perform, but there's an easier test. This model works because the detectors have memory and are "learning". Now if we start to jiggle the screen back and forth (parallel to itself) sufficiently fast, then de Raedt's model predicts that the interference image will get smeared (I think it's stated in the paper). Now, here's the question for you: what is the prediction of QM?

I think that QM predicts that the interference picture will stay the same. I asked de Raedt this question, and he replied that in his opinion QM predicts nothing, because it requires the experimental apparatus to be completely fixed during the experiment and not "jiggled". Well, then I asked him what he would say if such an experiment were performed and the interference picture did not change.

He said that in that case he would (I quote here) retire.

The bottom line is that what de Raedt proposes are local realistic explanations of certain experiments. All his models are in principle distinguishable from QM (as Bell always told us). And personally I'm quite sure that when the tests are done, these models will be proven false.
 
  • #76


DrChinese said:
QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.

Yes, it follows from figure 6 in the first paper that, according to their simulation, the true rate in your example may actually be 33%, while being measured at 25%, because the least correlated photon pairs are registered by the two detectors at a time interval bigger than the time window of the counter. This leads to discarding those pairs.
 
  • #77


kobak said:
I spent some time reading de Raedt's articles and talking with him about them. Let me answer some of mn4j's questions from the last posts ("the only two important questions", as you write).
1. Yes, they did present event-by-event simulations of certain experiments (like Aspect's and Weihs et al.'s) that are often thought to be conclusive evidence that Bell's inequality is violated.

2. Yes, their models are local realistic.

But you are totally wrong about the two options that I'm supposedly left with. The important thing to realize is that no conclusive test of EPRB has ever been done. Every experiment that has been conducted has certain loopholes (there's even a dedicated Wikipedia article about them). This means that all those experiments are not ideal, and it's possible to explain their results with a local realist theory. This is well known to everybody interested in QM foundations, and has been for ages (Philip Pearle showed in the late 1970s how this can be done using one of the loopholes).

de Raedt presents yet another model of how these loopholes can be used to still "give some chance" to local realism. This is certainly not a big deal, and has no consequences for Bell's theorem.

...

The bottom line is that what de Raedt proposes are local realistic explanations of certain experiments. All his models are in principle distinguishable from QM (as Bell always told us). And personally I'm quite sure that when the tests are done, these models will be proven false.

Welcome to PhysicForums, kobak! And thank you very much for this insight on de Raedt.

I was just looking at the papers in a bit more detail. I have been disappointed by the approach, as it obscures what is being asserted in favor of trying to prove Bell wrong (which I think is overreaching). I do not personally consider these to be counter-examples to Bell, and I seriously doubt they will sway others either.

1. Their model (at least in one paper) does not provide fair sampling (assuming I read it correctly), instead delivering an explicitly biased sample. As such, it exploits the loopholes you mention and doesn't really provide anything new (as you also mention). Quote:

"The mathematical structure of Eq. (18) is the same as the one that is used in the derivation of Bell’s results and if we would go ahead in the same way, our model also cannot produce the correlation of the singlet state. However, the real factual situation in the experiment [8] is different: The events are selected using a time window W that the experimenters try to make as small as possible. ...

"In our simulation model, the time delays ti are distributed uniformly over the interval [0, Ti] where T1 = [not random]."

In other words, there is tinkering with the time window: through their choice of how the window is set, combined with their choice of the time-delay parameters, they bias the sample. They have to, because otherwise the raw source data would run afoul of Bell.

2. The other paper (also Dec 2007/Feb 2008) relies on so-called DLMs (Deterministic Learning Machines). These purport to satisfy local causality and involve a form of memory from trial to trial:

"A DLM learns by processing successive events but does not store the data contained in the individual events. Connecting the input of a DLM to the output of another DLM yields a locally connected network of DLMs. A DLM within the network locally processes the data contained in an event and responds by sending a message that may be used as input for another DLM. Networks of DLMs process messages in a sequential manner and only communicate with each other by message passing: They satisfy Einstein’s criterion of local causality. For the present purpose, we only need the simplest version of the DLM [11]. The DLM that we use to simulate the operation of the Stern-Gerlach magnet is defined as follows. The internal state of the ith DLM, after the nth event, is described by one real variable un,i. Although irrelevant for what follows, this variable may be thought of as describing the fluctuations of the applied field due to the passage of an uncharged particle that carries a magnetic moment."

and

"A key ingredient of these models, not present in the textbook treatments of the EPRB gedanken experiment, is the time window W that is used to detect coincidences. We have demonstrated (see Section IIG) the importance of the choice of the time window by analyzing a data set of a real EPRB experiment with photons [32]."

3. With both of these, the critique is really the same: why not point to the specific difference? They do everything humanly possible to obscure what should be a simple point: what is the difference between QM and their LR? Clearly, they could show how their data points satisfy the Inequality if all trials are considered and are fully independent, while the sub-sample within the time window is biased to yield a result consistent with QM but violating the Inequality.

Specifically: the QM prediction of entangled photon coincidences is .250 at 60 degrees. So their adjusted result must also be .250. The LR value must be .333 or greater, so the delta is .0833. Which data items were excluded to get this result? Or why would the results be biased specifically towards that of a wrong theory (QM)? These are the lines in the sand, and they really are not addressed. I can see the hand waving in the equations, but without this simple explanation I don't see where they have anything. Quoting again:

"Extensive tests (data not shown) lead to the conclusion that for d = 3 and to first order in W, our simulation model reproduces the results of quantum theory of two S = 1/2 objects, for both Case I and Case II."

Clearly, for the algorithm to work, the delta must be .0833 at 60 degrees; delta=0 at 0 and 45 degrees; and so on. That delta function, in my opinion, should jump off the page. In reality, I don't think they have identified such a function. They should be the ones to point out the source of the delta. I have tried, but can't really follow their algorithm far enough to generate values.

My point is basically: why not make a testable prediction showing how using the algorithm, the experimental results vary in good agreement with the model but NOT according to any quantum mechanical prediction? I.e. if I change the time window and delay parameters in an actual experiment, the results match the LR model but are not explained by QM. After all, according to the LR model, it is strictly an accident of chance that QM happens to be correct in its predictions regarding how entangled photons behave (since there are no such things as entangled photons in LR, by definition).
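The "delta function" asked for here can at least be tabulated from the two standard curves (this is generic Bell-test arithmetic, not de Raedt's algorithm): the sawtooth prediction P_LR(same) = 1 - 2θ/π of the simplest deterministic local model for photon pairs, against the QM value cos²θ:

```python
# Tabulate delta(theta) = P_LR - P_QM at a few angles; the sawtooth
# P_LR = 1 - 2*theta/pi is the simplest deterministic local model for
# photon pairs, P_QM = cos^2(theta) is the quantum prediction.
import math

deltas = {}
for deg in (0, 30, 45, 60, 90):
    theta = math.radians(deg)
    p_lr = 1 - 2 * theta / math.pi
    p_qm = math.cos(theta) ** 2
    deltas[deg] = p_lr - p_qm
    print(f"{deg:3d} deg: LR={p_lr:.4f}  QM={p_qm:.4f}  delta={deltas[deg]:+.4f}")
```

This reproduces the figures quoted above: delta = +0.0833 at 60 degrees and 0 at 0, 45 and 90 degrees; note that it flips sign to -0.0833 at 30 degrees, which any candidate time-window bias would also have to reproduce.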
 
  • #78


DrChinese said:
I have tried, but can't really follow their algorithm far enough to generate values.

I only read the December 2007 paper with the Deterministic Learning Machines. They give two algorithms: the one with DLMs (page 16: deterministic model), and a much simpler pseudo-random one (page 16: pseudorandom model). The way results are sorted follows (page 17: time tags / data analysis).

They suggest a possible physical meaning for this bias: "experimental evidence that the time-of-flight of single photons passing through an electro-optic modulator fluctuates considerably can be found in ref 56"

I find this idea interesting, because it is more realistic to suppose that the time-of-flight of a photon can depend on its polarisation in an environment sensitive to polarisation, than to suppose that the detector purposely discards detections that would comply with Bell's inequality.
This relation is explicitly proposed on page 17, in the last paragraph before "5. Data Analysis" (the formula has no number). They later set d=3 in this formula (for spin-1/2 particles).

Zonde's idea to test coincidence efficiency vs relative angle seems good. We could try it on available data (after checking that it works for all possible scenarios of this kind).
 
  • #79


Pio2001 said:
They suggest a possible physical meaning for this bias: "experimental evidence that the time-of-flight of single photons passing through an electro-optic modulator fluctuates considerably can be found in ref 56"

I find this idea interesting, because it is more realistic to suppose that the time-of-flight of a photon can depend on its polarisation in an environment sensitive to polarisation than to suppose that the detector purposely discards detections that would comply with Bell's inequality.
This relation is explicitly proposed on page 17, in the last paragraph before "5. Data Analysis" (the formula has no number). They later set d=3 in this formula (for spin-1/2 particles).

Thanks, I'll look again. There must be a connection between the polarization and detection probability (which is here related to the window size and delay factors) in order to get the desired results. I just couldn't figure out where, and I couldn't figure out why that wasn't highlighted.
 
  • #80


DrChinese said:
Let's see if I get this right. The experimental evidence is X, and we are supposed to use that evidence to conclude not-X.

The thing about Bell is that it is more or less independent of whether local reality or QM is correct. It says they both cannot be correct, which was not obvious at the time. So let's get specific.

You do realize that Bell has a definition of local reality which has not been verified experimentally, don't you? If you think it has been verified, show me experimental evidence that proves Bell's definition of local reality. Have you not been reading this thread at all? The bulk of the discussion has been about this point.

QM says the coincidence rate for entangled photon pairs at 60 degrees is 25%. Local realistic theories say the true coincidence rate is at least 33%. Raedt is saying that the true rate is [insert your guess here since he skips this step]% but that experiments will always support QM.
NO! Bell's local realist theories say the true coincidence rate is 33%. If you disagree, point me to a reference about a local realist theory which makes that claim. Again you will notice that only Bell makes that claim, which it turns out is a straw-man, because there is no experimental validation of it. Do you know of any local realist theory for which that claim is valid? If not, why do you state it as though it was dogmatically accepted to be the case?
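(For reference, the two numbers being argued over are easy to reproduce. The 25% is just cos²(60°). The "at least 33%" is the standard counting bound, which applies only to models in which each photon pair carries predetermined answers for all three analyzer settings, i.e. exactly the class under dispute here. A quick sketch of both calculations; the three-setting scenario is the usual textbook example, not a claim about any broader class of LR models:)

```python
import math
from itertools import product

# QM: at a relative analyzer angle of 60 degrees, the match probability
# for polarization-entangled photons is cos^2(60 deg) = 1/4.
qm_match = math.cos(math.radians(60)) ** 2

# Counting bound for predetermined answers: whatever +/- answers a pair
# carries for the three settings, at least one of the three setting-pairs
# must agree, so the match rate averaged over setting-pairs is >= 1/3.
pairs = [(0, 1), (0, 2), (1, 2)]
min_match = min(
    sum(ans[i] == ans[j] for i, j in pairs) / 3
    for ans in product((+1, -1), repeat=3)
)

print(qm_match)   # 0.25
print(min_match)  # 0.333... -- the floor for predetermined-answer models
```

Whether every LR theory must belong to the predetermined-answer class is precisely the point of contention in this thread.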

So then we have:
1. What QM predicts
2. What Bell claims (and this is crucial) local realist theories should result in
3. What experiments observe

It turns out (1) agrees with (3) but disagrees with (2). If you are thinking intellectually honestly, you must realize that failure of (3) to agree with (2) can mean that Bell's claim about local realist theories is dubious. Yet, for some reason you'd rather think Bell was a god and every claim he made was dogma, which leads you to conclude that both (1) and (3) are results of non-local realist theories. Why is that, I ask? This is not rocket science.

Now, once again, how are we supposed to conclude there is anything wrong with Bell? Clearly, the entire issue here is Raedt trying to explain why a LR theory, which makes predictions incompatible with QM, actually provides an experimental result compatible with QM.
NO! Your bias is clouding your judgement of Raedt's work. Raedt has developed a model which is unmistakably and convincingly LR, and he shows that it agrees with (1) and (3). Again, if you are thinking intellectually honestly, you must realize that according to Bell's definition (and this is crucial) of what LR means, this is impossible.

If you want to criticize Raedt, you have to show that either:
1) The model he has developed is not LR
2) The model he has developed does not reproduce the results of QM and real experiments
You have done neither.
Now, get serious.
No. YOU get serious!
 
  • #81


kobak said:
The important thing to realize is that no conclusive test of EPRB has ever been done.
EXACTLY! So on what basis do followers of Bell purport to have proven that local realist theories should produce a certain result?

Here is an experiment to try:
- use a separate set of apparatus for each pair of photons emitted. If you still obtain the QM result, then Raedt's model is wrong.
 
  • #82


mn4j said:
> The important thing to realize is that
> no conclusive test of EPRB has ever been done.

EXACTLY! So on what basis do followers of Bell purport to have proven that local realist theories should produce a certain result?

Well, I'm glad that we agree on something. However, your question doesn't relate to my statement that you quote. Let me try to clarify things a bit.

There are two things: Bell's theorem as an abstract theorem, and its experimental tests. Bell's theorem states that local realist theories can't reproduce all the predictions of QM. It doesn't need to be proven by experiment, because the proof is given on a piece of paper. The experiment has to show what is correct: QM or local realism. What I said means that no conclusive proof that QM is right and LR is wrong (i.e. no conclusive violation of Bell's inequalities) has ever been obtained. This has no relation to the validity of the theorem itself.

Now, you seem to claim that Bell's theorem is wrong. But even if it were wrong, de Raedt's articles about EPR wouldn't prove it wrong (because as I explained, those articles only show that the experiments done so far were not perfect).

Finally, one more point: what exactly "local realism" means is a philosophical question. For his proof Bell used a particular equation (P(A|aBL) = P(A|aL), or something like that) and gave certain "physical" arguments for why this should hold if we assume local realism.

The famous E. T. Jaynes (and de Raedt follows Jaynes here) wrote a paper, which you cited in this discussion, where he claimed that Bell made a stupid mistake when applying the rules of probability (this is not a quote, but that's how it sounds). This is absurd. Bell certainly understood the rules of probability perfectly well, and he actually did give physical arguments for his assumption (which Jaynes seemed to fail to either notice or understand).

You may still say that local realism does not necessarily entail this assumption of Bell. Since "local realism" isn't something defined by a formula, this is in principle a meaningful claim. However I never saw any local realist model that would violate Bell's assumption (in the very particular example that Bell is discussing). de Raedt's models of simulations of Aspect-Weihs experiments have no relation to this issue.
 
  • #83


The experiment has to show what is correct: QM or local realism. What I said means that no conclusive proof that QM is right and LR is wrong (i.e. no conclusive violation of Bell's inequalities) has ever been obtained. This has no relation to the validity of the theorem itself.

Why MUST LR and QM contradict each other? Just because Bell says they must? This is what you fail to realize. The issue here is not whether QM is wrong and LR is right! The issue is whether Bell's understanding of LR is correct.

Now, you seem to claim that Bell's theorem is wrong. But even if it were wrong, de Raedt's articles about EPR wouldn't prove it wrong (because as I explained, those articles only show that the experiments done so far were not perfect).
If you agree that the experiments were not perfect, then how come those same experiments are still presented as proof of Bell's theorem? Bell's theorem is a negative theorem.

Bell says, "NO LR can reproduce the QM results". Now Bell had better be sure that his definition of LR is such that it accounts for EVERY possible LR theory. If even one is found that cannot be modeled by Bell's equations, Bell's theorem has to be thrown out. Do you agree with this? De Raedt's articles present just one such model.

Finally, one more point: what exactly "local realism" means is a philosophical question. For his proof Bell used a particular equation (P(A|aBL) = P(A|aL), or something like that) and gave certain "physical" arguments for why this should hold if we assume local realism.

The famous E. T. Jaynes (and de Raedt follows Jaynes here) wrote a paper, which you cited in this discussion, where he claimed that Bell made a stupid mistake when applying the rules of probability (this is not a quote, but that's how it sounds). This is absurd. Bell certainly understood the rules of probability perfectly well, and he actually did give physical arguments for his assumption (which Jaynes seemed to fail to either notice or understand).
I'll take Jaynes over Bell any day when it comes to who understands probability better. Take a look at De Raedt's recent article together with Hess (http://arxiv.org/abs/0901.2546) for a succinct explanation of Bell's error.

You may still say that local realism does not necessarily entail this assumption of Bell. Since "local realism" isn't something defined by a formula, this is in principle a meaningful claim. However I never saw any local realist model that would violate Bell's assumption (in the very particular example that Bell is discussing).
Don't you realize that the type of claim Bell is making about LR models requires that he MUST be absolutely sure he has presented an exhaustive representation of ALL POSSIBLE LR models? I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions.

de Raedt's models of simulations of Aspect-Weihs experiments have no relation to this issue.
Don't forget that you already agreed that de Raedt's model is LR. So they are relevant. Bell's equations do not apply to a deterministic learning machine model like de Raedt's. How then can Bell claim that No LR can reproduce the QM results?

You see, the problem with Bell's theorem is not that his conclusions cannot be drawn from his assumptions. The problem is that those conclusions are interpreted, by those who don't know better, beyond the scope of the assumptions on which they are based. For someone purporting to characterize all LR models, he chose a severely narrow and handicapped subset of LR on which to base his calculations.
 
  • #84


What about the GHZ proof, then?
 
  • #85


Dear mn4j,
we are already going in circles. I will try to summarize my points as clearly as possible, and I would like to ask you to comment on each of them, whether you agree or not. If you still don't listen to what I'm saying, then it's better to stop this discussion.

1. All the experiments that have been done so far to test Bell's inequalities ARE. NOT. PERFECT. This is not something to agree or disagree with; it's just a fact, and it's admitted by everybody, including of course the experimenters themselves. Agreed?

2. These experiments definitely can't be presented "as proof of Bell's theorem", because they are not. Please stop asking me why they are presented in such a way! Anybody who does so just doesn't understand anything here. Bell's theorem is a theoretical construct; it doesn't need to be proven by experiment. Experiment has to show whether Bell's inequalities are violated or not. Agreed?

3. I quote you: "I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions". OK, let's call all theories that Bell's theorem applies to "Bell local realistic" (BLR). Now, Bell's theorem says and proves that BLR theories should obey Bell's inequalities while QM violates them. Agreed?

4. Your main point seems to be that BLR is only a narrow subclass of LR theories. Well, I repeat: what "local realism" is, is a philosophical question. I'm personally quite happy to include in my notion of local realism the assumption that the outcomes of Bob's experiments are statistically independent of Alice's choice of experimental settings (this is Bell's assumption and the precise definition of BLR). You are not, right?

5. Since scientific consensus is that BLR and LR are the same thing, and you disagree, the only meaningful way to disagree is to give an example of a LR theory that is not BLR. Agreed?

6. In case you want to say that de Raedt's models are such kind of example, I repeat once again: NO, they ARE NOT. de Raedt showed (as was already known) that the experiments to test Bell's inequalities were not perfect (there are loopholes), and because of these experimental flaws their results can be explained in LR way. Agreed?

7. But (the crucial point!) de Raedt's model is obviously not only LR, but BLR as well! IF a loophole-free test of Bell's inequalities is ever done and IF Bell's inequalities are still found to be violated, then de Raedt won't be able to explain this with his model (why? because of Bell's theorem). Agreed? Please think a bit before answering.

I'm asking you to think, because you wrote that "Bell's equations do not apply to a deterministic learning machine model like de Raedt's". This is just plain wrong. Of course they do apply! De Raedt's model is perfectly BLR.

8. You mentioned the recent de Raedt article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas, when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).

If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.

9. For some reason you completely ignored my point about double-slit paper of de Raedt. You were first to mention it! Did you read it? If you did, could you please comment on what I said earlier? If you didn't, how come you use it in the arguments?


dk
 
  • #86


kobak said:
8. You mentioned the recent de Raedt article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas, when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).

If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.

Good post. I tried to find the meat in the argument and couldn't either. If anyone had a good counter-argument, they would put their proof up front rather than hide it. Meanwhile, we have the following history of experimental teams seeing violation of Bell Inequalities:

Aspect, 1982: 5 standard deviations.
Kwiat, 1995: 102 standard deviations.
Kurtsiefer, 2002: 204 standard deviations.
Barbieri, 2003: 213 standard deviations.

And since entanglement doesn't even exist in any local realistic theory (by definition), it is interesting to note that last year Vallone et al were observing hyper-entanglement of photons in 3 independent degrees of freedom. Further, there have been numerous experiments involving time-bin entanglement, including with photons that have never interacted in the past. This is just the loophole de Raedt thought to exploit in his later paper (you would think experiments like this would finally end the search for an LR theory).

None of this could be predicted by any local realistic theory. On the other hand, all are predicted by QM. This is why looking for LR theories is a waste of time. It made sense up until the 1970's or so, but not since.
 
  • #87


DrChinese said:
Meanwhile, we have the following history of experimental teams seeing violation of Bell Inequalities

Sorry, DrChinese, are you saying that some of these experiments were loophole-free? As far as I know, this has so far never been achieved.

If that is true, then exactly how many standard deviations are observed doesn't really matter. A strict believer in local realism can still say: there are this and that loopholes, and so the results can be explained in a sophisticated enough local realistic way. It could be 100000 standard deviations, or whatever. What is important (in the sense of putting a full stop to this discussion) is to conduct an experiment completely free of any loopholes.
 
  • #88


kobak said:
Sorry, DrChinese, are you saying that some of these experiments were loophole-free? As far as I know, this has so far never been achieved.

If that is true, then exactly how many standard deviations are observed doesn't really matter. A strict believer in local realism can still say: there are this and that loopholes, and so the results can be explained in a sophisticated enough local realistic way. It could be 100000 standard deviations, or whatever.

I disagree. What is being asserted is anti-scientific because in a sense, no experiment is loophole free. As you are undoubtedly aware, there are still experiments going on to test General Relativity. At least there, the competing theories (or versions of GR as you may call them) have key elements in common.

On the other hand, there is no existing candidate LR theory on the table to compare to QM at this time. Stochastic Mechanics (Marshall, Santos) is an example of a field of research in that regard, but every candidate SM model is found to have problems and is quickly modified again. And since such models do not predict anything useful, there is no incentive to study them further. We already have a very useful model - QM - and the experiments supporting it are in the thousands. Something useful from the field of study would go a long way towards convincing the scientific community.

So yes, I think quantity does matter, and I think utility matters. And I think the history of the area does matter as well, including when a theory (QM) is supported by improving technology. That doesn't mean that conventional thinking is right always. I just mean to say that science evolves towards ever more useful theories. I do not see how LR theories can ever hope to fall into that category (useful) since they deny the known phenomena of entanglement. I mean, we are at the point now of entangling particles with no common history. Why don't the local realists acknowledge the obvious hurdle such experiments place on LR theories?

And as a practical matter, I disagree that loophole-free experiments have not been performed. In my opinion, the fair-sampling loophole has been closed (Rowe et al, 2001). In my opinion, the strict locality loophole has been closed (Weihs et al, 1998). Etc. Why should you need to close every loophole simultaneously if you can close each separately? If a prisoner cannot escape from the first lock by itself, and cannot escape from the second lock by itself, how can he escape when both locks are present? I don't disagree with a desire to close all loopholes simultaneously; but I think that is a standard that is being applied to Bell tests which is applied nowhere else in science. Surely you must have noticed this as well.
 
  • #89


DrChinese said:
I disagree. What is being asserted is anti-scientific because in a sense, no experiment is loophole free. As you are undoubtedly aware, there are still experiments going on to test General Relativity. ... And as a practical matter, I disagree that loophole-free experiments have not been performed. In my opinion, the fair-sampling loophole has been closed (Rowe et al, 2001). In my opinion, the strict locality loophole has been closed (Weihs et al, 1998). Etc. Why should you need to close every loophole simultaneously if you can close each separately? ... I don't disagree with a desire to close all loopholes simultaneously; but I think that is a standard that is being applied to Bell tests which is applied nowhere else in science. Surely you must have noticed this as well.

Three points. First: I'm not an expert in the Bell tests and loopholes issue, so I can't really comment on that at the detailed level. I know that there is, for example, the "time-coincidence" loophole (http://arxiv.org/abs/quant-ph/0312035), which is apparently exactly the loophole de Raedt is exploiting (http://arxiv.org/abs/quant-ph/0703120, the link I already gave). I'm not sure that all known loopholes have already been closed even separately, though this might be true. In particular, I just don't know any details about these "entanglement" studies that you cite (and don't have time at the moment to start reading them). Do they test Bell inequalities after this entanglement "swapping"? Or how else do these findings prove LR false?

Second: I guess I slightly disagree with you about different standards of tests. Of course there are super-precise tests of GR still being done. But to test GR you need to observe something that is predicted by GR, like light deflection or whatnot. When this is observed, nobody claims that there is a "loophole" in the experiment and that the results can be interpreted such that light is not deflected. It's evident: nobody has heard of any loopholes in GR tests. On the other hand, to test QM versus LR one needs to show that Bell's inequalities are violated. And all the attempts to show it still have some loopholes that allow alternative explanations.

Third: Nobody in his right mind claims that QM is "wrong". For de Raedt, QM is a correct mathematical model working well on the ensemble level only, without saying anything about single events. He is not trying to show that QM is wrong; he is trying to show that it can be completed in a LR way. Well, we know that this is impossible due to Bell. But de Raedt obviously disagrees. And while it doesn't make a lot of sense for me to defend de Raedt, he is most definitely not a crackpot (he has done a huge amount of "real" work in computer simulations of different physical models, including decoherence etc.). I believe (as you do) that his reasoning about Bell is flawed, but he certainly does not try to obscure anything on purpose: I'm quite sure that he is honest.
 
  • #90


kobak said:
1. All the experiments that have been done so far to test Bell's inequalities ARE. NOT. PERFECT. This is not something to agree or disagree with; it's just a fact, and it's admitted by everybody, including of course the experimenters themselves. Agreed?
Agreed! This is a fact. Not a single loophole-free experiment has ever been performed.
2. These experiments definitely can't be presented "as proof of Bell's theorem", because they are not. Please stop asking me why they are presented in such way! If anybody does so, he or she just doesn't understand anything here.
Agreed!

Bell's theorem is a theoretical construct; it doesn't need to be proven by experiment. Experiment has to show whether Bell's inequalities are violated or not. Agreed?
No. I disagree. So long as Bell's inequalities purport to make claims about reality, the correspondence between those inequalities and reality MUST be independently validated by experiments before any claims they make about reality can be said to be proven.

3. I quote you: "I am perfectly happy to accept that Bell's theorem is true ONLY for the LR models narrowly defined by his assumptions". OK, let's call all theories that Bell's theorem applies to "Bell local realistic" (BLR). Now, Bell's theorem says and proves that BLR theories should obey Bell's inequalities while QM violates them. Agreed?
Agreed, without prejudice. Note that every loophole found to date is a hidden assumption in Bell's proof. I do not claim, by agreeing to the above, that all loopholes have been found.

4. Your main point seems to be that BLR is only a narrow subclass of LR theories. Well, I repeat: what "local realism" is, is a philosophical question. I'm personally quite happy to include in my notion of local realism the assumption that the outcomes of Bob's experiments are statistically independent of Alice's choice of experimental settings (this is Bell's assumption and the precise definition of BLR). You are not, right?
Again, remember that every loophole is a hidden assumption of Bell's proof. The fact that there are loopholes tells you that BLR is not exhaustive of all LR.

5. Since scientific consensus is that BLR and LR are the same thing, and you disagree, the only meaningful way to disagree is to give an example of a LR theory that is not BLR. Agreed?
No. If you think the scientific consensus is that BLR and LR are the same thing, then you have not been paying attention; if it were, this thread would not exist, and the loopholes would not exist.

6. In case you want to say that de Raedt's models are such kind of example, I repeat once again: NO, they ARE NOT.
If you say de Raedt's models are not examples of LR which are not accounted for by Bell's LR, I repeat once again: YES THEY ARE. You see, this kind of discussion takes us nowhere. Explain why they are not.

de Raedt showed (as was already known) that the experiments to test Bell's inequalities were not perfect (there are loopholes), and because of these experimental flaws their results can be explained in a LR way. Agreed?
That is a very narrow reading of de Raedt's work. Did you completely fail to understand the importance of de Raedt's Deterministic Learning Machine model?

7. But (the crucial point!) de Raedt's model is obviously not only LR, but BLR as well!
If de Raedt's model is BLR, then how do you explain the fact that the model violates the inequality, when according to Bell that is impossible? Think before you answer. If you want to say that it will violate the inequality only under certain conditions, then you still face the question of how some BLR model can violate the inequality under certain conditions. There is no escaping here.

IF a loophole-free test of Bell's inequalities is ever done and IF Bell's inequalities are still found to be violated, then de Raedt won't be able to explain this with his model (why? because of Bell's theorem). Agreed? Please think a bit before answering.
This is circular reasoning. A loophole-free test of Bell's inequality is required to validate the inequality in the first place. Violation of Bell's inequality in any experiment has two possible explanations, not just one:
1) Bell's inequality is a correct representation of local reality, and the experiment is either not real, or not local, or both.
2) Bell's inequality is not a correct representation of local reality.

Now, for some reason, Bell's followers ALWAYS gravitate towards (1). Do you agree that (2) is also a possibility and MUST be considered together with (1) when interpreting the results of these experiments? Please, I need a specific answer to this question.

I'm asking you to think, because you wrote that "Bell's equations do not apply to a deterministic learning machine model like de Raedt's". This is just plain wrong. Of course they do apply! De Raedt's model is perfectly BLR.
You have no idea what you are talking about. Even ardent Bell believers have shown that not all LR models are accounted for in BLR. See http://arxiv.org/abs/quant-ph/0205016 for one example. Bell's starting equation is the following:
P(AB) = \sum_i P(A|a,\lambda_i)\, P(B|b,\lambda_i)\, P(\lambda_i)
This equation does not apply in situations in which \lambda_{i+1} depends on \lambda_i, as is the case in de Raedt's model. The reason is simple: if case (i) and case (i+1) are not mutually exclusive, you cannot integrate or, as in this case, perform a sum the way Bell did.
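(For contrast, it is easy to check numerically that a model which does fit the factorized form above, i.e. a fresh λ drawn independently for every pair with each outcome depending only on the local setting and λ, respects the CHSH bound of 2, as Bell's theorem requires. The deterministic rule below is an arbitrary illustrative choice, not de Raedt's or anyone else's model; it happens to saturate the bound at these angles:)

```python
import numpy as np

rng = np.random.default_rng(1)

def outcome(setting, lam):
    # deterministic local rule: the outcome depends only on the local
    # setting and the shared hidden variable (an illustrative choice)
    return np.sign(np.cos(2.0 * (lam - setting)))

def E(a, b, lam):
    # correlation for an anti-correlated pair sharing the same lambda
    return float(np.mean(outcome(a, lam) * -outcome(b, lam)))

# lambda drawn i.i.d. per pair -- exactly Bell's independence assumption
lam = rng.uniform(0.0, np.pi, 500_000)
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8
S = abs(E(a1, b1, lam) - E(a1, b2, lam) + E(a2, b1, lam) + E(a2, b2, lam))
print(S)   # at most 2 (up to sampling noise); QM predicts 2*sqrt(2) here
```

Whether a model with serially dependent λ (as in the DLM) escapes this bound is exactly what the two sides of this thread disagree about.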

8. You mentioned the recent de Raedt article with Hess (http://arxiv.org/abs/0901.2546). I've seen it and I took a brief look, but I didn't read it carefully and I failed to understand the crux of it. I just don't want to investigate 40+ pages of formulas, when I already know that de Raedt's reasoning is often confusing and misleading, and that Hess is well known for fighting with Bell's theorem, though his claims were long ago shown wrong by people whose opinion I respect on this issue (see http://arxiv.org/abs/quant-ph/0208187).
This explains why you will never understand him. Apparently, as soon as you see Hess or de Raedt, you put on green goggles. The article you posted as disproving Hess is nothing short of a joke (see http://arxiv.org/abs/quant-ph/0307092).
If you have read and understood this 40+ page article, everybody here will be grateful if you give us the arguments in a concise and clear way.
If you are interested in understanding the opposing position, you will make the effort to read and understand their arguments before purporting to refute them. Since it appears you have access to de Raedt personally, why don't you ask him to explain the article to you concisely? That will be much better than any of my efforts to explain his work to a hostile audience.
9. For some reason you completely ignored my point about double-slit paper of de Raedt. You were first to mention it! Did you read it? If you did, could you please comment on what I said earlier? If you didn't, how come you use it in the arguments?
You claimed that some photons were lost in their double-slit simulation. This is wrong! All photons reach the detector and affect the outcome of the experiment. Maybe what you were trying to say is that in their model, not all photons result in a click. In any case, do you have experimental evidence proving that all photons leaving the source MUST result in a click at the detector in a double-slit experiment?
 
