Entanglement and Bell's theorem. Is the non-locality real?

Sylvia Else
The results of measurements of phase entangled particles together with Bell's theorem provide pretty convincing evidence that the Universe contains non-local interactions.

Yet I'm led to wonder.

Let's imagine the usual idealised experimental scenario, where there is an emitter of particles in a twin state and two measuring devices on opposite sides of the system performing measurements in a space-like separated way. The measurements on one side of the system are not interesting in themselves. They are just random. They only become interesting when they are compared with the measurements from the other side, with a correlation being observed. We know that when performed appropriately, this will show that the measurement results are correlated in a way that, by Bell's theorem, cannot be explained by any local interaction - the measurements appear to be non-locally linked.
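(As a concrete illustration of that last point - this is just a toy sketch that assumes the textbook quantum prediction E(a,b) = -cos(a-b) for a spin singlet, not a model of any real apparatus - note how each side's results on their own are a fair coin, and the correlation only shows up once the two sets of results are compared.)

Code (C#):

using System;

class SingletToy
{
    static void Main()
    {
        var rand = new Random();
        double a = 0.0;               // Alice's analyser angle (radians)
        double b = Math.PI / 3.0;     // Bob's analyser angle (radians)
        int n = 1000000;
        long aliceUp = 0, bobUp = 0, productSum = 0;

        for (int i = 0; i < n; i++)
        {
            // Alice's individual result: a fair coin.
            int A = rand.Next(2) == 0 ? 1 : -1;

            // Singlet statistics: P(outcomes differ) = cos^2((a-b)/2),
            // which reproduces E(a,b) = -cos(a-b).
            double pDiffer = Math.Pow(Math.Cos((a - b) / 2.0), 2);
            int B = rand.NextDouble() < pDiffer ? -A : A;

            if (A == 1) aliceUp++;
            if (B == 1) bobUp++;
            productSum += A * B;
        }

        Console.WriteLine("Alice +1 fraction:      {0:F3}", (double)aliceUp / n);    // ~0.500
        Console.WriteLine("Bob   +1 fraction:      {0:F3}", (double)bobUp / n);      // ~0.500
        Console.WriteLine("E(a,b) from comparison: {0:F3}", (double)productSum / n); // ~ -cos(a-b) = -0.500
    }
}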

Now step back. Consider that the above is performed in complete isolation, except that the result of the final step - the comparison of the measurements - is transmitted to an outside observer. For the comparison to be made, the results have to be transferred from where they are made to a common place.

Since the experiment, except for the last step, is performed in isolation, the outside observer can regard the entire experimental situation as a superposition of quantum states with no decoherence except at the last step. The "measurements" are nothing but further entanglements between the twin-state particles and the measuring apparatus. The last step involves an interaction between the particles that carry the records of the earlier "measurements"; for a given twin-state pair, the states of those record-carrying particles are already entangled with each other. They now interact locally to produce the result of the comparison that is transmitted to the outside observer.
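Schematically, taking both devices to measure along the same axis for simplicity (this is just the standard decoherence bookkeeping, with grossly idealised apparatus states - nothing beyond what is described above):

$$|\Psi_0\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\rangle_A|\downarrow\rangle_B - |\downarrow\rangle_A|\uparrow\rangle_B\bigr)\,|\mathrm{ready}\rangle_{M_A}|\mathrm{ready}\rangle_{M_B}$$

$$\longrightarrow\ |\Psi_1\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\rangle_A|\mathrm{up}\rangle_{M_A}|\downarrow\rangle_B|\mathrm{down}\rangle_{M_B} - |\downarrow\rangle_A|\mathrm{down}\rangle_{M_A}|\uparrow\rangle_B|\mathrm{up}\rangle_{M_B}\bigr)$$

$$\rho_{M_A} = \mathrm{Tr}_{A,B,M_B}\,|\Psi_1\rangle\langle\Psi_1| = \tfrac{1}{2}\bigl(|\mathrm{up}\rangle\langle\mathrm{up}| + |\mathrm{down}\rangle\langle\mathrm{down}|\bigr)$$

Either record on its own is maximally mixed; a definite comparison outcome only appears once the two records are brought together and interact.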

So the outside observer can (in principle, at least) calculate the evolution of the system, and the final transmitted results (in a probabilistic sense), without needing to assume any non-local interactions. In particular, the outside observer cannot use Bell's theorem to prove that the system is non-local, because nothing in the system has the definite values that Bell's theorem requires.

From this perspective, it looks as if the appearance of non-locality in the system results from a false assumption by observers embedded in the system that they are somehow independent of it, and that their measurement results are definite values before they are compared.

A possible objection is that we can posit yet another observer outside the enlarged system that consists of the first system and the first outside observer, and do that again and again, making this look like some kind of infinite regress. However, the first outside observer sits at the first place where a definite result can be obtained immediately, without needing the enlarged system to evolve further.

Sylvia.
 
Sylvia Else said:
The results of measurements of phase entangled particles together with Bell's theorem provide pretty convincing evidence that the Universe contains non-local interactions.
Depends on what you mean by nonlocal interactions. If you mean instantaneous action at a distance, then that is physically meaningless. If you mean faster-than-light propagation, then that has not been demonstrated. It is true that, so far, only nonlocal hidden variable models of quantum entanglement are unquestionably viable. But that doesn't mean that they're a true description of reality. There are local hidden variable models of quantum entanglement that are open to question and interpretation. Whether nature is local or nonlocal is still an open question. All that's known for sure is that Bell-type models of quantum entanglement are ruled out, both mathematically and experimentally. Whether or not there might be another class of models that could be considered local realistic remains an open question.

The bottom line is that it cannot be definitively said, from Bell tests, that nature is nonlocal. Not because of experimental loopholes, but because the Bell formulation of local hidden variable models might not be general.

Sylvia Else said:
Let's imagine the usual idealised experimental scenario, where there is an emitter of particles in a twin state and two measuring devices on opposite sides of the system performing measurements in a space-like separated way. The measurements on one side of the system are not interesting in themselves. They are just random. They only become interesting when they are compared with the measurements from the other side, with a correlation being observed. We know that when performed appropriately, this will show that the measurement results are correlated in a way that, by Bell's theorem, cannot be explained by any local interaction - the measurements appear to be non-locally linked.
We only know that these experimental results can't be explained by Bell's formulation of a local hidden variable supplement to QM. We don't know that this is general. We don't know that there might not be other ways of formulating viable local hidden variable supplements to QM. It might seem to you that Bell has covered all the bases. But has he? Those who say that Bell's formulation is general posit, from experimental violations of Bell inequalities, that nature is nonlocal. Yet, there's no physical evidence for this. The fact of the matter is that it's currently an interpretational, philosophical issue that experimental results can't definitively provide the answer to.

I've omitted the rest of your post because I think it's irrelevant. Even if all experimental loopholes in Bell tests are eventually closed, then what does that mean? It means that Bell-type formulations of quantum entanglement are not viable. And whether or not nature is nonlocal remains an open question.

Why Bell-type formulations of quantum entanglement preparations are nonviable has been the subject of numerous publications.

Here's another thing to consider. Suppose that it's found that absolutely no local model of quantum entanglement can be formulated. Does that mean that quantum entanglement is nonlocal? No, it doesn't. This is because the correspondence of theory to reality is, and will always be, essentially unknown.
 
nanosiborg said:
We only know that these experimental results can't be explained by Bell's formulation of a local hidden variable supplement to QM. We don't know that this is general. We don't know that there might not be other ways of formulating viable local hidden variable supplements to QM.

It goes somewhat further than that. If the individual measurements have definite values at the time they are made, and the universe isn't cheating by exploiting some experimental loophole, then Bell's theorem shows that it is the measurements themselves that are non-locally linked, and it makes no difference at all what the physical underpinning is, whether it's QM or something else entirely. Whatever it is has to implement the non-locality inherent in the measurements, and the only way to avoid the non-locality is to show that Bell's theorem contains an assumption that is not actually met in reality, thus rendering the theorem inapplicable. One such assumption is that when you make a measurement, you get a definite result, then and there. If you drop that assumption, then Bell's theorem no longer forces you to accept non-locality. Instead the Universe needs some way of unravelling things later (preferably in a local manner) so that the results are consistent.
 
Sylvia Else said:
It goes somewhat further than that. If the individual measurements have definite values at the time they are made, and the universe isn't cheating by exploiting some experimental loophole, then Bell's theorem shows that it is the measurements themselves that are non-locally linked ...
No. It doesn't show that. What Bell's theorem shows is that a certain way of modelling quantum entanglement experiments isn't viable.

Sylvia Else said:
... and it makes no difference at all what the physical underpinning is, whether it's QM or something else entirely.
Nobody knows, or can ever know, what the "physical underpinning" is. It's a matter of metaphysical speculation.

Sylvia Else said:
Whatever it is has to implement the non-locality inherent in the measurements ...
There's no nonlocality inherent in the measurements. The measurements simply indicate correlations between instrumental parameters and rate of coincidental detection. The correlations might be entirely due to local interactions or they might be due to nonlocal interactions. Nobody knows, and experimental violations of Bell inequalities don't inform regarding this.

The correlations are such that empirical laws from classical optics are applicable to the QM treatment of entanglement. This simple fact would seem to indicate that whatever is going on in quantum entanglement has nothing to do with nonlocality.

Bell's theorem (experimental violations of Bell inequalities) can't show that nature is nonlocal. For those who want to believe that nature is nonlocal, then that's a metaphysical assumption for which there's absolutely no evidence.
 
Sylvia Else said:
It goes somewhat further than that. If the individual measurements have definite values at the time they are made, and the universe isn't cheating by exploiting some experimental loophole, then Bell's theorem shows that it is the measurements themselves that are non-locally linked, and it makes no difference at all what the physical underpinning is, whether it's QM or something else entirely. Whatever it is has to implement the non-locality inherent in the measurements, and the only way to avoid the non-locality is to show that Bell's theorem contains an assumption that is not actually met in reality, thus rendering the theorem inapplicable. One such assumption is that when you make a measurement, you get a definite result, then and there. If you drop that assumption, then Bell's theorem no longer forces you to accept non-locality. Instead the Universe needs some way of unravelling things later (preferably in a local manner) so that the results are consistent.

The usual escape from non-locality is denial of realism. That is, there is no definite value for the outcome of a measurement which is not performed. I believe this is "somewhat" akin to what you are saying. This is a normal part of Bell's Theorem, not a flaw in the theorem.

Your alternative is to postulate macroscopic entanglement, which you would need in order to keep the entanglement alive, because usually the results are recorded on computers and then compared. In your view, when the results are brought together, the entanglement ends.
 
DrChinese said:
Your alternative is to postulate macroscopic entanglement, which you would need in order to keep the entanglement alive, because usually the results are recorded on computers and then compared. In your view, when the results are brought together, the entanglement ends.

Something like that. I was looking at the decoherence model, in which nothing fundamental changes in a measurement, just that the thing being measured gets entangled with the measuring device, and then ultimately with more and more of its environment. The idea that the entanglement with the measuring device delivers a definite result that can be fed into Bell's theorem looks a bit suspect when the entire experiment is viewed from the outside.

I'm pondering whether one can come up with a version of the EPR experiment where the entanglements are sufficiently clearly specified that the system appears non-local from the inside (an admittedly vague term in this context), but is manifestly local from the outside (ditto). I confess that I haven't made much progress there.

Sylvia.
 
I think this is the essence of the argument made by advocates of the many-worlds interpretation in explaining why there doesn't need to be any violation of locality in their interpretation--basically if each measurement just puts each experimenter into a superposition of different measurement results, then there is no need for nature to decide which measurement result "here" is paired with which measurement result "there" until there's been time for a signal from each experimenter to reach someone at the midpoint. See for example Vindication of Quantum Locality by David Deutsch and The EPR paradox, Bell's inequality, and the question of locality by Guy Blaylock.
 
An interesting paper on this topic that came out today:
We give an extremely simple proof of Bell’s inequality: a single figure suffices. This simplicity may be useful in the unending debate of what exactly the Bell inequality means, since the hypothesis at the basis of the proof become extremely transparent...

Let us define “counterfactual” a theory whose experiments uncover properties that are pre-existing. In other words, in a counterfactual theory it is meaningful to assign a property to a system (e.g. the position of an electron) independently of whether the measurement of such property is carried out. Sometime this counterfactual definiteness property is also called “realism”, but it is best to avoid such philosophically laden term to avoid misconceptions...Bell’s theorem can be phrased as “quantum mechanics cannot be both local and counterfactual”. A logically equivalent way of stating it is “quantum mechanics is either non-local or non-counterfactual”...

This proves Bell’s theorem: all local counterfactual theories must satisfy inequality (1) which is violated by quantum mechanics. Then, quantum mechanics cannot be a local counterfactual theory: it must either be non-counterfactual (as in the Copenhagen interpretation) or non-local (as in the de Broglie-Bohm interpretation).
Simplest proof of Bell’s inequality
http://lanl.arxiv.org/pdf/1212.5214.pdf
 
bohm2 said:
An interesting paper on this topic that came out today:

Simplest proof of Bell’s inequality
http://lanl.arxiv.org/pdf/1212.5214.pdf

People clearly have differing notions of what constitutes simple ;)

The background to my interest in this issue is the experiments that continue to explore the lower bound on the speed of some hypothetical influences that implement the apparent non-locality. I think the experimenters themselves do understand that the results they're getting only have meaning if the influences are real, but this caveat tends to get lost when the results are reported in the popular science media. I was wondering whether these experiments are even worthwhile in the absence of any concrete evidence of the existence of the influences, and while other models of reality remain viable. Of course, one can say that doing experiments that test the limits of theory is necessarily part of what science is about. And that's true, but there are not infinite resources available for doing experiments, so it's inevitable that they get prioritised.

Just yesterday I came across this recent paper

http://arxiv.org/pdf/1210.7308.pdf

in which the authors claim to have proved that if phase entanglement involves influences that travel at a superluminal but finite speed, then it's inevitable that actual FTL communication be possible. That is, it's not possible for such influences to be forever hidden.

Note that their proof does not purport to apply to influences that are instantaneous (necessarily in some privileged reference frame), so some versions of the pilot wave theory would remain possible without allowing FTL communication.

Sylvia
 
  • #10
Sylvia Else said:
Just yesterday I came across this recent paper

http://arxiv.org/pdf/1210.7308.pdf

in which the authors claim to have proved that if phase entanglement involves influences that travel at a superluminal but finite speed, then it's inevitable that actual FTL communication be possible. That is, it's not possible for such influences to be forever hidden.

Interesting paper/hypothesis.

Question: if phase/quantum entanglement is considered random (a basic premise in QM), then even if "influences" are assumed to travel FTL, I cannot see how the authors can show that FTL communication/information would, even theoretically, be possible.
 
  • #11
San K said:
Interesting paper/hypothesis.

Question: if phase/quantum entanglement is considered random (a basic premise in QM), then even if "influences" are assumed to travel FTL, I cannot see how the authors can show that FTL communication/information would, even theoretically, be possible.

If influences are traveling at some FTL but finite speed, then it will necessarily be possible to arrange that some measurements on entangled particles are made so close together that the influence cannot arrive in time. The results would then have to respect Bell's inequality.
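(To put a rough number on it - the separation and influence speed below are made-up figures purely for illustration, not parameters from the paper:)

Code (C#):

using System;

class InfluenceTiming
{
    static void Main()
    {
        const double c = 3.0e8;            // speed of light, m/s
        double separation = 1.0e4;         // hypothetical detector separation: 10 km
        double influenceSpeed = 1.0e4 * c; // hypothetical influence travelling at 10^4 c

        double transitTime = separation / influenceSpeed; // ~3.3 nanoseconds

        Console.WriteLine("Influence transit time: {0:E2} s", transitTime);
        // Any pair of measurements completed within a window shorter than this
        // (in the relevant frame) could not be connected by the influence,
        // so the correlations would have to fall back to the Bell bound.
    }
}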

So if entanglement is actually mediated by FTL finite speed influences, then QM is capable of being falsified, with the required experiment simply not having been done yet.

But the second question is whether it would actually then be possible to exploit the influences. In the paper they show a simple case where an exploit is possible if entanglement is only mediated by such influences, but then show that the exploit is trivially avoided by adding a shared hidden variable. That leaves the question of whether an exploit could still be achieved despite any mechanism based on both FTL finite speed influences and hidden variables. They appear to be showing (I haven't checked the math - I'd have to learn some more of it first) that based on QM, it is possible to construct an exploit. Though as I write this, it occurs to me that their proof appears to be based on a theory, QM, that is, ex hypothesi, in any case wrong. The ramifications of that are currently beyond me.

Sylvia.
 
  • #12
I posted yesterday a short paper in which I show a deterministic model of quantum spin interactions that achieves the same correlations as QT with only luminal signaling in EPR/CHSH experiments. The paper includes the source code for the simulation. Although not indicated in the paper, in the talk I gave I also showed an experiment that can confirm or rule out the hypothetical mechanism that is the basis for the model. I am working on getting endorsed for arxiv, but in the meantime here it is:

https://docs.google.com/open?id=0BxBGJRkQXyjweXR2R3ExTlEyNm8
 
  • #13
mbd said:
The paper includes the source code for the simulation.

It appears to me from the code that the probability of Alice and Bob both detecting a given particle is a function of the difference between their respective measurement angles, being higher when the difference between the angles is smaller. This would skew the results.

On second thoughts, scrub that: a model is permitted to do that, provided it achieves it locally.
 
  • #14
bohm2 said:
An interesting paper on this topic that came out today:

Simplest proof of Bell’s inequality
http://lanl.arxiv.org/pdf/1212.5214.pdf
I think "quantumtantra.com/bell2.html" deserves the honor of simplest known proof of Bell's theorem.
 
  • #15
Thanks for looking at it, Sylvia! That code is there to identify the alpha2 and beta2 angle set, upon coincident detections, for which correlation and anti-correlation are swapped in the tally. The paper on the experiment by Rowe et al. has a really clear articulation of CHSH. I'm at a bar right now or I'd find a proper reference.
 
  • #16
Ok, I am bothered by the fact that a smaller difference between Bob's and Alice's angles leads to a greater probability of simultaneous detection, together with the code that inverts the meaning of correlation for angle differences less than PI/4. The latter seems exactly equivalent to inverting one result, say Alice's, immediately after detection, but only when the angles differ by less than PI/4. That would be manifestly non-local.
 
  • #17
It is an exploit of a class of coincidence loopholes. It is based on the size of the phenomenon relative to the size of the coincidence window. The counting is correct. If you add a linked list for the sequence of past states you can expand the window. You will also have to reduce the step size of the random walk of emmitt. This is the process of finding the constants in a new theory. It's hard...help needed!
 
  • #18
Note also that the other event may never happen. We're not necessarily talking about particles traveling and not being counted. Rather, the events actually happen at separate times, or perhaps the other never happens at all.
 
  • #19
I always feel troubled by Bell's Inequality, and it leads me to ask: what would the implications be if Bell's Inequality didn't exist? (i.e. what could be inferred if the behavior noted through Bell's Inequality didn't actually happen?) Suppose there was no statistical correlation to be found at all between what Alice, Bob, Joe, Gina or whoever observed?

Of course we know that it happens, and so it's troubling, because it means that something is going on that cannot be dismissed or overlooked. We just don't have the means to further probe it in more detail. Do we?

Metaphorically, it feels as if we are trapped inside a Cage known as SpaceTime, and that Bell's Inequality is revealing to us something that is dangling outside the cage beyond our reach. We cannot grab or grasp this thing, because we are constrained by the bars of SpaceTime.
We are unable to discern what is actually going on behind Bell's Inequality, because it is obscured by SpaceTime itself.
 
  • #20
Suppose two points in space have a relationship through which the state of each can be seen by the other via a speed-of-light mechanism (i.e. they see each other in the past), and that they react to that state. Then it is certainly possible for the other event not to happen, or to happen at a time outside the coincidence window.

Really, it is speed-of-light teleportation with a segment of time going from E to A and E to B, which I have shown can achieve the same results as instantaneous teleportation between A and B.

This makes it Einstein-local.

The model can, with a smart agent at E, achieve CHSH results as high as 4.0. That was the substance of my gedanken experiment. It very much involves the other event not happening at all unless both will correlate. The "smart" agent remembers the set of state information (angles) it gets, uses this to optimize its strategy of what angle to send down next, and these can be different for A and B. In this way, the agent can make any combination of events happen. There are even some potentially natural rules that can be devised to make it seem more plausible.
 
  • #21
If someone's read my paper and finds it compelling, and can endorse someone into arxiv, I'd really appreciate it! I'm working entirely independently so I don't have the automatic rights that folks with institutions get. Someone has promised to review it in mid-January, but I want to get it up there sooner.

Private message me if so. Thanks!
 
  • #22
sanman said:
Metaphorically, it feels as if we are trapped inside a Cage known as SpaceTime, and that Bell's Inequality is revealing to us something that is dangling outside the cage beyond our reach. We cannot grab or grasp this thing, because we are constrained by the bars of SpaceTime. We are unable to discern what is actually going on behind Bell's Inequality, because it is obscured by SpaceTime itself.
That is the gist of Gisin's argument:
To put the tension in other words: no story in space-time can tell us how nonlocal correlations happen, hence nonlocal quantum correlations seem to emerge, somehow, from outside space-time.
Quantum nonlocality: How does Nature perform the trick?
http://lanl.arxiv.org/pdf/0912.1475.pdf

Summarized here also:
If so, whatever causes entanglement does not travel from one place to the other; the category of “place” simply isn't meaningful to it. It might be said to lie *beyond* spacetime. Two particles that are half a world apart are, in some deeper sense, right on top of each other. If some level of reality underlies quantum mechanics, that level must be non-spatial.
How Quantum Entanglement Transcends Space and Time
http://www.fqxi.org/community/forum/topic/994?search=1
Looking Beyond Space and Time to Cope With Quantum Theory
http://www.sciencedaily.com/releases/2012/10/121028142217.htm
 
  • #23
How Quantum Entanglement Transcends Space and Time
http://www.fqxi.org/community/forum/topic/994?search=1
Looking Beyond Space and Time to Cope With Quantum Theory
http://www.sciencedaily.com/releases/2012/10/121028142217.htm

Or, through a simple, deterministic, arguably classical, mechanism using only speed-of-light interactions as described and linked to above, but here again for convenience:

https://docs.google.com/open?id=0BxBGJRkQXyjweXR2R3ExTlEyNm8

Per Occam's Razor, would you turn first to speed-of-light interactions from E to A and E to B that operate for an interval of time, or an instantaneous interaction between A and B? The former, of course. Unless you have a grant application pending for yet another EPR experiment? I've most certainly shown that the former can achieve the same results as QT and I have (you can too) tweaked the simulation to match any EPR experiment performed thus far.

The simulation I provide in the paper can even be implemented in circuitry with space-like separation between A and B. It will certainly result in CHSH > 2, over a broad range of random walk step sizes of the signal from E to A and E to B as well as over a broad range of coincidence windows.

In other words, I believe the game has been changed, pending experimental confirmation that nature actually works this way. But, Occam clearly suggests what experiment to do next (hint: not another traditional CHSH).

Michael B Devine
 
  • #24
bohm2, thank you for this reply - it was very helpful to me.

But the fact that we can discern at all that "something" is happening through Bell's Inequality is nevertheless a start. I feel that Occam's Razor can help us saw through the bars.

Suppose you have a complicated function H(x) which is producing a complicated result.
Suppose that H(x) can alternatively be expressed as a composite of two simpler functions F(x) and G(x),
i.e. H(x) = F(G(x)).
Then it might be worthwhile to use F(G(x)) instead of H(x).

I feel that the behavior of Bell's Inequality should be decomposed into signal and blur.
Some kind of signal is being passed along, but then blurred before we can see it clearly.
That blurring is what prevents information from being passed FTL, and we are unable to separate the signal from the blurring effect. To me, it seems like the blurring part is occurring because of spacetime itself - i.e. spacetime is doing the blurring, through its vacuum fluctuations. If it weren't for the blurring/random-noise component applied by spacetime, then the signal would be clear. But since we are all stuck in spacetime, we are unable to eliminate that blurring/random-noise.

I feel the key lies in investigating the nature of the noise being generated by spacetime via the vacuum fluctuations. We know from QED/Casimir-type experiments that we can shut out or exclude from a region of space some wavelengths of noise which are too large. I'm thinking that such noise reduction or noise manipulation could be used to further probe Bell's Inequality, to help put a bound on the errors from that noise, and to help us retrieve the signal.

For those who say "But you can't retrieve the signal - it's a violation of fundamental rules" I would reply that those rules are predicated on normal spacetime conditions. The manipulated spacetime conditions of QED/Casimir-type experiments could be used to bend the rules, which might help us to get a better glimpse of what has previously been too blurred/obscured to see.
 
  • #25
Bell's Inequality assumes implicitly that E can affect A or B only through discrete events. Instead, if there's a particle at E that is in a relationship with a particle at A and a particle at B, through which a stream of common information can be sent from E to A and B, then there is no need for extra dimensions, super-luminal effects, or other forms of religiosity that Einstein correctly dismissed as "spooky action at a distance". My paper proves this.

In fact, Bell's proof assumes a particle view of matter. Instead, if the "star stuff" are relationships, not particles, then Bell's proof is inapplicable. A graph transpose between particles and interactions gets one (mostly) there.
 
  • #26
Functions can be discrete, and there's no requirement they be continuous.

My point is that the overall results of Bell's Inequality need to be decomposed into signal and noise, and that the noise can be attributed to spacetime itself (i.e. its natural vacuum fluctuations). That then leaves the signal as the direct "relationship"/"bond"/"connection" between the particles on either side of this entanglement.

Entanglement would be a purer clearer relationship, if it were not for the noise-adding effects of spacetime via its fluctuations.
We cannot separate the noise-adding fluctuations away from the entanglement, because we're all inside this noisy spacetime thing that we can't get out of.

But in my opinion, it still helps to decompose the overall effect into 2 separate ones, even if we can't separate them in reality (yet).
 
  • #27
In a deterministic system, noise and randomness are reduced to being expressions of what one observer element of a system can glean about other elements in the system from the information the observer has access to in an Einstein-local sense. In other words, noise and randomness are conceptual aspects concerning the measurement problem rather than ontological aspects of the system itself.
 
  • #28
Oh, I see what you're saying - you mean that there's no relationship between the different entangled pairs that are observed. Sure, they're each separate and discrete entangled events, as you say - but I'm trying to address something else.

I'm saying that within an individual entangled pair, you want to take the final observed result and decompose it into signal and noise - and then you want to attribute that noise to spacetime itself.

What really needs to be done to make further progress on Bell's Inequality, is to combine it with the QED/Casimir experiments.
These 2 types of experiments (1.Bell's Inequality and 2.Casimir/QED) are like 2 different blind men feeling the same elephant. Both are looking at different aspects of the same thing.

The Casimir/QED experiments allow us to tamper with the noise-inducing fluctuations of spacetime -- the same ones that are obscuring the entanglement results of Bell's Inequality.
There must be some way to use QED manipulation to influence the observed results for Bell's Inequality.
By doing QED and Bell's together, it may hopefully be possible to appreciably reduce/manipulate the spacetime noise-level in a way that changes the observed results for collapsed entangled states.

I think we first have to accept that the noise is spacetime's fault. QED can help to change the level of noise that spacetime has.
 
  • #29
No, I'm not saying that at all. I am offering a more plausible explanation for what we traditionally understand to be an entangled pair that travels, like particles in flight, in opposite directions from the emitter to the detectors, and that we know of only because we detect two events within a coincidence window. More plausible (imho) is a relationship of non-zero temporal length between a particle E at the emitter and particles A and B at the detectors (E:A and E:B in the syntax of my paper) through which state information travels at the speed of light. This system is adequate to achieve correlations that violate Bell's inequality. That's what my paper proves, linked to above. I provided simulation code to show this too, and that code achieves 2√2.
 
  • #30
So what can usefully be achieved from your explanation then?
 
  • #31
sanman said:
So what can usefully be achieved from your explanation then?

That speed-of-light interactions are sufficient to explain the correlations predicted and observed in EPR experiments. Further, that the system can be Einstein-local and realistic. And, deterministic, not random.

Best of all, it can be experimentally tested by a proposed experiment I briefly describe in the following thread, and that I presented at the APS/NW conference a couple months ago.

https://www.physicsforums.com/showthread.php?t=659363
 
  • #32
Okay, but is there some practical application that can be achieved from it?
Can it be used for communication purposes?
 
  • #33
mbd said:
The paper includes the source code for the simulation. Although not indicated in the paper,

I converted the code to Java, as I'm not set up to run C#, or if I am, I'm not aware of how.

The converted code produced much the same results, though I didn't feel like letting it run for a million iterations (trials).

A particular trial completes if either Alice, or Bob, or both perform a detection, but the angles emittPast, Alice and Bob are only used for calculating the correlation if both Alice and Bob detect. There is a known assumption in CHSH that the probability of a detection by both Alice and Bob must be independent of the particles being measured - this is the fair sampling assumption. Equivalently, for each particle, the probability of a detection by both Alice and Bob must be independent of the actual correlation that would be found between them for that particle.

I inserted some extra code to test a suspicion I had. The code seems to confirm my suspicion, which was that a detection by both Alice and Bob was more likely to occur if their respective results are correlated than if they are not. This appears to violate the fair sampling assumption, and means that the violation of the inequality is not particularly surprising.
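To make that concrete, here is a stripped-down toy example (nothing to do with the detail of mbd's actual code - the local model and the acceptance probabilities are just numbers I picked for illustration) showing how counting only coincidences, in a way that favours agreeing results, inflates the measured correlation even in a purely local, deterministic model.

Code (C#):

using System;

class FairSamplingSketch
{
    static int Outcome(double angle, double lambda)
    {
        // Deterministic local rule: depends only on the local setting and the shared lambda.
        return Math.Cos(2.0 * (angle - lambda)) >= 0 ? 1 : -1;
    }

    static void Main()
    {
        var rand = new Random();
        double a = 0.0, b = Math.PI / 8.0;   // example analyser settings
        int n = 1000000;
        double fullSum = 0; long fullCount = 0;
        double postSum = 0; long postCount = 0;

        for (int i = 0; i < n; i++)
        {
            double lambda = rand.NextDouble() * Math.PI;  // shared hidden variable
            int A = Outcome(a, lambda);
            int B = Outcome(b, lambda);

            fullSum += A * B; fullCount++;

            // Unfair sampling: agreeing pairs are always "coincident",
            // disagreeing pairs only half the time (arbitrary illustrative numbers).
            double pAccept = (A == B) ? 1.0 : 0.5;
            if (rand.NextDouble() < pAccept) { postSum += A * B; postCount++; }
        }

        Console.WriteLine("E(a,b) over all pairs:      {0:F3}", fullSum / fullCount);
        Console.WriteLine("E(a,b) over 'coincidences': {0:F3}", postSum / postCount);
        // The second number is pulled towards +1 simply because of how pairs were selected.
    }
}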
 
  • #34
It's light itself, sanman, which, as I understand it, has many practical purposes
 
  • #35
Sylvia, the simulation counts all events that would be counted in an idealized CHSH experiment where only entangled pairs are involved, only one entangled pair is active at a time, and the coincidence window is less than the time-step.

You can change the code in the following way to convince yourself of this: Continue the random-walk on E until both A and B have been detected. Then, write out all detections with the setting, the result, and the timestamp. Later, process the file to find detections that occurred at both A and B within your coincidence window. If you choose a coincidence window smaller than the time-step, you will get the same result as the code produces.

I question the assumption inherent to Bell and CHSH that entanglement begins with the emission of two particles and ends with the detection of two particles (assuming perfect detection). It is this flawed assumption that leads to spooky action at a distance, too many worlds, and all that.
 
  • #36
I don't understand what Bell's inequality is refuting. Why does local realism predict a different outcome? The probability for a photon to pass through a polarizer is the cos of the angle between it and the polarizer. So if you put a source of linearly polarized photons through a polarizer oriented at 45° to the axis of polarization, .707 of them go through it.

If you set up Bell's experiment then you are making a cut on a statistical sample of photons which average out to be polarized in the direction of polarizer A, and asking how many identical ones make it through polarizer B, and it's .707. Why is this surprising? How could it be any different? I feel like I'm taking crazy pills! I really can't see any reason anyone comes to the conclusion that there is something superluminal going on, especially since the correlation between A and B is not known until Alice walks over to where Bob is and they compare notes.

About your paper, it doesn't seem to matter that E is updated in the iteration of the loop that comes before its comparison with A and B. So how does this make a comment on the speed of information travel? The simulation could be interpreted as a model of two detectors, A and B, at the same location as past E.

I recreated your code in LabVIEW and I get .678 for the correlation.

[STRIKE]Is this
emmitt += (2 * rand.Next(2) - 1) * Math.PI / 4;
the same as
emmitt += (rand.Next(4) - 1) * Math.PI / 4;
which would tend to increase emmitt instead of maintaining equal probability to be negative?[/STRIKE]

This seems like it's measuring the probability of your random number generator generating numbers within a certain range.

Edit: Oh I see rand.Next(2) returns an integer, I get .707 now
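For anyone else who trips over the same thing, here's a quick sanity check (just a few throwaway lines, not from mbd's paper) that 2 * rand.Next(2) - 1 really is an unbiased ±1 step, since rand.Next(2) returns 0 or 1:

Code (C#):

using System;

class StepCheck
{
    static void Main()
    {
        var rand = new Random();
        int plus = 0, minus = 0;
        for (int i = 0; i < 100000; i++)
        {
            int step = 2 * rand.Next(2) - 1;   // always exactly -1 or +1
            if (step == 1) plus++; else minus++;
        }
        Console.WriteLine("+1: {0}   -1: {1}", plus, minus);  // roughly 50/50
    }
}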
 
  • #37
Greg-ulate said:
If you set up Bell's experiment then you are making a cut on a statistical sample of photons which average out to be polarized in the direction of polarizer A, and asking how many identical ones make it through polarizer B, and it's .707. Why is this surprising? How could it be any different? I feel like I'm taking crazy pills! I really can't see any reason anyone comes to the conclusion that there is something superluminal going on, especially since the correlation between A and B is not known until Alice walks over to where Bob is and they compare notes.

Did you look at this link, posted earlier by bohm2?

http://lanl.arxiv.org/pdf/1212.5214.pdf

In Bell's theorem we're not just concerned with the measurements that are actually made. We're also considering the other measurements that could have been made instead, and how nature balances its books with entangled particles so that the numbers come out right in the end regardless of which measurements we choose to make.

Bell's theorem says in essence that there is no way in which nature can achieve this if the result of a given measurement depends only on that measurement and not on the distant one. Since the measurements can be space-like separated, this means that whatever nature is doing cannot be mediated by influences travelling at or below the speed of light, and thus that whatever it is must be non-local.
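For concreteness, these are just the textbook quantum predictions for polarization-entangled photons at the standard CHSH angles (nothing experiment-specific is assumed); any theory in which each result is fixed locally in advance has to keep |S| ≤ 2.

Code (C#):

using System;

class ChshNumbers
{
    // Textbook quantum prediction for polarization-entangled photons.
    static double E(double a, double b) { return Math.Cos(2.0 * (a - b)); }

    static void Main()
    {
        double deg = Math.PI / 180.0;
        double a  = 0.0 * deg,  a2 = 45.0 * deg;   // Alice's two settings
        double b  = 22.5 * deg, b2 = 67.5 * deg;   // Bob's two settings

        double S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2);

        Console.WriteLine("E(a,b)   = {0:F3}", E(a, b));    //  0.707
        Console.WriteLine("E(a,b')  = {0:F3}", E(a, b2));   // -0.707
        Console.WriteLine("E(a',b)  = {0:F3}", E(a2, b));   //  0.707
        Console.WriteLine("E(a',b') = {0:F3}", E(a2, b2));  //  0.707
        Console.WriteLine("S = {0:F3}  (local bound: 2)", S); // 2.828 = 2*sqrt(2)
    }
}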

Sylvia.
 
  • #38
Greg-ulate said:
I don't understand what Bell's inequality is refuting. Why does local realism predict a different outcome?
I think this easy-to-understand explanation of Bell's theorem should help:
http://quantumtantra.com/bell2.html
 
  • #39
mbd said:
Or, through a simple, deterministic, arguably classical, mechanism using only speed-of-light interactions as described and linked to above, but here again for convenience:

https://docs.google.com/open?id=0BxBGJRkQXyjweXR2R3ExTlEyNm8

Per Occam's Razor, would you turn first to speed-of-light interactions from E to A and E to B that operate for an interval of time, or an instantaneous interaction between A and B? The former, of course. Unless you have a grant application pending for yet another EPR experiment? I've most certainly shown that the former can achieve the same results as QT and I have (you can too) tweaked the simulation to match any EPR experiment performed thus far.

The simulation I provide in the paper can even be implemented in circuitry with space-like separation between A and B. It will certainly result in CHSH > 2, over a broad range of random walk step sizes of the signal from E to A and E to B as well as over a broad range of coincidence windows.

In other words, I believe the game has been changed, pending experimental confirmation that nature actually works this way. But, Occam clearly suggests what experiment to do next (hint: not another traditional CHSH).

Michael B Devine

Ah, you are treading into some deep water here!

If you are ready to take the DrChinese challenge, we will find out what your code is made of. To make your claim successfully, you MUST present results at a third angle other than for Alice and Bob. So we will need to see some results for Chris as well. Otherwise you fail the realism requirement.

Let me know if you are ready to go down this road. Else you will be forced to move this line of discussion elsewhere. In fact that may be necessary anyway.
 
  • #40
DrChinese, you are moving the goal line! I have shown that CHSH experiments, as performed thus far, cannot rule out the mechanism I've proposed. And, I have proposed an experimental modification to CHSH to confirm or rule out this mechanism. The mechanism is most certainly local realistic. That said, I am NOT claiming that it is capable of performing everything that nature can.

Further, the reason I have shared it is that I figure others may find it compelling and may want to test it, theoretically, against other phenomena.

Regarding a third angle, note there are four angles in the code. Can you be more specific as to your challenge? Are you asking whether the mechanism honors Malus' Law? (it does)

Lastly, your last paragraph is thoroughly rude and inappropriate.

Perhaps you're reacting to my careless reference to EPR experiments when I meant CHSH EPR experiments?
 
  • #41
Greg-ulate said:
I don't understand what Bell's inequality is refuting. Why does local realism predict a different outcome? The probability for a photon to pass through a polarizer is the cos of the angle between it and the polarizer. So if you put a source of linearly polarized photons through a polarizer oriented at 45° to the axis of polarization, .707 of them go through it.

If you set up Bell's experiment then you are making a cut on a statistical sample of photons which average out to be polarized in the direction of polarizer A, and asking how many identical ones make it through polarizer B, and it's .707. Why is this surprising? How could it be any different? I feel like I'm taking crazy pills! I really can't see any reason anyone comes to the conclusion that there is something superluminal going on, especially since the correlation between A and B is not known until Alice walks over to where Bob is and they compare notes.

About your paper, it doesn't seem to matter that E is updated in the iteration of the loop that comes before its comparison with A and B. So how does this make a comment on the speed of information travel? The simulation could be interpreted as a model of two detectors, A and B, at the same location as past E.

I recreated your code in LabVIEW and I get .678 for the correlation.

[STRIKE]Is this
emmitt += (2 * rand.Next(2) - 1) * Math.PI / 4;
the same as
emmitt += (rand.Next(4) - 1) * Math.PI / 4;
which would tend to increase emmitt instead of maintaining equal probability to be negative?[/STRIKE]

This seems like it's measuring the probability of your random number generator generating numbers within a certain range.

Edit: Oh I see rand.Next(2) returns an integer, I get .707 now

The issue relates to values for angles other than A and B. For example, when A, B, C are 0, 120, 240 degrees respectively, you cannot have a data set in which the cos^2 relationship is maintained independently of selecting which two are actually being observed. That means the outcome is observer dependent, which violates the premise of realism. Try it with about 10 sets of triple values and you will quickly see the problem:

A  B  C
Y  N  Y
Y  N  N
N  Y  N
etc.

Then compare AB, BC and AC. You want each pair to match 25% of the time, since cos^2(120 degrees) = .25.
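Spelling that counting argument out (a small sketch of the same point; the Y/N assignments below are simply all the possibilities for predetermined answers at the three angles):

Code (C#):

using System;

class TripleCounting
{
    static void Main()
    {
        // For any predetermined Y/N answers at the three angles (0, 120, 240 degrees),
        // at least one of the three pairs AB, BC, AC must agree, so the average match
        // rate over the three pairs is at least 1/3 - it can never be pushed down to
        // the quantum prediction cos^2(120 deg) = 0.25 for all pairs.
        double worstCaseAverage = 1.0;

        // Enumerate all 8 possible predetermined assignments (A, B, C), each Y(+1) or N(-1).
        for (int mask = 0; mask < 8; mask++)
        {
            int A = (mask & 1) != 0 ? 1 : -1;
            int B = (mask & 2) != 0 ? 1 : -1;
            int C = (mask & 4) != 0 ? 1 : -1;

            int matches = 0;
            if (A == B) matches++;
            if (B == C) matches++;
            if (A == C) matches++;

            double avg = matches / 3.0;
            worstCaseAverage = Math.Min(worstCaseAverage, avg);

            Console.WriteLine("A={0,2} B={1,2} C={2,2}  pairs matching: {3}/3", A, B, C, matches);
        }

        Console.WriteLine("Minimum average match rate over the three pairs: {0:F3}", worstCaseAverage); // 0.333
        Console.WriteLine("Quantum prediction for each pair at these angles: 0.250");
    }
}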
 
  • #42
mbd said:
DrChinese, you are moving the goal line! I have shown that CHSH experiments, as performed thus far, cannot rule out the mechanism I've proposed. And, I have proposed an experimental modification to CHSH to confirm or rule out this mechanism. The mechanism is most certainly local realistic. That said, I am NOT claiming that it is capable of performing everything that nature can.

Further, the reason I have shared it is that I figure others may find it compelling and may want to test it, theoretically, against other phenomena.

Regarding a third angle, note there are four angles in the code. Can you be more specific as to your challenge? Are you asking whether the mechanism honors Malus' Law? (it does)

Lastly, your last paragraph is thoroughly rude and inappropriate.

Perhaps you're reacting to my careless reference to EPR experiments when I meant CHSH EPR experiments?

Rude? Not I, my friend. Forum rules are at issue; I'm just trying to point that out. I am not an admin, so I am not the decision maker on this point. Posting unpublished assertions going against the mainstream can bring a thread to a halt. So please be careful how you say it. There are no accepted local realistic explanations for the observed results of experiments. Yours does not qualify as such. And I am not the one moving the goal posts, you are. Realism implies observer independence, a key element of the Bell proof.
 
  • #43
DrChinese, the model I've proposed is local realistic. It only involves speed-of-light interaction, albeit direct particle-to-particle. Physics already accepts that particles are affecting each other through fields at the speed of light.

Regarding appropriateness, two participants in this thread have already run the simulation code I provided, one of them being the person who asked the original question, "is the non-locality real?", and both achieved the same results. So, I think this thread is proving to be well within the spirit of the forum.

If nothing else, it serves as an illumination of the significant potential of one of the loopholes to Bell's Theorem. That alone goes well toward answering the question.
 
  • #44
Sylvia Else said:
If influences are traveling at some FTL but finite speed, then it will necessarily be possible to arrange that some measurements on entangled particles are made so close together that the influence cannot arrive in time.

So if entanglement is actually mediated by FTL finite speed influences, then QM is capable of being falsified, with the required experiment simply not having been done yet.

You might find this paper interesting: http://www.gap-optique.unige.ch/wiki/news:20121115-1142-choose_one_non-locality_or_superluminal_signalling
 
  • #45
mbd said:
DrChinese, the model I've proposed is local realistic. It only involves speed-of-light interaction, albeit direct particle-to-particle. Physics already accepts that particles are affecting each other through fields at the speed of light.

I do not see any good proof that the model is indeed realistic. The method DrChinese proposed is the standard way of checking whether it is. This is not moving the goal line by any means. We have seen like 50 to 60 proposals for local realistic models in these forums over the past few years which could reproduce Bell experiments or CHSH experiments for two angles. Some of these proposals contained simple errors in the code, but most of them indeed could not give results when picking three different angles and turned out to be not realistic. So unless you can indeed show that your code works for three arbitrary angles, you have not shown anything interesting so far. It is just a fact that many models, which seem realistic, are in fact not. So you need to rule that out.

Even so, these forums are not the right place for pushing personal theories.
 
  • #46
mbd said:
DrChinese, the model I've proposed is local realistic. It only involves speed-of-light interaction, albeit direct particle-to-particle. Physics already accepts that particles are affecting each other through fields at the speed of light.

Regarding appropriateness, two participants in this thread have already run the simulation code I provided, one of them being the person who asked the original question, "is the non-locality real?", and both achieved the same results. So, I think this thread is proving to be well within the spirit of the forum.

If nothing else, it serves as an illumination of the significant potential of one of the loopholes to Bell's Theorem. That alone goes well toward answering the question.

It is local, not realistic.

So: Loophole in Bell's Theorem is what you just said. Please indicate a suitable place your theory has been published or some other indication your ideas are generally accepted. Barring that, your ideas are inappropriate here.
 
  • #47
DrChinese, I will stipulate to the fact that there may have been a slight breach of forum rules if you will accept that the consequence has been a lively discussion from which we all, including myself, have learned something?

That said, it is patently unfair to single out my postings when this forum is chock full of posts with links to unreviewed, unpublished, articles on arxiv. In fact, another active thread today has multiple instances of that. Will you please inform the participants in that thread too of your objections to their behavior?

https://www.physicsforums.com/showthread.php?t=482657
 
  • #48
mbd said:
DrChinese, I will stipulate to the fact that there may have been a slight breach of forum rules if you will accept that the consequence has been a lively discussion from which we all, including myself, have learned something?

That said, it is patently unfair to single out my postings when this forum is chock full of posts with links to unreviewed, unpublished, articles on arxiv. In fact, another active thread today has multiple instances of that. Will you please inform the participants in that thread too of your objections to their behavior?

https://www.physicsforums.com/showthread.php?t=482657

I must state that I was surprised that I was given infraction points for asking questions pertinent to an article on arxiv, about which ZapperZ and I exchanged private messages on whether peer-reviewed work is superior to unpublished articles.
 
  • #49
According to definitions in Zeilinger (2010), http://www.pnas.org/content/107/46/19708.full.pdf, the mechanism I described is local and realistic:

Realism is a world view “according to which external reality is assumed to exist and have definite properties, whether or not they are observed by someone” (22). Locality is the concept that, if “two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system” (1).

The mechanism I proposed, however, is not "local realistic" as defined in the same paper. But, "local and realistic" is not the same as "local realistic", hence the confusion here.

Zeilinger is a good example to use too because the mechanism I described can operate through the experiment - it is NOT ruled out by Zeilinger - because, in Zeilinger, the optical pathways are open, albeit changing, for the duration of the experiment.

The Zeilinger paper also contains an excellent explanation of conditional probability and Bayes' rule, and how it leads to the necessity of closing the "freedom of choice" loophole.
 
  • #50
mbd said:
DrChinese, I will stipulate to the fact that there may have been a slight breach of forum rules if you will accept that the consequence has been a lively discussion from which we all, including myself, have learned something?

That said, it is patently unfair to single out my postings when this forum is chock full of posts with links to unreviewed, unpublished, articles on arxiv. In fact, another active thread today has multiple instances of that. Will you please inform the participants in that thread too of your objections to their behavior?

https://www.physicsforums.com/showthread.php?t=482657

No, the issue is whether the concepts are consistent with generally accepted science. I am an advisor here, not a mentor or admin. You are discussing an issue that I follow closely, and frequently debate with folks who are still learning the nuances of the area. For those who are asking, there is rarely an issue. For those who are telling others, you should follow established protocol.

If you will simply stop pushing your theory, defending it, explaining it, etc, all is fine. Otherwise I will report it. I, unlike you, bear no burden of proof. As I have already told you, your idea is not realistic. I have also defined, for your example, what realism would require. And you have already been told by a second advisor who is knowledgeable that this is standard science.

The OP asked whether non-locality is real. It may be. Otherwise, locality is not realistic. This is Bell.
 