What do violations of Bell's inequalities tell us about nature?

Summary
Violations of Bell's inequalities suggest that either non-locality or anti-realism must be true in quantum mechanics, but they do not definitively imply one over the other. Bell's theorem indicates that classical locality cannot be maintained within quantum theory, challenging traditional materialist views. Some participants argue that without a clear mechanism, accepting non-locality is problematic, while others express skepticism about interpretations like superdeterminism or many worlds due to their untestable nature. The discussion highlights a divide in preferences for either anti-realism or non-locality, with many calling for more experimental evidence to clarify these interpretations. Ultimately, the implications of Bell's inequalities remain a complex and unresolved issue in the foundations of quantum physics.

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local

    Votes: 10 31.3%
  • Anti-realism (quantum measurement results do not pre-exist)

    Votes: 15 46.9%
  • Other: Superdeterminism, backward causation, many worlds, etc.

    Votes: 7 21.9%

  • Total voters
    32
  • #31
danR said:
I don't see how 'action at a distance' applies to entanglement in quantum world, even by analogy, where/(if) there is no 'action' or 'distance'.

Well, following the Bohm interpretation of quantum mechanics, the weird statistics are explained through action at a distance via an instantaneous "quantum potential" term in the equations of motion.
 
  • #32
stevendaryl said:
Here's a wild idea that's probably nonsensical, but I wonder if anyone has investigated it: Kaluza-Klein theory introduced the trick of having extra spatial dimensions that are unobservable because they are wrapped into tiny little circles. I'm wondering if there is some topology that can be constructed using extra dimensions, so that, essentially, every point in space is the same, very short, distance from every other point, if one travels in the hidden dimensions. For illustration, imagine a flat sheet of paper, crumpled into a ball and compressed to a tiny volume. Travel within the plane of the paper is unaffected by the crumpling, but the crumpling allows a "short-cut" between any two points, by traveling perpendicular to the plane of the paper.
Interesting stevendaryl, but I think that whatever you're getting at is way over my head.

stevendaryl said:
So I'm wondering if there is a way to understand the "instantaneous" quantum interactions of Bohm theory as interactions that only seem instantaneous because they only travel a short distance.
In line with danR's statement, I don't think that instantaneous action at a distance is understandable. There's no mechanics, no propagation, no time for any sort of physical interaction. I view it as basically a collection of terms that function as a placeholder for our ignorance and refer to something that happens in the mathematics of a theory.

But it sounds like you might be able to fashion some sort of novel mathematical contrivance or other. Not that that would provide any understanding either, but then mathematical contrivances (and placeholders) don't have to. They just need to help facilitate the calculation of accurate quantitative predictions.
 
  • #33
stevendaryl said:
Well, following the Bohm interpretation of quantum mechanics, the weird statistics are explained through action at a distance via an instantaneous "quantum potential" term in the equations of motion.
But the statistics aren't weird. They're understandable through the QM incorporation and application of classical laws.
 
  • #34
nanosiborg said:
But the statistics aren't weird. They're understandable through the QM incorporation and application of classical laws.

They seem pretty weird to me. When you are measuring, for instance, the projection of the spin of an electron on the z-axis, I think it's understandable that the result may be nondeterministic. The measurement process may interact with the electron in an uncontrollable way, and so a deterministic prediction might not be possible. But if that electron is part of an electron-positron twin pair, then it's weird to me that you can tell with absolute certainty that if you measure spin-up in the z-direction, then whoever checks the spin of the positron will find spin-down in the z-direction.

That's the weirdness of quantum randomness--not the randomness by itself, but the combination of randomness with a kind of certainty of the distant correlations.
 
  • #35
stevendaryl said:
They seem pretty weird to me. When you are measuring, for instance, the projection of the spin of an electron on the z-axis, I think it's understandable that the result may be nondeterministic. The measurement process may interact with the electron in an uncontrollable way, and so a deterministic prediction might not be possible. But if that electron is part of an electron-positron twin pair, then it's weird to me that you can tell with absolute certainty that if you measure spin-up in the z-direction, then whoever checks the spin of the positron will find spin-down in the z-direction.

That's the weirdness of quantum randomness--not the randomness by itself, but the combination of randomness with a kind of certainty of the distant correlations.
So, what can be inferred from the predictability of distant correlations? Can it be said, for example, that there has been an invariant relationship between entangled particles created through the entangling process, i.e., through common source, interaction, common motion imparted to particles that don't have a common source and have never interacted, etc.? If so, does this seem weird? It doesn't to me, and the fact that the totality of results of optical Bell tests is in line with the conservation laws and optics principles further supports that view.
 
  • #36
nanosiborg said:
So, what can be inferred from the predictability of distant correlations? Can it be said, for example, that there has been an invariant relationship between entangled particles created through the entangling process, i.e., through common source, interaction, common motion imparted to particles that don't have a common source and have never interacted, etc.? If so, does this seem weird?

Yes.

It doesn't to me, and the fact that the totality of results of optical Bell tests is in line with the conservation laws and optics principles further supports that view.

My general feeling is that if you don't find quantum mechanics weird, you haven't thought about it enough. Conservation laws don't by themselves explain the correlations.

Think about the following situation: You prepare an electron with spin-up along some axis \vec{S}. Then later you measure its spin along a different axis \vec{A}. Then the result will be non-deterministic: with a certain probability, the electron will be found afterwards to have spin-up in the \vec{A} direction, and with a certain probability, it will be spin-down. In either case, the angular momentum of the electron was changed by the measurement: its final angular momentum is not the same as its initial angular momentum. That isn't a violation of conservation of angular momentum, because you can attribute the change to the interaction between the detector and particle. The angular momentum of the particle changes, and the angular momentum of the detector changes in a complementary way, so that the total angular momentum is unchanged by the detection process. But note that there is a small amount of angular momentum, \delta \vec{L} transferred from the electron to the detector.

Now, if that electron happened to have come from an EPR twin-pair experiment, then each of the two detectors can be expected to receive a tiny amount of angular momentum from whichever particle is detected. But in the case of perfectly aligned detectors, we know that the \delta \vec{L_1} received by one detector must exactly correlate with the \delta \vec{L_2} received by the other detector, so that the resulting spins of the twin particles are perfectly anti-correlated.

So the perfect anti-correlation is not simply a matter of conservation of angular momentum. Angular momentum would be conserved whether or not the twin particles are found to be anti-correlated--it's just that different amounts of angular momentum would be transferred to the detectors. The perfect anti-correlation of twin pairs is a matter of cooperation between nondeterministic processes involving distant macroscopic objects (the detectors).
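
For concreteness, here is a minimal sketch of the standard QM prediction at issue (the usual singlet state, textbook notation): for |\psi\rangle = \frac{1}{\sqrt{2}}(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle), the correlation of spin measurements along unit vectors \vec{a} and \vec{b} is E(\vec{a},\vec{b}) = \langle(\vec{\sigma}_1\cdot\vec{a})(\vec{\sigma}_2\cdot\vec{b})\rangle = -\vec{a}\cdot\vec{b}. For parallel detectors (\vec{a} = \vec{b}) this gives E = -1, i.e. perfect anti-correlation, even though each individual outcome is random with probability 1/2.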
 
  • #37
nanosiborg said:
If you take Bell's formulation to be generalizable, and I do, then QM-compatible LHV models of quantum entanglement are definitively ruled out. Beyond that, violations of Bell inequalities tell us nothing about nature.
That's where the disagreement is with those who contend that Bell's formulation does not make those further assumptions, like hidden-variables, realism, etc. As one example of such authors making those arguments consider Norsen:
One can divide reasons for disagreement (with Bell’s own interpretation of the significance of his theorem) into two classes. First, there are those who assert that the derivation of a Bell Inequality relies not just on the premise of locality, but on some additional premises as well. The usual suspects here include Realism, Hidden Variables, Determinism, and Counter-Factual-Definiteness. (Note that the items on this list are highly overlapping, and often commentators use them interchangeably.) The idea is then that, since it is only the conjunction of locality with some other premise which is in conflict with experiment, and since locality is so strongly motivated by SR, we should reject the other premise. Hence the widespread reports that Bell’s theorem finally refutes the hidden variables program, the principle of determinism, the philosophical notion of realism, etc.
Norsen also discusses why Bell felt that his theorem does tell us something about nature:
Since all the crucial aspects of Bell’s formulation of locality are thus meaningful only relative to some candidate theory, it is perhaps puzzling how Bell thought we could say anything about the locally causal character of Nature. Wouldn’t the locality condition only allow us to assess the local character of candidate theories? How then did Bell think we could end up saying something interesting about Nature?...That is precisely the beauty of Bell’s theorem, which shows that no theory respecting the locality condition (no matter what other properties it may or may not have – e.g., hidden variables or only the non-hidden sort, deterministic or stochastic, particles or fields or both or neither, etc.) can agree with the empirically-verified QM predictions for certain types of experiment. That is (and leaving aside the various experimental loopholes), no locally causal theory in Bell’s sense can agree with experiment, can be empirically viable, can be true. Which means the true theory (whatever it might be) necessarily violates Bell’s locality condition. Nature is not locally causal.
Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

With respect to a discussion of Bell's concept of local causality see this paper with this interesting quote:
That is, the idea that SR is compatible with non-local causal influences (but only prohibits non-local signaling) seems afflicted by the same problem (reviewed in Section III) that necessarily afflicts theories whose formulations involve words like “observable”, “microscopic”, “environment”, etc. In particular, the notion of “signaling” seems somehow too superficial, too anthropocentric, to adequately capture the causal structure of Figure 1.
J.S. Bell’s Concept of Local Causality
http://arxiv.org/pdf/0707.0401.pdf
 
  • #38
bohm2 said:
That's where the disagreement is with those who contend that Bell's formulation does not make those further assumptions, like hidden-variables, realism, etc. As one example of such authors making those arguments consider Norsen:

Norsen also discusses why Bell felt that his theorem does tell us something about nature:

Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

With respect to a discussion of Bell's concept of local causality see this paper with this interesting quote:

J.S. Bell’s Concept of Local Causality
http://arxiv.org/pdf/0707.0401.pdf
Pages 9 & 10 of the Bell vs. Jarrett paper are about the completeness of λ. From both these papers it seems that Bell presupposes that completeness holds, while at the same time he limits and qualifies the completeness of λ to properties of candidate theories. So there is a conflict over completeness. I cannot accept the conclusion that, because no locally causal theory agrees with experiment, nature is nonlocal. Rather, it is the description of λ, the hidden variable, that is not complete; once it is, the violations of the inequalities can be understood. I voted to reject realism, in its limited definition.
 
  • #39
Hi folks. I voted for "non-locality". And so, incidentally, did Bell -- though, being dead, he is unable to vote in this particular poll. But here are his words (from the classic paper "Bertlmann's socks and the nature of reality"):

"Let us summarize once again the logic that leads to the impasse. The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel. If we do not accept the intervention on one side as a causal influence on the other, we seem obliged to admit that the results on both sides are determined in advance anyway, independently of the intervention on the other side, by signals from the source and by the local magnet setting. But this has implications for non-parallel settings which conflict with those of quantum mechanics. So we cannot dismiss intervention on one side as a causal influence on the other."

For the convenience of the people who are confused here (i.e., the people who voted that we should conclude, from Bell's theorem, that "realism" is wrong) I have bolded the relevant part of the argument above. Note that it is just the EPR argument. The point is that "realism" just means the existence of variables which determine, in advance, what the result on each side will be. What Bell points out here -- and what EPR already pointed out long ago -- is that such variables are (i.e., "realism" is) the *only* way to account *locally* for the perfect correlations that are observed "whenever the analyzers happen to be parallel". So the idea that we can still account for the QM predictions with a model that respects locality but denies "realism" is simply wrong. It will not, does not, and can not work.

Still don't agree? Still think that one can have a local explanation of even this small subset of the quantum predictions -- namely, the perfect correlations that are observed "whenever the analyzers happen to be parallel"? Let's see the model. (Note: the model should also respect the "free choice" aka "no conspiracies" assumption, if it is to be taken seriously.)

This is a serious challenge. Anybody who voted for (b) in the poll evidently thinks (or at least is unwittingly committed to thinking) that these perfect correlations can be explained by a local, non-realist model. Let's see it.
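
To put the challenge quantitatively (just a sketch, in the standard notation): once locality plus the perfect correlations force predetermined outcomes A(\vec{a},\lambda) = \pm 1 and B(\vec{b},\lambda) = \pm 1 (with B(\vec{a},\lambda) = -A(\vec{a},\lambda)), then for any distribution \rho(\lambda) the correlations E(\vec{a},\vec{b}) = \int d\lambda \, \rho(\lambda) A(\vec{a},\lambda) B(\vec{b},\lambda) obey the CHSH bound |E(\vec{a},\vec{b}) + E(\vec{a},\vec{b}') + E(\vec{a}',\vec{b}) - E(\vec{a}',\vec{b}')| \le 2, whereas QM predicts 2\sqrt{2} for suitable settings. So any would-be local non-realist model has to block the EPR step itself, not merely the inequality.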
 
  • #40
@ bohm2, re your post #38

I agree with Norsen, and Bell, that it's Bell's locality condition that causes Bell's LHV formulation to be incompatible with QM and experiments, and that realism (hidden variable models) is not ruled out. Bell locality is necessarily realistic, but a realistic model need not be Bell local. We know from deBB that realism isn't ruled out. Which leaves only locality.

I disagree with Norsen, and Bell, that violations of Bell's inequalities tell us anything about nature. I think that the incompatibility with QM and experiment is determined by some feature of Bell's locality condition other than the assumption of locality.
 
  • #41
nanosiborg said:
So, what can be inferred from the predictability of distant correlations? Can it be said, for example, that there has been an invariant relationship between entangled particles created through the entangling process, ie., through common source, interaction, common motion imparted to particles that don't have a common source and have never interacted, etc.? If so, does this seem weird?
stevendaryl said:
Yes.
Do you find it weird that particles which have interacted or have a common source are measurably related? Or is it weird that the quantum correlations can only be approximated by classical preparations (and only approximately described by classical LHV models)? I suppose it's the latter. But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (i.e., weird), or is it, as I suggested in an earlier post, just a matter of degree?

stevendaryl said:
My general feeling is that if you don't find quantum mechanics weird, you haven't thought about it enough.
Some of the interpretations of QM are weird, but I don't think of standard QM as weird. Is it possible that those who find QM weird haven't thought about it enough?

On the other hand, some quantum phenomena (the physical, instrumental stuff, not the theory) do seem weird, but I wouldn't include entanglement correlations in there.

stevendaryl said:
Conservation laws don't by themselves explain the correlations.
I agree, and I didn't say they do. But the conservation laws plus the applicable optics laws plus the repeatability of the preparations and the correlations don't seem so weird. The correlations are quite unsurprising when all those things are taken into consideration.

[... snip nice discussion ...]

stevendaryl said:
So the perfect anti-correlation is not simply a matter of conservation of angular momentum. Angular momentum would be conserved whether or not the twin particles are found to be anti-correlated--it's just that different amounts of angular momentum would be transferred to the detectors.
OK.

stevendaryl said:
The perfect anti-correlation of twin pairs is a matter of cooperation between nondeterministic processes involving distant macroscopic objects (the detectors).
As you said in your discussion, it's the individual results that are nondeterministic (i.e., random). Because the correlations are predictable (and the unknown underlying processes therefore apparently repeatable) we can retain the assumption that the processes are deterministic.

So, I would change your last sentence to read: the perfect anti-correlation of paired (entangled) particles is a matter of a repeatable relationship between, and deterministic evolution of, certain motional properties of the entangled particles subsequent to their creation via a common source, their interaction, or their being altered by identical stimuli. Which doesn't seem weird to me.
 
  • #42
nanosiborg said:
Bell locality is necessarily realistic, but a realistic model need not be Bell local.

I don't think that's right. Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here) since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local. And hence a counter-example to any claim that being Bell local requires/implies being "realist".
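
Here is a minimal simulation sketch of that coin-flipping model (hypothetical code, just to make the point vivid: it is local and non-realist, but it fails to reproduce the perfect anti-correlations):

import math
import random

def coin_flip_model(angle_a, angle_b):
    # Each particle ignores the settings and its distant partner entirely:
    # it just flips a fair coin at its own detector. Local, and non-realist
    # in the sense that no outcome is scripted in advance.
    result_a = random.choice([+1, -1])
    result_b = random.choice([+1, -1])
    return result_a, result_b

def correlation(model, angle_a, angle_b, trials=100000):
    # Estimate E(a, b) as the average of the product of the two outcomes.
    total = 0
    for _ in range(trials):
        a, b = model(angle_a, angle_b)
        total += a * b
    return total / trials

# Parallel analyzers: the singlet-state QM prediction is E = -1 (perfect
# anti-correlation); the coin-flip model gives E close to 0 instead.
print("coin-flip model, parallel settings:", correlation(coin_flip_model, 0.0, 0.0))
print("QM singlet prediction, parallel settings:", -math.cos(0.0))

So the model is a counter-example to "Bell local implies realist", but obviously not a candidate for reproducing the QM statistics.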


We know from deBB that realism isn't ruled out.

I think you must be using "realism" in a different way than most other people. deBB is a hidden variable theory, to be sure, but it is *not* a hidden variable theory about spin! That is, there is no fact of the matter, in deBB, about how a given particle will respond to a measurement of some component of its spin. This is sometimes described by saying that, for deBB, spin is a "contextual" property. It would be more accurate, though, to say that, in deBB, the particles simply do not have any such property as spin.


I disagree with Norsen, and Bell, that violations of Bell's inequalities tell us anything about nature. I think that the incompatibility with QM and experiment is determined by some feature of Bell's locality condition other than the assumption of locality.

I would be very interested to hear precisely what you have in mind. Have you carefully studied Bell's paper "la nouvelle cuisine" (where he is most explicit about how "locality" is formulated)? If you think the very formulation of "locality" smuggles in some other requirement, I want to know exactly what and how.
 
  • #43
nanosiborg said:
Do you find it weird that particles which have interacted or have a common source are measurably related?

As I thought I said, but maybe I just thought it :smile: it's certainly not weird that particles with a common history could share state information. For example, two people could agree on some random number, and then separate to large distances. Then there would be a nonlocal correlation due to shared state information from a common past.

It's weird that distant particles would be connected in any way other than shared state information.

But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (i.e., weird), or is it, as I suggested in an earlier post, just a matter of degree?

Yes, I think it's weird.

On the other hand, some quantum phenomena (the physical, instrumental stuff, not the theory) do seem weird, but I wouldn't include entanglement correlations in there.

I don't think you can separate entanglement from measurement. Or rather, entanglement is only weird to the extent that it implies nonlocal correlations between distant macroscopic measurements.

As you said in your discussion, it's the individual results that are nondeterministic (i.e., random). Because the correlations are predictable (and the unknown underlying processes therefore apparently repeatable) we can retain the assumption that the processes are deterministic.

So, I would change your last sentence to read: the perfect anti-correlation of paired (entangled) particles is a matter of a repeatable relationship between, and deterministic evolution of, certain motional properties of the entangled particles subsequent to their creation via a common source, their interaction, or their being altered by identical stimuli. Which doesn't seem weird to me.

Are you saying anything different from: It's not weird, because it's predicted by quantum mechanics? Whether something is weird or not is a matter of taste, I suppose.
 
  • #44
nanosiborg said:
Bell locality is necessarily realistic, but a realistic model need not be Bell local.
ttn said:
I don't think that's right. Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here) since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local. And hence a counter-example to any claim that being Bell local requires/implies being "realist".
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

ttn said:
I think you must be using "realism" in a different way than most other people. deBB is a hidden variable theory, to be sure, but it is *not* a hidden variable theory about spin! That is, there is no fact of the matter, in deBB, about how a given particle will respond to a measurement of some component of its spin. This is sometimes described by saying that, for deBB, spin is a "contextual" property. It would be more accurate, though, to say that, in deBB, the particles simply do not have any such property as spin.
As per my above, the particles don't have to have any property in particular. They're underlying entities (that presumably have some property or properties) that are denoted in the deBB model. As such, and as you note, deBB is a hidden variable theory, and thus, in my lexicon, a realistic theory. But, due to the nonmechanical (i.e., nonlocal via the quantum potential) aspects of the theory it's also not a Bell local theory. I think of standard QM as a nonrealistic theory that is also not a Bell local theory, although not nonlocal in exactly the same sense that deBB is deemed nonlocal.

ttn said:
I would be very interested to hear precisely what you have in mind. Have you carefully studied Bell's paper "la nouvelle cuisine" (where he is most explicit about how "locality" is formulated)?
I haven't studied "la nouvelle cuisine". I have read a few of Norsen's papers, including the one where he discusses Jarrett's parsing of Bell's locality condition. I'm inclined toward Jarrett's interpretation that Bell locality encodes the assumptions of statistical independence (that paired outcomes are statistically independent of each other) as well as the independence defined by the principle of local action (that the result at A is not dependent on the setting at b, and the result at B is not dependent on the setting at a).
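
In the usual notation (a sketch only, with λ and the settings a, b as before), those two assumptions read P(A|a,b,B,λ) = P(A|a,b,λ) (outcome independence: the paired outcomes are statistically independent of each other) and P(A|a,b,λ) = P(A|a,λ) (local action: the result at A does not depend on the setting b), and their conjunction is Bell's factorized condition P(A,B|a,b,λ) = P(A|a,λ)P(B|b,λ).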

Since Bell tests are prepared to produce outcome dependence, and since this does not necessarily inform regarding locality or nonlocality in nature, and since this might be the effective cause of the incompatibility between Bell LHVs and QM, and between Bell LHVs and experimental results, then violations of Bell inequalities don't inform regarding locality/nonlocality in nature.

There is another aspect to the form that Bell locality imposes on LHV models of quantum entanglement to consider. Any Bell LHV model of quantum entanglement must necessarily denote coincidental detection as a function of the product of the independent functions for individual detection at A and B. So the relevant underlying parameter determining coincidental detection is the same underlying parameter determining individual detection. I think the underlying parameter determining coincidental detection can be viewed as an invariant (per any specific run in any specific Bell test preparation) relationship between the motional properties of the entangled particles, and therefore a nonvariable underlying parameter. I'm not sure how to think about this. Is it significant? If so, how do we get from a randomly varying underlying parameter to a nonvarying underlying parameter?
 
  • #45
stevendaryl said:
As I thought I said, but maybe I just thought it :smile: it's certainly not weird that particles with a common history could share state information. For example, two people could agree on some random number, and then separate to large distances. Then there would be a nonlocal correlation due to shared state information from a common past.

It's weird that distant particles would be connected in any way other than shared state information.
I agree. That (e.g., nonlocally connected) would be weird. But I hope I've made it clear that I don't think the particles are connected in any way other than statistically through shared information imparted through local channels (common source, interaction, common 'zapping', etc.).

nanosiborg said:
But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (i.e., weird), or is it, as I suggested in an earlier post, just a matter of degree?
stevendaryl said:
Yes, I think it's weird.
Ok, so I take it that you find the invariance of the relationship between entangled particles in any particular run of any particular Bell test to be weird. But why should that be weird?

Consider, for example, the polarization entangled photons created via atomic cascades. Entangled photons are assumed to be emitted from the same atom (albeit a different atom for each entangled pair). Is it surprising (weird) that their spins and therefore their polarizations would be related in a predictable way via the application of the law of conservation of angular momentum? Is it surprising that each entangled pair would be related in the same way? After all, the emission process is presumably the same for each pair, and the selection process is the same for each pair.

stevendaryl said:
I don't think you can separate entanglement from measurement. Or rather, entanglement is only weird to the extent that it implies nonlocal correlations between distant macroscopic measurements.
Ok, I agree with this, and since I don't think the correlations imply nonlocal connections between distant macroscopic measurements (because I think they can be understood in terms of related properties produced via local channels, and because the correlations are in line with empirically based optics laws involving the analysis of polarizations via crossed polarizers), then I don't view the correlations as being weird.

stevendaryl said:
Are you saying anything different from: It's not weird, because it's predicted by quantum mechanics?
I think so. I'm saying that we can understand why QM predicts what it does in the case of Bell tests by referring to the applicable (e.g., conservation and optics) classical laws which are preserved in the QM treatment.

stevendaryl said:
Whether something is weird or not is a matter of taste, I suppose.
I would say that it's a matter of interpretation, and that interpretation isn't solely a matter of taste.
 
  • #46
nanosiborg said:
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

Yes, OK. So then the point is just that "hidden variable theories" (like, e.g., deBB) need not be "realist theories".

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

It's not correct that Bell's formulation of locality (i.e., "Bell locality") assumes the existence of hidden variables. Maybe we're still not quite on the same page about what "hidden variables" means, because we're not on the same page about what "underlying" means in your formulation above. Usually the phrase "hidden variable" is used to mean some *extra* thing, beyond just the standard wave function of ordinary quantum theory, that is in the mix. So then, e.g., deBB is a hidden variable theory because it uses not only the wave function, but also the added "definite particle positions", to account for the results. In any case, though, the point is that "Bell locality" does not presuppose "realism" and it also does not presuppose "hidden variables". You can meaningfully ask whether ordinary QM (not a hidden variable theory!) respects or violates "Bell locality". (It violates it.)
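
A quick sketch of that last claim, taking λ to be just the singlet wave function (the complete state description according to ordinary QM): for parallel settings, QM gives P(A=+1, B=+1 | a, a, ψ) = 0, whereas the factorized form would give P(A=+1 | a, ψ) P(B=+1 | a, ψ) = 1/2 × 1/2 = 1/4. The factorization fails, so ordinary QM violates Bell locality, with no hidden variables appearing anywhere in the argument.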


In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

OK, but then you're using the word "realistic" in a different way than (I think) most other people here do. I think most people use that word to mean that there are definite values pre-encoded in the particles somehow, such that there are meaningful answers to questions like: "What would the outcome have been if, instead of measuring along x, I had measured along y?"


As per my above, the particles don't have to have any property in particular. They're underlying entities (that presumably have some property or properties) that are denoted in the deBB model.

I certainly agree that it makes sense to call deBB "realist" by some meanings of the word "realist". But it is important to understand that the theory is *not* "realist" in the narrow sense I explained above. Stepping back, that's what I wanted to point out here. The word "realism" is a slippery bugger. Different people use it to mean all kinds of different things, such that miscommunication and misunderstanding tends to be rampant.

I think of standard QM as a nonrealistic theory that is also not a Bell local theory, although not nonlocal in exactly the same sense that deBB is deemed nonlocal.

Me too, though I'm not sure what the two "senses" of nonlocality here might be. They both violate "Bell locality". What other well-defined sense does anybody have in mind?


I haven't studied "la nouvelle cuisine". I have read a few of Norsen's papers, including the one where he discusses Jarrett's parsing of Bell's locality condition. I'm inclined toward Jarrett's interpretation that Bell locality encodes the assumptions of statistical independence (that paired outcomes are statistically independent of each other) as well as the independence defined by the principle of local action (that the result at A is not dependent on the setting at b, and the result at B is not dependent on the setting at a).

I'm this "norsen" guy, by the way. So, you know what I think of Jarrett already.


Since Bell tests are prepared to produce outcome dependence, and since this does not necessarily inform regarding locality or nonlocality in nature, and since this might be the effective cause of the incompatibility between Bell LHVs and QM, and between Bell LHVs and experimental results, then violations of Bell inequalities don't inform regarding locality/nonlocality in nature.

I can't follow this. Are you just repeating Jarrett's idea that "Bell locality" is actually the conjunction of two things, only one of which really deserves to be called "locality"? So then, from the mere fact that "Bell locality" is violated, we can't necessarily infer the (genuine) "locality" is violated? If that's it, you know I disagree, but if the "Bell vs. Jarrett" paper didn't convince you, nothing I can say here will either. =)
 
  • #47
Gordon Watson said:
Dear Travis, I'd be happy to submit a (say) 3-page PDF to support my rejection of nonlocality.

Would it directly answer the "challenge" I posted above (to explain the perfect correlations locally but without "realism")? If so, I don't see why you shouldn't be permitted to post it here. That's perfectly relevant to this thread.
 
  • #48
nanosiborg said:
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic. Could I ask for an example?

That's a trick question, of course. If you do come up with such an example I'll use it instead of Norsen's coin in his example to produce a local but not realistic model. If you can't, then I'll argue that something is wrong with your definition of realism because it includes everything.
 
  • #49
ttn said:
Yes, OK. So then the point is just that "hidden variable theories" (like, e.g., deBB) need not be "realist theories".
I'm using hidden variable theory and realistic theory interchangeably. So, any hidden variable theory is a realistic theory. Any theory which does not incorporate hidden variables is a nonrealistic theory.

ttn said:
It's not correct that Bell's formulation of locality (i.e., "Bell locality") assumes the existence of hidden variables. Maybe we're still not quite on the same page about what "hidden variables" means, because we're not on the same page about what "underlying" means in your formulation above. Usually the phrase "hidden variable" is used to mean some *extra* thing, beyond just the standard wave function of ordinary quantum theory, that is in the mix. So then, e.g., deBB is a hidden variable theory because it uses not only the wave function, but also the added "definite particle positions", to account for the results. In any case, though, the point is that "Bell locality" does not presuppose "realism" and it also does not presuppose "hidden variables". You can meaningfully ask whether ordinary QM (not a hidden variable theory!) respects or violates "Bell locality". (It violates it.)
If Bell locality doesn't require hidden variable representation, then how would Bell locality be formulated and incorporated into a model of a Bell test without the explicit denotation of a hidden variable, such as Bell's λ, that contributes to the determination of individual results?

Ok, you could write A(a) = ±1 and B(b) = ±1, but then your formulation has already deviated from one of the primary requirements of the exercise aimed at finding an answer to the suggestion that QM might be made a more complete theory, perhaps a more accurate (or at least a more heuristic) description of the physical reality with the addition of supplementary 'hidden' variables.

To further clarify how I'm using the terms underlying and hidden variable, underlying refers to the sub-instrumental 'quantum realm' where the evolution of the 'system' being instrumentally analyzed is assumed to be occurring. Hidden variable refers to unknown variable parameter(s) or property(ies) of the quantum system being instrumentally analyzed that are assumed to exist 'out there' in the 'quantum realm' in the pre-detection evolution of the system.

ttn said:
OK, but then you're using the word "realistic" in a different way than (I think) most other people here do. I think most people use that word to mean that there are definite values pre-encoded in the particles somehow, such that there are meaningful answers to questions like: "What would the outcome have been if, instead of measuring along x, I had measured along y?"
A hidden variable, such as Bell's λ, need not provide a meaningful answer to a question such as, "What would the outcome at A have been if, instead of the polarizer being set at 20° it had been set at 80°?", because λ can refer to any variable underlying parameter(s) or property(ies) of the system, or any collection thereof. The denotation of λ in the model acts as a placeholder for any unknown underlying parameter(s) or property(ies) which, together with the relevant instrumental variable(s), contribute to the determination of individual results. The hidden variable is needed in this way in order to explicitly denote that something in addition to the instrumental variable, something to do with the 'system' being analyzed, is determining the individual results, because this is what the LHV program, the attempt to answer the question of whether or not QM can be viably supplemented with underlying system parameters and made explicitly local, is predicated on.

ttn said:
I certainly agree that it makes sense to call deBB "realist" by some meanings of the word "realist". But it is important to understand that the theory is *not* "realist" in the narrow sense I explained above. Stepping back, that's what I wanted to point out here. The word "realism" is a slippery bugger. Different people use it to mean all kinds of different things, such that miscommunication and misunderstanding tends to be rampant.
I understand, I think. But I'm just using realistic synonymously with hidden parameter. If a theory includes explicit notation representing non-instrumental hidden (or underlying or unknown ... however it might be phrased) parameter(s), then it's a realistic theory, if not, then it isn't.

ttn said:
Me too, though I'm not sure what the two "senses" of nonlocality here might be. They both violate "Bell locality". What other well-defined sense does anybody have in mind?
Yes, I agree that the fact that they both violate Bell locality is the unambiguous criterion and statement of their non-(Bell)localness. What I had in mind was that the way in which deBB is explicitly nonlocal (and nonmechanical) through the quantum potential is a bit different than the way standard QM is (to some) explicitly nonlocal (and nonmechanical) through instantaneous collapse and establishment and projection of a principal axis subsequent to detection at one end or the other.

ttn said:
I'm this "norsen" guy, by the way. So, you know what I think of Jarrett already.
Oh, cool. Yes, I read that paper some time ago. I think that I don't quite understand your reason, your argument for dismissing Jarrett's idea. Maybe after reading it again I'll get it. If you have time, would a brief synopsis here, outlining the principal features of your argument, be possible?

ttn said:
I can't follow this. Are you just repeating Jarrett's idea that "Bell locality" is actually the conjunction of two things, only one of which really deserves to be called "locality"? So then, from the mere fact that "Bell locality" is violated, we can't necessarily infer the (genuine) "locality" is violated? If that's it, you know I disagree, but if the "Bell vs. Jarrett" paper didn't convince you, nothing I can say here will either. =)
Yes, that's basically it. I would say, following Jarrett, that Bell locality encodes two assumptions, one of which, the assumption that paired outcomes are statistically independent, is the effective cause of the incompatibility between Bell LHV and QM, and the incompatibility between Bell LHV and experiment, and that this doesn't tell us anything about locality or nonlocality in nature.

But, as I mentioned, I still have this feeling that I don't fully understand your argument against Jarrett ... but will say that if your argument is correct, then there wouldn't seem to be anything left but to conclude that nonlocality must be present in nature. (Unless the idea that this nonlocality must refer to instantaneous action at a distance is also correct, and then I have no idea what it could possibly mean.)
 
  • #50
Nugatory said:
If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic.

Good point! But I think the real lesson here is again just that "realistic" is used to mean all kinds of different things by all kinds of different people in all kinds of different contexts. There is surely a sense in which the coin-flipping-particles model could be considered "realistic" -- namely, it tells a perfectly clear and definite story about really-existing processes. There's nothing the least bit murky, unspeakable, metaphysically indefinite, or quantumish about it. So, if that's what "realistic" means, then it's realistic. But if "realistic" means instead specifically that there are pre-existing definite values (supporting statements about counter-factuals) then the coin-flipping-particles model is clearly not realistic.

So... anybody who talks about "realism" (and in particular, anybody who says that Bell's theorem leaves us the choice of abandoning "realism" to save locality) better say really really carefully exactly what they mean.

Incidentally, equivocation on the word "realism" is exactly how muddle-headed people manage to infer, from something like the Kochen-Specker theorem (which shows that you cannot consistently assign pre-existing definite values to a certain set of "observables"), that the moon isn't there when nobody looks.
 
  • #51
Nugatory said:
If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic. Could I ask for an example?
How would you represent it in a model? Can it be one of many possible hidden parameters collectively represented by λ? Let's say that λ is the universal convention for denoting hidden parameters, and, following Bell, that λ refers to any relevant underlying parameter. (We have no way of knowing what the relevant underlying parameters are, but whatever they are, λ refers to them.) Theories which include λ would be called realistic, and theories which don't include λ would be called nonrealistic.

Nugatory said:
That's a trick question, of course. If you do come up with such an example I'll use it instead of Norsen's coin in his example to produce a local but not realistic model. If you can't, then I'll argue that something is wrong with your definition of realism because it includes everything.
As I mentioned in my most recent reply to Norsen, I suppose you can make a model that's, in some sense, Bell local without λ. But that would pretty much defeat the purpose, which is to determine whether or not QM can be supplemented by hidden parameters, λ, and also be made explicitly local. (And of course Bell proved that it can't be. But Norsen maintains that Bell also proved that nature is nonlocal. Which I don't get.)

If you think that there's something wrong with λ including anything and everything, then your argument is with Bell's formulation ... I think.
 
  • #52
nanosiborg said:
I'm using hidden variable theory and realistic theory interchangeably. So, any hidden variable theory is a realistic theory. Any theory which does not incorporate hidden variables is a nonrealistic theory.

Well that's liable to cause confusion when you talk to other people here. But whatever. The main question is: do you think that Bell's theorem leaves us a choice of giving up locality OR giving up hidden variables? If so, perhaps you can answer my challenge: provide an example of a local (toy) model that successfully predicts the perfect correlations but without "hidden variables".


If Bell locality doesn't require hidden variable representation, then how would Bell locality be formulated and incorporated into a model of a Bell test without the explicit denotation of a hidden variable, such as Bell's λ, that contributes to the determination of individual results?

See Bell's paper "la nouvelle cuisine" (in the 2nd edition of "speakable and unspeakable"). Or see section 6 of

http://www.scholarpedia.org/article/Bell's_theorem

or (for more detail) this paper of mine:

http://arxiv.org/abs/0707.0401



Ok, you could write A(a) = ±1 and B(b) = ±1, but then your formulation has already deviated from one of the primary requirements of the exercise aimed at finding an answer to the suggestion that QM might be made a more complete theory, perhaps a more accurate (or at least a more heuristic) description of the physical reality with the addition of supplementary 'hidden' variables.

This way of writing it also presupposes determinism. See how Bell formulated locality in such a way that neither determinism nor hidden variables are presupposed.


A hidden variable, such as Bell's λ, need not provide a meaningful answer to a question such as, "What would the outcome at A have been if, instead of the polarizer being set at 20° it had been set at 80°?", because λ can refer to any variable underlying parameter(s) or property(ies) of the system, or any collection thereof. The denotation of λ in the model acts as a placeholder for any unknown underlying parameter(s) or property(ies) which, together with the relevant instrumental variable(s), contribute to the determination of individual results. The hidden variable is needed in this way in order to explicitly denote that something in addition to the instrumental variable, something to do with the 'system' being analyzed, is determining the individual results, because this is what the LHV program, the attempt to answer the question of whether or not QM can be viably supplemented with underlying system parameters and made explicitly local, is predicated on.

I don't really disagree with any of that, except the implication that this λ represents a (specifically) *"hidden"* variable -- i.e., something supplementary to the usual QM wave function. It is better to understand the λ as denoting "whatever a given theory says constitutes a complete description of the system being analyzed". For ordinary QM, λ would thus (in the usual EPR-Bell kind of setup) just be the 2-particle wave function of the particle pair. For deBB it would be the wave function plus the two particle positions. And so on. Of course the point is then that you can derive the inequality without any constraints on λ.


Yes, I agree that the fact that they both violate Bell locality is the unambiguous criterion and statement of their non-(Bell)localness. What I had in mind was that the way in which deBB is explicitly nonlocal (and nonmechanical) through the quantum potential is a bit different than the way standard QM is (to some) explicitly nonlocal (and nonmechanical) through instantaneous collapse and establishment and projection of a principal axis subsequent to detection at one end or the other.

I agree that the violation of Bell locality looks a bit different, or manifests differently, in the two theories. My point was just that, in the abstract as it were, the two non-localities are "the same" in the sense that, for both theories, something that happens at a certain space-time point is *affected* by something outside its past light cone.

Incidentally, I think you have the wrong idea about how deBB actually works. The "quantum potential" is a kind of pointless and weird way of formulating the theory that Bohm of course used, but basically nobody in the last 20-30 years who works on the theory thinks of it in those terms anymore. See this recent paper of mine (intended as an accessible introduction to the theory for physics students) to get a sense of how the theory should actually be understood:

http://arxiv.org/abs/1210.7265


Oh, cool. Yes, I read that paper some time ago. I think that I don't quite understand your reason, your argument for dismissing Jarrett's idea. Maybe after reading it again I'll get it. If you have time, would a brief synopsis here, outlining the principal features of your argument, be possible?

Sure. How about this super-brief one: Jarrett only thought that Bell's formulation of locality could be broken into two parts -- one that captures genuine relativistic causality, and the other some other unrelated thing -- because he misunderstood a crucial aspect of Bell's formulation. In particular, he didn't (fully) understand that (roughly speaking) what we were calling "λ" above should be understood as denoting what some candidate theory says constitutes a *complete* description of the state of the system prior to measurement. (He missed the "complete" part. Then he discovered that, if λ does *not* provide a complete description of the system, then violation of the condition does not necessarily imply non-locality! The violation could instead be blamed on the use of incomplete state descriptions! Hence his idea that "Bell locality" = "genuine locality" + "completeness". But in fact Bell already saw this coming and carefully formulated the condition to ensure that its violation would indicate genuine nonlocality. Jarrett simply missed this.)
 
  • #53
Gordon Watson said:
my theory is a proposed refutation of all Bell inequalities.

I do remember you from a year or so ago when I last posted here. I am not exactly chomping at the bit to discuss this with you. But if you email me something short (3 pages) I'll look at it and tell you what's wrong with it. I'm sure this won't convince you and I probably won't want to talk about it further, but I always enjoy finding the errors in such "refutations".
 
  • #54
ttn said:
But if "realistic" means instead specifically that there are pre-existing definite values (supporting statements about counter-factuals) then the coin-flipping-particles model is clearly not realistic.

It seems to me that the definition of "realistic" should not imply deterministic.
 
  • #55
stevendaryl said:
It seems to me that the definition of "realistic" should not imply deterministic.

Who's to say? Maybe the people who voted for (b) in the poll should say what they think they mean by it?

Incidentally, I wrote a whole paper about how "realism" is used to mean about 5 different things, none of which actually have anything to do with Bell's theorem:

http://arxiv.org/abs/quant-ph/0607057
 
  • #56
Gordon Watson said:
You issued an open challenge, so let's find a space to discuss it openly on-line.

PS: You're the physicist; surely physicists have such places?
..

:rolleyes: You said you wanted to learn. I made a generous offer. Not good enough? OK, forget it then.
 
  • #57
ttn said:
Incidentally, I think you have the wrong idea about how deBB actually works. The "quantum potential" is a kind of pointless and weird way of formulating the theory that Bohm of course used, but basically nobody in the last 20-30 years who works on the theory thinks of it in those terms anymore.
I know this is a side issue, but I find it so interesting that I thought I'd try to sneak it in since 'quantum potential' was brought up. I realize it is a minority position within the Bohmian camp, but some Bohmians do seem sympathetic to Bohm's suggestion of a "quantum potential", in contrast to minimalist Bohmians like Durr, Goldstein, Zanghi (DGZ). They suggest that Bohm's concept of the quantum potential may be useful compared with the minimalist Bohmian scheme. For example, Belousek writes:
On the DGZ view, then, the guidance equation allows for only the prediction of particle trajectories. And while correct numerical prediction via mathematical deduction is constitutive of a good physical explanation, it is not by itself exhaustive thereof, for equations are themselves 'causes' (in some sense) of only their mathematical-logical consequences and not of the phenomena they predict. So we are left with just particles and their trajectories as the basis within the DGZ view of Bohmian mechanics. But, again, are particle trajectories by themselves sufficient to explain quantum phenomena? Or, rather are particle trajectories, considered from the point of view of Bohmian mechanics itself, as much a part of the quantum phenomena that needs to be explained?...the mere existence of those trajectories is by itself insufficient for explanation. For example, to simply specify correctly the motion of a body with a certain mass and distance from the sun in terms of elliptical space-time orbit is not to explain the Earth's revolving around the sun but rather to redescribe that state of affairs in a mathematically precise way. What remains to be explained is how it is that the Earth revolves around the sun in that way, and within classical mechanics, Newton's law of universal gravitation and second law provide that explanation.
Formalism, Ontology and Methodology in Bohmian Mechanics
https://springerlink3.metapress.com...b5nwspxhjssd4c5c3cpgr&sh=www.springerlink.com

This was also discussed on another thread and the following comment by Maaneli makes a similar point:
There is a very serious and obvious problem with their interpretation; in claiming that the wavefunction is nomological (a law-like entity like the Hamiltonian as you said), and because they want to claim deBB is a fundamentally complete formulation of QM, they also claim that there are no underlying physical fields/variables/mediums in 3-space that the wavefunction is only a mathematical approximation to (unlike in classical mechanics where that is the case with the Hamiltonian or even statistical mechanics where that is the case with the transition probability solution to the N-particle diffusion equation). For these reasons, they either refuse to answer the question of what physical field/variable/entity is causing the physically real particles in the world to move with a velocity field so accurately prescribed by this strictly mathematical wavefunction, or, when pressed on this issue (I have discussed this issue before with DGZ), they simply deny that this question is meaningful. The only possibility on their view then is that the particles, being the only physically real things in the world (along with their mass and charge properties of course), just somehow spontaneously move on their own in such a way that this law-like wavefunction perfectly prescribes via the guiding equation. This is totally unconvincing, in addition to being quite a bizarre view of physics, in my opinion, and is counter to all the evidence that the equations and dynamics from deBB theory are suggesting, namely that the wavefunction is either a physically real field on its own or is a mathematical approximation to an underlying and physically real sort of field/variable/medium, such as in a stochastic mechanical type of theory.
http://74.86.200.109/showthread.php?t=247367&page=2
 
  • #58
ttn said:
Who's to say? Maybe the people who voted for (b) in the poll should say what they think they mean by it?

Incidentally, I wrote a whole paper about how "realism" is used to mean about 5 different things, none of which actually have anything to do with Bell's theorem:

http://arxiv.org/abs/quant-ph/0607057

I'll take a look.

On the other hand, nondeterminism doesn't really change much. With the classical kind of probability, it's always consistent to assume that nondeterminism is due to lack of knowledge of the details of the current state.
 
  • #59
stevendaryl said:
I'll take a look.

On the other hand, nondeterminism doesn't really change much. With the classical kind of probability, it's always consistent to assume that nondeterminism is due to lack of knowledge of the details of the current state.

Here's the real distinction between a quantum notion of "state of the world" and the kind of "state of the world" that is generally assumed in pre-quantum physics.

Classically, the state of the world "factors" into a product of local states. Roughly speaking, imagine dividing all of space into little cubes that are maybe 1 cubic light year. Then classically, everything there is to know about the state of the universe can be described by giving the state of things in each cube (the locations and momenta of particles within the cube, the values of fields within the cube), together with saying which cubes share a border with which other cubes.

What notion of the "state of the universe" doesn't meet this definition? Well, a probabilistic model need not. For example, if you say that a particular object has a 50/50 chance of being on Earth or on some other planet 10 light-years away (but not both), you can't describe this "state" as a product of local states. This is a classical kind of "entanglement", but it never bothered anybody, because nobody takes this kind of probabilistic model seriously as anything but a model of our knowledge of the universe, rather than the universe itself.
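
To make that concrete, a worked version of the 50/50 example: the joint distribution P(here, not there) = P(not here, there) = 1/2 and P(here, there) = P(not here, not there) = 0 has 50/50 marginals on each side, so any product of independent local distributions would assign probability 1/4 to each of the four cases. The actual joint distribution therefore cannot be written as a product of local states, even though nothing non-classical is going on.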
 
  • #60
ttn said:
Still think that one can have a local explanation of even this small subset of the quantum predictions -- namely, the perfect correlations that are observed "whenever the analyzers happen to be parallel"? Let's see the model.
It is called quantum mechanics.
(Note: the model should also respect the "free choice" aka "no conspiracies" assumption, if it is to be taken seriously.)
That is a bit strange! Why should it respect that in order to be taken seriously? Many physics theories that are taken seriously do not respect that.
 
