What do violations of Bell's inequalities tell us about nature?

In summary: violations of Bell's inequalities don't imply that nature is nonlocal ... though it's tempting to assume that nature is nonlocal by virtue of the fact that nonlocal hidden variable models of quantum entanglement are viable.

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local

    Votes: 10 31.3%
  • Anti-realism (quantum measurement results do not pre-exist)

    Votes: 15 46.9%
  • Other: Superdeterminism, backward causation, many worlds, etc.

    Votes: 7 21.9%

  • Total voters
    32
  • #36
nanosiborg said:
So, what can be inferred from the predictability of distant correlations? Can it be said, for example, that there has been an invariant relationship between entangled particles created through the entangling process, ie., through common source, interaction, common motion imparted to particles that don't have a common source and have never interacted, etc.? If so, does this seem weird?

Yes.

nanosiborg said:
It doesn't to me, and the fact that the totality of results of optical Bell tests is in line with the conservation laws and optics principles further supports that view.

My general feeling is that if you don't find quantum mechanics weird, you haven't thought about it enough. Conservation laws don't by themselves explain the correlations.

Think about the following situation: You prepare an electron with spin-up along some axis [itex]\vec{S}[/itex]. Then later you measure its spin along a different axis [itex]\vec{A}[/itex]. Then the result will be non-deterministic: with a certain probability, the electron will be found afterwards to have spin-up in the [itex]\vec{A}[/itex] direction, and with a certain probability, it will be spin-down. In either case, the angular momentum of the electron was changed by the measurement: its final angular momentum is not the same as its initial angular momentum. That isn't a violation of conservation of angular momentum, because you can attribute the change to the interaction between the detector and particle. The angular momentum of the particle changes, and the angular momentum of the detector changes in a complementary way, so that the total angular momentum is unchanged by the detection process. But note that there is a small amount of angular momentum, [itex]\delta \vec{L}[/itex] transferred from the electron to the detector.
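(To put numbers on "a certain probability": for a spin-1/2 particle prepared spin-up along [itex]\vec{S}[/itex] and then measured along [itex]\vec{A}[/itex], with [itex]\theta[/itex] the angle between the two axes, standard QM gives

[tex]P(\text{up along } \vec{A}) = \cos^2(\theta/2), \qquad P(\text{down along } \vec{A}) = \sin^2(\theta/2),[/tex]

so except at [itex]\theta = 0[/itex] or [itex]\pi[/itex] the individual outcome really is undetermined by the preparation.)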

Now, if that electron happened to have come from an EPR twin-pair experiment, then each of the two detectors can be expected to receive a tiny amount of angular momentum from whichever particle is detected. But in the case of perfectly aligned detectors, we know that the [itex]\delta \vec{L_1}[/itex] received by one detector must exactly correlate with the [itex]\delta \vec{L_2}[/itex] received by the other detector, so that the resulting spins of the twin particles are perfectly anti-correlated.

So the perfect anti-correlation is not simply a matter of conservation of angular momentum. Angular momentum would be conserved whether or not the twin particles are found to be anti-correlated--it's just that different amounts of angular momentum would be transferred to the detectors. The perfect anti-correlation of twin pairs is a matter of cooperation between nondeterministic processes involving distant macroscopic objects (the detectors).
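(The standard QM predictions for the singlet state make the point quantitative: with the two detectors along [itex]\vec{a}[/itex] and [itex]\vec{b}[/itex] at relative angle [itex]\theta[/itex],

[tex]P(\text{same outcome}) = \sin^2(\theta/2), \qquad E(\vec{a},\vec{b}) = -\cos\theta,[/tex]

so at [itex]\theta = 0[/itex] the two individually random outcomes are perfectly anti-correlated, which is exactly the "cooperation" at issue.)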
 
  • #37
nanosiborg said:
If you take Bell's formulation to be generalizable, and I do, then QM-compatible LHV models of quantum entanglement are definitively ruled out. Beyond that, violations of Bell inequalities tell us nothing about nature.
That's where the disagreement is with those who contend that Bell's formulation does not make those further assumptions, like hidden-variables, realism, etc. As one example of such authors making those arguments consider Norsen:
One can divide reasons for disagreement (with Bell’s own interpretation of the significance of his theorem) into two classes. First, there are those who assert that the derivation of a Bell Inequality relies not just on the premise of locality, but on some additional premises as well. The usual suspects here include Realism, Hidden Variables, Determinism, and Counter-Factual-Definiteness. (Note that the items on this list are highly overlapping, and often commentators use them interchangeably.) The idea is then that, since it is only the conjunction of locality with some other premise which is in conflict with experiment, and since locality is so strongly motivated by SR, we should reject the other premise. Hence the widespread reports that Bell’s theorem finally refutes the hidden variables program, the principle of determinism, the philosophical notion of realism, etc.
Norsen also discusses why Bell felt that his theorem does tell us something about nature:
Since all the crucial aspects of Bell’s formulation of locality are thus meaningful only relative to some candidate theory, it is perhaps puzzling how Bell thought we could say anything about the locally causal character of Nature. Wouldn’t the locality condition only allow us to assess the local character of candidate theories? How then did Bell think we could end up saying something interesting about Nature?...That is precisely the beauty of Bell’s theorem, which shows that no theory respecting the locality condition (no matter what other properties it may or may not have – e.g., hidden variables or only the non-hidden sort, deterministic or stochastic, particles or fields or both or neither, etc.) can agree with the empirically-verified QM predictions for certain types of experiment. That is (and leaving aside the various experimental loopholes), no locally causal theory in Bell’s sense can agree with experiment, can be empirically viable, can be true. Which means the true theory (whatever it might be) necessarily violates Bell’s locality condition. Nature is not locally causal.
Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

With respect to a discussion of Bell's concept of local causality see this paper with this interesting quote:
That is, the idea that SR is compatible with non-local causal influences (but only prohibits non-local signaling) seems afflicted by the same problem (reviewed in Section III) that necessarily afflicts theories whose formulations involve words like “observable”, “microscopic”, “environment”, etc. In particular, the notion of “signaling” seems somehow too superficial, too anthropocentric, to adequately capture the causal structure of Figure 1.
J.S. Bell’s Concept of Local Causality
http://arxiv.org/pdf/0707.0401.pdf
 
  • #38
bohm2 said:
That's where the disagreement is with those who contend that Bell's formulation does not make those further assumptions, like hidden-variables, realism, etc. As one example of such authors making those arguments consider Norsen:

Norsen also discusses why Bell felt that his theorem does tell us something about nature:

Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

With respect to a discussion of Bell's concept of local causality see this paper with this interesting quote:

J.S. Bell’s Concept of Local Causality
http://arxiv.org/pdf/0707.0401.pdf
Pages 9 & 10 of the Bell vs. Jarrett paper are about the completeness of λ.
From both these papers it seems that Bell presupposes that completeness holds,
while at the same time limiting and qualifying the completeness of λ to properties of
candidate theories. So there is a conflict over completeness. And I cannot agree with the
conclusion that, because no locally causal theory agrees with experiment, nature is nonlocal.
Rather, it is the description of λ, the hidden variable, that is not complete.
And when it is complete, the violations of the inequalities can be understood.
I voted to reject realism, in its limited definition.
 
  • #39
Hi folks. I voted for "non-locality". And so, incidentally, did Bell -- though, being dead, he is unable to vote in this particular poll. But here are his words (from the classic paper "Bertlmann's socks and the nature of reality"):

"Let us summarize once again the logic that leads to the impasse. The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel. If we do not accept the intervention on one side as a causal influence on the other, we seem obliged to admit that the results on both sides are determined in advance anyway, independently of the intervention on the other side, by signals from the source and by the local magnet setting. But this has implications for non-parallel settings which conflict with those of quantum mechanics. So we cannot dismiss intervention on one side as a causal influence on the other."

For the convenience of the people who are confused here (i.e., the people who voted that we should conclude, from Bell's theorem, that "realism" is wrong) I have bolded the relevant part of the argument above. Note that it is just the EPR argument. The point is that "realism" just means the existence of variables which determine, in advance, what the result on each side will be. What Bell points out here -- and what EPR already pointed out long ago -- is that such variables are (i.e., "realism" is) the *only* way to account *locally* for the perfect correlations that are observed "whenever the analyzers happen to be parallel". So the idea that we can still account for the QM predictions with a model that respects locality but denies "realism" is simply wrong. It will not, does not, and can not work.

Still don't agree? Still think that one can have a local explanation of even this small subset of the quantum predictions -- namely, the perfect correlations that are observed "whenever the analyzers happen to be parallel"? Let's see the model. (Note: the model should also respect the "free choice" aka "no conspiracies" assumption, if it is to be taken seriously.)

This is a serious challenge. Anybody who voted for (b) in the poll evidently thinks (or at least is unwittingly committed to thinking) that these perfect correlations can be explained by a local, non-realist model. Let's see it.
 
  • #40
@ bohm2, re your post #38

I agree with Norsen, and Bell, that it's Bell's locality condition that causes Bell's LHV formulation to be incompatible with QM and experiments, and that realism (hidden variable models) is not ruled out. Bell locality is necessarily realistic, but a realistic model need not be Bell local. We know from deBB that realism isn't ruled out. Which leaves only locality.

I disagree with Norsen, and Bell, that violations of Bell's inequalities tell us anything about nature. I think that the incompatibility with QM and experiment is determined by some feature of Bell's locality condition other than the assumption of locality.
 
  • #41
nanosiborg said:
So, what can be inferred from the predictability of distant correlations? Can it be said, for example, that there has been an invariant relationship between entangled particles created through the entangling process, ie., through common source, interaction, common motion imparted to particles that don't have a common source and have never interacted, etc.? If so, does this seem weird?
stevendaryl said:
Yes.
Do you find it weird that particles which have interacted or have a common source are measurably related? Or is it weird that the quantum correlations can only be approximated by classical preparations (and only approximately described by classical LHV models)? I suppose it's the latter. But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (ie., weird), or is it, as I suggested in an earlier post, just a matter of degree?

stevendaryl said:
My general feeling is that if you don't find quantum mechanics weird, you haven't thought about it enough.
Some of the interpretations of QM are weird, but I don't think of standard QM as weird. Is it possible that those who find QM weird haven't thought about it enough?

On the other hand, some quantum phenomena (the physical, instrumental stuff, not the theory) do seem weird, but I wouldn't include entanglement correlations in there.

stevendaryl said:
Conservation laws don't by themselves explain the correlations.
I agree, and I didn't say they do. But the combination of the conservation laws, the applicable optics laws, and the repeatability of the preparations and the correlations doesn't seem so weird. The correlations are quite unsurprising when all those things are taken into consideration.

[... snip nice discussion ...]

stevendaryl said:
So the perfect anti-correlation is not simply a matter of conservation of angular momentum. Angular momentum would be conserved whether or not the twin particles are found to be anti-correlated--it's just that different amounts of angular momentum would be transferred to the detectors.
OK.

stevendaryl said:
The perfect anti-correlation of twin pairs is a matter of cooperation between nondeterministic processes involving distant macroscopic objects (the detectors).
As you said in your discussion, it's the individual results that are nondeterministic (ie., random). Because the correlations are predictable (and the unknown underlying processes therefore apparently repeatable) we can retain the assumption that the processes are deterministic.

So, I would change your last sentence to read: the perfect anti-correlation of paired (entangled) particles is a matter of a repeatable relationship between, and deterministic evolution of, certain motional properties of the entangled particles subsequent to their creation via a common source, their interaction, or their being altered by identical stimuli. Which doesn't seem weird to me.
 
  • #42
nanosiborg said:
Bell locality is necessarily realistic, but a realistic model need not be Bell local.

I don't think that's right. Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here) since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local. And hence a counter-example to any claim that being Bell local requires/implies being "realist".
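(A rough numerical sketch of this coin-flipping toy model, in Python with made-up names, makes the point explicit: the model is manifestly local and non-realist, yet at parallel settings it yields anti-correlation only about half the time, rather than the 100% that QM predicts and experiment confirms.)

[code]
import random

def coin_flip_model_trial():
    # Each particle carries its own coin and flips it at its own detector.
    # Nothing near particle 2 influences particle 1's outcome (locality), and
    # no outcome exists before the flip ("created on the fly", non-realism).
    outcome_1 = random.choice([+1, -1])
    outcome_2 = random.choice([+1, -1])
    return outcome_1, outcome_2

def fraction_anti_correlated(n_trials=100000):
    anti = 0
    for _ in range(n_trials):
        o1, o2 = coin_flip_model_trial()
        if o1 == -o2:
            anti += 1
    return anti / n_trials

# QM (and experiment) give 1.0 for parallel settings; this toy model gives about 0.5.
print(fraction_anti_correlated())
[/code]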


nanosiborg said:
We know from deBB that realism isn't ruled out.

I think you must be using "realism" in a different way than most other people. deBB is a hidden variable theory, to be sure, but it is *not* a hidden variable theory about spin! That is, there is no fact of the matter, in deBB, about how a given particle will respond to a measurement of some component of its spin. This is sometimes described by saying that, for deBB, spin is a "contextual" property. It would be more accurate, though, to say that, in deBB, the particles simply do not have any such property as spin.


nanosiborg said:
I disagree with Norsen, and Bell, that violations of Bell's inequalities tell us anything about nature. I think that the incompatibility with QM and experiment is determined by some feature of Bell's locality condition other than the assumption of locality.

I would be very interested to hear precisely what you have in mind. Have you carefully studied Bell's paper "la nouvelle cuisine" (where he is most explicit about how "locality" is formulated)? If you think the very formulation of "locality" smuggles in some other requirement, I want to know exactly what and how.
 
  • #43
nanosiborg said:
Do you find it weird that particles which have interacted or have a common source are measurably related?

As I thought I said (but maybe I just thought it :smile:), it's certainly not weird that particles with a common history could share state information. For example, two people could agree on some random number, and then separate to large distances. Then there would be a nonlocal correlation due to shared state information from a common past.

It's weird that distant particles would be connected in any way other than shared state information.

nanosiborg said:
But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (ie., weird), or is it, as I suggested in an earlier post, just a matter of degree?

Yes, I think it's weird.

nanosiborg said:
On the other hand, some quantum phenomena (the physical, instrumental stuff, not the theory) do seem weird, but I wouldn't include entanglement correlations in there.

I don't think you can separate entanglement from measurement. Or rather, entanglement is only weird to the extent that it implies nonlocal correlations between distant macroscopic measurements.

nanosiborg said:
As you said in your discussion, it's the individual results that are nondeterministic (ie., random). Because the correlations are predictable (and the unknown underlying processes therefore apparently repeatable) we can retain the assumption that the processes are deterministic.

So, I would change your last sentence to read: the perfect anti-correlation of paired (entangled) particles is a matter of a repeatable relationship between, and deterministic evolution of, certain motional properties of the entangled particles subsequent to their creation via a common source, their interaction, or their being altered by identical stimuli. Which doesn't seem weird to me.

Are you saying anything different from: It's not weird, because it's predicted by quantum mechanics? Whether something is weird or not is a matter of taste, I suppose.
 
  • #44
nanosiborg said:
Bell locality is necessarily realistic, but a realistic model need not be Bell local.
ttn said:
I don't think that's right. Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here) since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local. And hence a counter-example to any claim that being Bell local requires/implies being "realist".
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

ttn said:
I think you must be using "realism" in a different way than most other people. deBB is a hidden variable theory, to be sure, but it is *not* a hidden variable theory about spin! That is, there is no fact of the matter, in deBB, about how a given particle will respond to a measurement of some component of its spin. This is sometimes described by saying that, for deBB, spin is a "contextual" property. It would be more accurate, though, to say that, in deBB, the particles simply do not have any such property as spin.
As per my above, the particles don't have to have any property in particular. They're underlying entities (that presumably have some property or properties) that are denoted in the deBB model. As such, and as you note, deBB is a hidden variable theory, and thus, in my lexicon, a realistic theory. But, due to the nonmechanical (ie., nonlocal via the quantum potential) aspects of the theory, it's also not a Bell local theory. I think of standard QM as a nonrealistic theory that is also not a Bell local theory, although not nonlocal in exactly the same sense that deBB is deemed nonlocal.

ttn said:
I would be very interested to hear precisely what you have in mind. Have you carefully studied Bell's paper "la nouvelle cuisine" (where he is most explicit about how "locality" is formulated)?
I haven't studied "la nouvelle cuisine". I have read a few of Norsen's papers, including the one where he discusses Jarrett's parsing of Bell's locality condition. I'm inclined toward Jarrett's interpretation that Bell locality encodes the assumptions of statistical independence (that paired outcomes are statistically independent of each other) as well as the independence defined by the principle of local action (that the result at A is not dependent on the setting at b, and the result at B is not dependent on the setting at a).
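(Written out in the usual notation, with λ the state description, a, b the settings, and A, B the outcomes, the two pieces Jarrett separates are

[tex]\text{outcome independence:}\quad P(A|a,b,B,\lambda) = P(A|a,b,\lambda),[/tex]

[tex]\text{parameter independence:}\quad P(A|a,b,\lambda) = P(A|a,\lambda).[/tex]

The first piece is the "paired outcomes are statistically independent of each other" assumption; the second is the "local action" assumption that the result at A does not depend on the setting at b. Together, and with A and B interchanged, they give the factorized condition [itex]P(A,B|a,b,\lambda) = P(A|a,\lambda)\,P(B|b,\lambda)[/itex].)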

Since Bell tests are prepared to produce outcome dependence, and since this does not necessarily inform regarding locality or nonlocality in nature, and since this might be the effective cause of the incompatibility between Bell LHVs and QM, and between Bell LHVs and experimental results, violations of Bell inequalities don't inform regarding locality/nonlocality in nature.

There is another aspect to the form that Bell locality imposes on LHV models of quantum entanglement to consider. Any Bell LHV model of quantum entanglement must necessarily denote coincidental detection as a function of the product of the independent functions for individual detection at A and B. So the relevant underlying parameter determining coincidental detection is the same underlying parameter determining individual detection. I think the underlying parameter determining coincidental detection can be viewed as an invariant (per any specific run in any specific Bell test preparation) relationship between the motional properties of the entangled particles, and therefore a nonvariable underlying parameter. I'm not sure how to think about this. Is it significant? If so, how do we get from a randomly varying underlying parameter to a nonvarying underlying parameter?
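(For reference, the product form being referred to is the one in Bell's original derivation:

[tex]E(a,b) = \int d\lambda \, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda), \qquad A(a,\lambda) = \pm 1, \quad B(b,\lambda) = \pm 1,[/tex]

with a single distribution [itex]\rho(\lambda)[/itex] over the underlying parameter feeding both of the individual outcome functions.)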
 
  • #45
stevendaryl said:
As I thought I said (but maybe I just thought it :smile:), it's certainly not weird that particles with a common history could share state information. For example, two people could agree on some random number, and then separate to large distances. Then there would be a nonlocal correlation due to shared state information from a common past.

It's weird that distant particles would be connected in any way other than shared state information.
I agree. That (eg., nonlocally connected) would be weird. But I hope I've made it clear that I don't think the particles are connected in any way other than statistically through shared information imparted through local channels (common source, interaction, common 'zapping', etc.).

nanosiborg said:
But is the creation of invariant relationships between and among particles, by the means described, beyond any sort of classical comprehension (ie., weird), or is it, as I suggested in an earlier post, just a matter of degree?
stevendaryl said:
Yes, I think it's weird.
Ok, so I take it that you find the invariance of the relationship between entangled particles in any particular run of any particular Bell test to be weird. But why should that be weird?

Consider, for example, the polarization entangled photons created via atomic cascades. Entangled photons are assumed to be emitted from the same atom (albeit a different atom for each entangled pair). Is it surprising (weird) that their spins and therefore their polarizations would be related in a predictable way via the application of the law of conservation of angular momentum? Is it surprising that each entangled pair would be related in the same way? After all, the emission process is presumably the same for each pair, and the selection process is the same for each pair.
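(For the ideal polarization-entangled pair usually assumed in analyzing these cascade experiments, the QM prediction for joint transmission through polarizers set at angles [itex]a[/itex] and [itex]b[/itex] is

[tex]P_{++}(a,b) = \tfrac{1}{2}\cos^2(a-b),[/tex]

a Malus-law-like dependence on the relative angle alone, with perfect correlation at parallel settings.)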

stevendaryl said:
I don't think you can separate entanglement from measurement. Or rather, entanglement is only weird to the extent that it implies nonlocal correlations between distant macroscopic measurements.
Ok, I agree with this, and since I don't think the correlations imply nonlocal connections between distant macroscopic measurements (because I think they can be understood in terms of related properties produced via local channels, and because the correlations are in line with empirically based optics laws involving the analysis of polarizations via crossed polarizers), I don't view the correlations as being weird.

stevendaryl said:
Are you saying anything different from: It's not weird, because it's predicted by quantum mechanics?
I think so. I'm saying that we can understand why QM predicts what it does in the case of Bell tests by referring to the applicable (eg., conservation and optics) classical laws which are preserved in the QM treatment.

stevendaryl said:
Whether something is weird or not is a matter of taste, I suppose.
I would say that it's a matter of interpretation, and that interpretation isn't solely a matter of taste.
 
  • #46
nanosiborg said:
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

Yes, OK. So then the point is just that "hidden variable theories" (like, e.g., deBB) need not be "realist theories".

nanosiborg said:
My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

It's not correct that Bell's formulation of locality (i.e., "Bell locality") assumes the existence of hidden variables. Maybe we're still not quite on the same page about what "hidden variables" means, because we're not on the same page about what "underlying" means in your formulation above. Usually the phrase "hidden variable" is used to mean some *extra* thing, beyond just the standard wave function of ordinary quantum theory, that is in the mix. So then, e.g., deBB is a hidden variable theory because it uses not only the wave function, but also the added "definite particle positions", to account for the results. In any case, though, the point is that "Bell locality" does not presuppose "realism" and it also does not presuppose "hidden variables". You can meaningfully ask whether ordinary QM (not a hidden variable theory!) respects or violates "Bell locality". (It violates it.)


nanosiborg said:
In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

OK, but then you're using the word "realistic" in a different way than (I think) most other people here do. I think most people use that word to mean that there are definite values pre-encoded in the particles somehow, such that there are meaningful answers to questions like: "What would the outcome have been if, instead of measuring along x, I had measured along y?"


nanosiborg said:
As per my above, the particles don't have to have any property in particular. They're underlying entities (that presumably have some property or properties) that are denoted in the deBB model.

I certainly agree that it makes sense to call deBB "realist" by some meanings of the word "realist". But it is important to understand that the theory is *not* "realist" in the narrow sense I explained above. Stepping back, that's what I wanted to point out here. The word "realism" is a slippery bugger. Different people use it to mean all kinds of different things, such that miscommunication and misunderstanding tends to be rampant.

nanosiborg said:
I think of standard QM as a nonrealistic theory that is also not a Bell local theory, although not nonlocal in exactly the same sense that deBB is deemed nonlocal.

Me too, though I'm not sure what the two "senses" of nonlocality here might be. They both violate "Bell locality". What other well-defined sense does anybody have in mind?


nanosiborg said:
I haven't studied "la nouvelle cuisine". I have read a few of Norsen's papers, including the one where he discusses Jarrett's parsing of Bell's locality condition. I'm inclined toward Jarrett's interpretation that Bell locality encodes the assumptions of statistical independence (that paired outcomes are statistically independent of each other) as well as the independence defined by the principle of local action (that the result at A is not dependent on the setting at b, and the result at B is not dependent on the setting at a).

I'm this "norsen" guy, by the way. So, you know what I think of Jarrett already.


nanosiborg said:
Since Bell tests are prepared to produce outcome dependence, and since this does not necessarily inform regarding locality or nonlocality in nature, and since this might be the effective cause of the incompatibility between Bell LHVs and QM, and between Bell LHVs and experimental results, violations of Bell inequalities don't inform regarding locality/nonlocality in nature.

I can't follow this. Are you just repeating Jarrett's idea that "Bell locality" is actually the conjunction of two things, only one of which really deserves to be called "locality"? So then, from the mere fact that "Bell locality" is violated, we can't necessarily infer the (genuine) "locality" is violated? If that's it, you know I disagree, but if the "Bell vs. Jarrett" paper didn't convince you, nothing I can say here will either. =)
 
  • #47
Gordon Watson said:
Dear Travis, I'd be happy to submit a (say) 3-page PDF to support my rejection of nonlocality.

Would it directly answer the "challenge" I posted above (to explain the perfect correlations locally but without "realism")? If so, I don't see why you shouldn't be permitted to post it here. That's perfectly relevant to this thread.
 
  • #48
nanosiborg said:
I've been using 'hidden variable' to refer to any denotation (in a Bell test model) which refers to an underlying parameter which contributes to the determination of individual results. It doesn't have to include a pre-existing, pre-scripted value for how any specific measurement will come out. It's just included in the model to refer to any underlying parameter which contributes to the determination of individual results.

My understanding of Bell locality is that the denotation of Bell locality in a Bell test model requires some such hidden variable, whether the definition of that hidden variable includes a denotation about precisely how the hidden variable affects individual detection or not.

In other words, I would consider your example to be realistic in the same sense that Bell's λ is realistic, and therefore not a counter-example to my statement.

If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic. Could I ask for an example?

That's a trick question, of course. If you do come up with such an example I'll use it instead of Norsen's coin in his example to produce a local but not realistic model. If you can't, then I'll argue that something is wrong with your definition of realism because it includes everything.
 
  • #49
ttn said:
Yes, OK. So then the point is just that "hidden variable theories" (like, e.g., deBB) need not be "realist theories".
I'm using hidden variable theory and realistic theory interchangeably. So, any hidden variable theory is a realistic theory. Any theory which does not incorporate hidden variables is a nonrealistic theory.

ttn said:
It's not correct that Bell's formulation of locality (i.e., "Bell locality") assumes the existence of hidden variables. Maybe we're still not quite on the same page about what "hidden variables" means, because we're not on the same page about what "underlying" means in your formulation above. Usually the phrase "hidden variable" is used to mean some *extra* thing, beyond just the standard wave function of ordinary quantum theory, that is in the mix. So then, e.g., deBB is a hidden variable theory because it uses not only the wave function, but also the added "definite particle positions", to account for the results. In any case, though, the point is that "Bell locality" does not presuppose "realism" and it also does not presuppose "hidden variables". You can meaningfully ask whether ordinary QM (not a hidden variable theory!) respects or violates "Bell locality". (It violates it.)
If Bell locality doesn't require hidden variable representation, then how would Bell locality be formulated and incorporated into a model of a Bell test without the explicit denotation of a hidden variable, such as Bell's λ, that contributes to the determination of individual results?

Ok, you could write A(a) = ±1 and B(b) = ±1, but then your formulation has already deviated from one of the primary requirements of the exercise aimed at finding an answer to the suggestion that QM might be made a more complete theory, perhaps a more accurate (or at least a more heuristic) description of the physical reality with the addition of supplementary 'hidden' variables.

To further clarify how I'm using the terms underlying and hidden variable, underlying refers to the sub-instrumental 'quantum realm' where the evolution of the 'system' being instrumentally analyzed is assumed to be occurring. Hidden variable refers to unknown variable parameter(s) or property(ies) of the quantum system being instrumentally analyzed that are assumed to exist 'out there' in the 'quantum realm' in the pre-detection evolution of the system.

ttn said:
OK, but then you're using the word "realistic" in a different way than (I think) most other people here do. I think most people use that word to mean that there are definite values pre-encoded in the particles somehow, such that there are meaningful answers to questions like: "What would the outcome have been if, instead of measuring along x, I had measured along y?"
A hidden variable, such as Bell's λ, need not provide a meaningful answer to a question such as, "What would the outcome at A have been if, instead of the polarizer being set at 20°, it had been set at 80°?", because λ can refer to any variable underlying parameter(s) or property(ies) of the system, or any collection thereof. The denotation of λ in the model acts as a placeholder for any unknown underlying parameter(s) or property(ies) which, together with the relevant instrumental variable(s), contribute to the determination of individual results. The hidden variable is needed in this way in order to explicitly denote that something in addition to the instrumental variable, something to do with the 'system' being analyzed, is determining the individual results, because this is what the LHV program, the attempt to answer the question of whether or not QM can be viably supplemented with underlying system parameters and made explicitly local, is predicated on.

ttn said:
I certainly agree that it makes sense to call deBB "realist" by some meanings of the word "realist". But it is important to understand that the theory is *not* "realist" in the narrow sense I explained above. Stepping back, that's what I wanted to point out here. The word "realism" is a slippery bugger. Different people use it to mean all kinds of different things, such that miscommunication and misunderstanding tends to be rampant.
I understand, I think. But I'm just using realistic synonymously with hidden parameter. If a theory includes explicit notation representing non-instrumental hidden (or underlying or unknown ... however it might be phrased) parameter(s), then it's a realistic theory, if not, then it isn't.

ttn said:
Me too, though I'm not sure what the two "senses" of nonlocality here might be. They both violate "Bell locality". What other well-defined sense does anybody have in mind?
Yes, I agree that the fact that they both violate Bell locality is the unambiguous criterion and statement of their non-(Bell)localness. What I had in mind was that the way in which deBB is explicitly nonlocal (and nonmechanical) through the quantum potential is a bit different than the way standard QM is (to some) explicitly nonlocal (and nonmechanical) through instantaneous collapse and establishment and projection of a principal axis subsequent to detection at one end or the other.

ttn said:
I'm this "norsen" guy, by the way. So, you know what I think of Jarrett already.
Oh, cool. Yes, I read that paper some time ago. I think that I don't quite understand your reason, your argument for dismissing Jarrett's idea. Maybe after reading it again I'll get it. If you have time, would a brief synopsis here, outlining the principal features of your argument, be possible?

ttn said:
I can't follow this. Are you just repeating Jarrett's idea that "Bell locality" is actually the conjunction of two things, only one of which really deserves to be called "locality"? So then, from the mere fact that "Bell locality" is violated, we can't necessarily infer the (genuine) "locality" is violated? If that's it, you know I disagree, but if the "Bell vs. Jarrett" paper didn't convince you, nothing I can say here will either. =)
Yes, that's basically it. I would say, following Jarrett, that Bell locality encodes two assumptions, one of which, the assumption that paired outcomes are statistically independent, is the effective cause of the incompatibility between Bell LHV and QM, and the incompatibility between Bell LHV and experiment, and that this doesn't tell us anything about locality or nonlocality in nature.

But, as I mentioned, I still have this feeling that I don't fully understand your argument against Jarrett ... but will say that if your argument is correct, then there wouldn't seem to be anything left but to conclude that nonlocality must be present in nature. (Unless the idea that this nonlocality must refer to instantaneous action at a distance is also correct, and then I have no idea what it could possibly mean.)
 
  • #50
Nugatory said:
If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic.

Good point! But I think the real lesson here is again just that "realistic" is used to mean all kinds of different things by all kinds of different people in all kinds of different contexts. There is surely a sense in which the coin-flipping-particles model could be considered "realistic" -- namely, it tells a perfectly clear and definite story about really-existing processes. There's nothing the least bit murky, unspeakable, metaphysically indefinite, or quantumish about it. So, if that's what "realistic" means, then it's realistic. But if "realistic" means instead specifically that there are pre-existing definite values (supporting statements about counter-factuals) then the coin-flipping-particles model is clearly not realistic.

So... anybody who talks about "realism" (and in particular, anybody who says that Bell's theorem leaves us the choice of abandoning "realism" to save locality) better say really really carefully exactly what they mean.

Incidentally, equivocation on the word "realism" is exactly how muddle-headed people manage to infer, from something like the Kochen-Specker theorem (which shows that you cannot consistently assign pre-existing definite values to a certain set of "observables"), that the moon isn't there when nobody looks.
 
  • #51
Nugatory said:
If the heads/tails value of Norsen's coin is considered realistic before we've flipped it, I'm not sure what you'd consider not to be realistic. Could I ask for an example?
How would you represent it in a model? Can it be one of many possible hidden parameters collectively represented by λ? Let's say that λ is the universal convention for denoting hidden parameters, and, following Bell, that λ refers to any relevant underlying parameter. (We have no way of knowing what the relevant underlying parameters are, but whatever they are, λ refers to them.) Theories which include λ would be called realistic, and theories which don't include λ would be called nonrealistic.

Nugatory said:
That's a trick question, of course. If you do come up with such an example I'll use it instead of Norsen's coin in his example to produce a local but not realistic model. If you can't, then I'll argue that something is wrong with your definition of realism because it includes everything.
As I mentioned in my most recent reply to Norsen, I suppose you can make a model that's, in some sense, Bell local without λ. But that would pretty much defeat the purpose, which is to determine whether or not QM can be supplemented by hidden parameters, λ, and also be made explicitly local. (And of course Bell proved that it can't be. But Norsen maintains that Bell also proved that nature is nonlocal. Which I don't get.)

If you think that there's something wrong with λ including anything and everything, then your argument is with Bell's formulation ... I think.
 
  • #52
nanosiborg said:
I'm using hidden variable theory and realistic theory interchangeably. So, any hidden variable theory is a realistic theory. Any theory which does not incorporate hidden variables is a nonrealistic theory.

Well that's liable to cause confusion when you talk to other people here. But whatever. The main question is: do you think that Bell's theorem leaves us a choice of giving up locality OR giving up hidden variables? If so, perhaps you can answer my challenge: provide an example of a local (toy) model that successfully predicts the perfect correlations but without "hidden variables".


nanosiborg said:
If Bell locality doesn't require hidden variable representation, then how would Bell locality be formulated and incorporated into a model of a Bell test without the explicit denotation of a hidden variable, such as Bell's λ, that contributes to the determination of individual results?

See Bell's paper "la nouvelle cuisine" (in the 2nd edition of "speakable and unspeakable"). Or see section 6 of

http://www.scholarpedia.org/article/Bell's_theorem

or (for more detail) this paper of mine:

http://arxiv.org/abs/0707.0401



nanosiborg said:
Ok, you could write A(a) = ±1 and B(b) = ±1, but then your formulation has already deviated from one of the primary requirements of the exercise aimed at finding an answer to the suggestion that QM might be made a more complete theory, perhaps a more accurate (or at least a more heuristic) description of the physical reality with the addition of supplementary 'hidden' variables.

This way of writing it also presupposes determinism. See how Bell formulated locality in such a way that neither determinism nor hidden variables are presupposed.
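(Schematically, the formulation is a factorization of joint probabilities rather than a deterministic outcome function: with λ standing for whatever a candidate theory says is the complete specification of the pair,

[tex]P(A,B|a,b,\lambda) = P(A|a,\lambda)\,P(B|b,\lambda),[/tex]

which requires neither that the individual probabilities be 0 or 1 (determinism) nor that λ contain anything beyond, say, the ordinary quantum wave function (hidden variables).)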


nanosiborg said:
A hidden variable, such as Bell's λ, need not provide a meaningful answer to a question such as, "What would the outcome at A have been if, instead of the polarizer being set at 20°, it had been set at 80°?", because λ can refer to any variable underlying parameter(s) or property(ies) of the system, or any collection thereof. The denotation of λ in the model acts as a placeholder for any unknown underlying parameter(s) or property(ies) which, together with the relevant instrumental variable(s), contribute to the determination of individual results. The hidden variable is needed in this way in order to explicitly denote that something in addition to the instrumental variable, something to do with the 'system' being analyzed, is determining the individual results, because this is what the LHV program, the attempt to answer the question of whether or not QM can be viably supplemented with underlying system parameters and made explicitly local, is predicated on.

I don't really disagree with any of that, except the implication that this λ represents a (specifically) *"hidden"* variable -- i.e., something supplementary to the usual QM wave function. It is better to understand the λ as denoting "whatever a given theory says constitutes a complete description of the system being analyzed". For ordinary QM, λ would thus (in the usual EPR-Bell kind of setup) just be the 2-particle wave function of the particle pair. For deBB it would be the wave function plus the two particle positions. And so on. Of course the point is then that you can derive the inequality without any constraints on λ.


nanosiborg said:
Yes, I agree that the fact that they both violate Bell locality is the unambiguous criterion and statement of their non-(Bell)localness. What I had in mind was that the way in which deBB is explicitly nonlocal (and nonmechanical) through the quantum potential is a bit different than the way standard QM is (to some) explicitly nonlocal (and nonmechanical) through instantaneous collapse and establishment and projection of a principal axis subsequent to detection at one end or the other.

I agree that the violation of Bell locality looks a bit different, or manifests differently, in the two theories. My point was just that, in the abstract as it were, the two non-localities are "the same" in the sense that, for both theories, something that happens at a certain space-time point is *affected* by something outside its past light cone.

Incidentally, I think you have the wrong idea about how deBB actually works. The "quantum potential" is a kind of pointless and weird way of formulating the theory that Bohm of course used, but basically nobody in the last 20-30 years who works on the theory thinks of it in those terms anymore. See this recent paper of mine (intended as an accessible introduction to the theory for physics students) to get a sense of how the theory should actually be understood:

http://arxiv.org/abs/1210.7265


nanosiborg said:
Oh, cool. Yes, I read that paper some time ago. I think that I don't quite understand your reason, your argument for dismissing Jarrett's idea. Maybe after reading it again I'll get it. If you have time, would a brief synopsis here, outlining the principal features of your argument, be possible?

Sure. How about this super-brief one: Jarrett only thought that Bell's formulation of locality could be broken into two parts -- one that captures genuine relativistic causality, and the other some other unrelated thing -- because he misunderstood a crucial aspect of Bell's formulation. In particular, he didn't (fully) understand that (roughly speaking) what we were calling "λ" above should be understood as denoting what some candidate theory says constitutes a *complete* description of the state of the system prior to measurement. (He missed the "complete" part. Then he discovered that, if λ does *not* provide a complete description of the system, then violation of the condition does not necessarily imply non-locality! The violation could instead be blamed on the use of incomplete state descriptions! Hence his idea that "Bell locality" = "genuine locality" + "completeness". But in fact Bell already saw this coming and carefully formulated the condition to ensure that its violation would indicate genuine nonlocality. Jarrett simply missed this.)
 
  • #53
Gordon Watson said:
my theory is a proposed refutation of all Bell inequalities.

I do remember you from a year or so ago when I last posted here. I am not exactly chomping at the bit to discuss this with you. But if you email me something short (3 pages) I'll look at it and tell you what's wrong with it. I'm sure this won't convince you and I probably won't want to talk about it further, but I always enjoy finding the errors in such "refutations".
 
  • #54
ttn said:
But if "realistic" means instead specifically that there are pre-existing definite values (supporting statements about counter-factuals) then the coin-flipping-particles model is clearly not realistic.

It seems to me that the definition of "realistic" should not imply deterministic.
 
  • #55
stevendaryl said:
It seems to me that the definition of "realistic" should not imply deterministic.

Who's to say? Maybe the people who voted for (b) in the poll should say what they think they mean by it?

Incidentally, I wrote a whole paper about how "realism" is used to mean about 5 different things, none of which actually have anything to do with Bell's theorem:

http://arxiv.org/abs/quant-ph/0607057
 
  • #56
Gordon Watson said:
You issued an open challenge, so let's find a space to discuss it openly on-line.

PS: You're the physicist; surely physicists have such places?
..

:rolleyes: You said you wanted to learn. I made a generous offer. Not good enough? OK, forget it then.
 
  • #57
ttn said:
Incidentally, I think you have the wrong idea about how deBB actually works. The "quantum potential" is a kind of pointless and weird way of formulating the theory that Bohm of course used, but basically nobody in the last 20-30 years who works on the theory thinks of it in those terms anymore.
I know this is a major aside issue but I find it so interesting, I thought I'd try to sneak it in since 'quantum potential' was brought up. I realize it is a minority position within the Bohmian camp, but some Bohmians do seem sympathetic to Bohm's suggestion of "quantum potential" versus minimalist Bohmians like Durr, Goldstein, Zanghi (DGZ). They suggest that Bohm's concept of quantum potential may be useful in comparison to the minimalist Bohmian scheme. For example, Belousek writes:
On the DGZ view, then, the guidance equation allows for only the prediction of particle trajectories. And while correct numerical prediction via mathematical deduction is constitutive of a good physical explanation, it is not by itself exhaustive thereof, for equations are themselves 'causes' (in some sense) of only their mathematical-logical consequences and not of the phenomena they predict. So we are left with just particles and their trajectories as the basis within the DGZ view of Bohmian mechanics. But, again, are particle trajectories by themselves sufficient to explain quantum phenomena? Or, rather are particle trajectories, considered from the point of view of Bohmian mechanics itself, as much a part of the quantum phenomena that needs to be explained?...the mere existence of those trajectories is by itself insufficient for explanation. For example, to simply specify correctly the motion of a body with a certain mass and distance from the sun in terms of elliptical space-time orbit is not to explain the Earth's revolving around the sun but rather to redescribe that state of affairs in a mathematically precise way. What remains to be explained is how it is that the Earth revolves around the sun in that way, and within classical mechanics, Newton's law of universal gravitation and second law provide that explanation.
Formalism, Ontology and Methodology in Bohmian Mechanics
https://springerlink3.metapress.com...b5nwspxhjssd4c5c3cpgr&sh=www.springerlink.com

This was also discussed on another thread and the following comment by Maaneli makes a similar point:
There is a very serious and obvious problem with their interpretation; in claiming that the wavefunction is nomological (a law-like entity like the Hamiltonian as you said), and because they want to claim deBB is a fundamentally complete formulation of QM, they also claim that there are no underlying physical fields/variables/mediums in 3-space that the wavefunction is only a mathematical approximation to (unlike in classical mechanics where that is the case with the Hamiltonian or even statistical mechanics where that is the case with the transition probability solution to the N-particle diffusion equation). For these reasons, they either refuse to answer the question of what physical field/variable/entity is causing the physically real particles in the world to move with a velocity field so accurately prescribed by this strictly mathematical wavefunction, or, when pressed on this issue (I have discussed this issue before with DGZ), they simply deny that this question is meaningful. The only possibility on their view then is that the particles, being the only physically real things in the world (along with their mass and charge properties of course), just somehow spontaneously move on their own in such a way that this law-like wavefunction perfectly prescribes via the guiding equation. This is totally unconvincing, in addition to being quite a bizarre view of physics, in my opinion, and is counter to all the evidence that the equations and dynamics from deBB theory are suggesting, namely that the wavefunction is either a physically real field on its own or is a mathematical approximation to an underlying and physically real sort of field/variable/medium, such as in a stochastic mechanical type of theory.
http://74.86.200.109/showthread.php?t=247367&page=2
 
  • #58
ttn said:
Who's to say? Maybe the people who voted for (b) in the poll should say what they think they mean by it?

Incidentally, I wrote a whole paper about how "realism" is used to mean about 5 different things, none of which actually have anything to do with Bell's theorem:

http://arxiv.org/abs/quant-ph/0607057

I'll take a look.

On the other hand, nondeterminism doesn't really change much. With the classical kind of probability, it's always consistent to assume that nondeterminism is due to lack of knowledge of the details of the current state.
 
  • #59
stevendaryl said:
I'll take a look.

On the other hand, nondeterminism doesn't really change much. With the classical kind of probability, it's always consistent to assume that nondeterminism is due to lack of knowledge of the details of the current state.

Here's the real distinction between a quantum notion of "state of the world" and the kind of "state of the world" that is generally assumed in pre-quantum physics.

Classically, the state of the world "factors" into a product of local states. Roughly speaking, imagine dividing all of space into little cubes that are maybe 1 cubic light year. Then classically, everything there is to know about the state of the universe can be described by giving the state of things in each cube (the locations and momenta of particles within the cube, the values of fields within the cube), together with saying which cubes share a border with which other cubes.

What notion of the "state of the universe" doesn't meet this definition? Well, a probabilistic model need not. For example, if you say that a particular object has a 50/50 chance of being on Earth or on some other planet 10 light-years away (but not both), you can't describe this "state" as a product of local states. This is a classical kind of "entanglement", but it never bothered anybody, because nobody takes this kind of probabilistic model seriously as anything but a model of our knowledge of the universe, rather than the universe itself.
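(Concretely: let [itex]x_E[/itex] and [itex]x_P[/itex] each be 1 or 0 according to whether the object is in the Earth cube or in the far-away cube. The probabilistic "state"

[tex]P(x_E{=}1, x_P{=}0) = P(x_E{=}0, x_P{=}1) = \tfrac{1}{2}, \qquad P(x_E{=}1, x_P{=}1) = P(x_E{=}0, x_P{=}0) = 0[/tex]

has a 50/50 marginal for each cube, but it is not the product of those marginals: the product would assign probability 1/4 to the object being in both cubes at once.)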
 
  • #60
ttn said:
Still think that one can have a local explanation of even this small subset of the quantum predictions -- namely, the perfect correlations that are observed "whenever the analyzers happen to be parallel"? Let's see the model.
It is called quantum mechanics.
ttn said:
(Note: the model should also respect the "free choice" aka "no conspiracies" assumption, if it is to be taken seriously.)
That is a bit strange! Why should it respect that in order to be taken seriously? Many physics theories that are taken seriously do not respect it.
 
  • #61
bohm2 said:
I know this is a major aside issue but I find it so interesting, I thought I'd try to sneak it in since 'quantum potential' was brought up. I realize it is a minority position within the Bohmian camp, but some Bohmians do seem sympathetic to Bohm's suggestion of "quantum potential" versus minimalist Bohmians like Durr, Goldstein, Zanghi (DGZ). They suggest that Bohm's concept of quantum potential may be useful in comparison to the minimalist Bohmian scheme.

Figures that somebody named bohm2 would want to resurrect the quantum potential. =)

(Incidentally, you are who I think you are, right?)

Anyway, some comments. First, there are more than 2 options. That is, it's not the case that we have to choose between the DGZ view that the wf is nomic, and Bohm's formulation in terms of the quantum potential. There is lots of space between these options and off to the sides as well. For example, I don't agree with the "wf as nomic" view, but still consider myself a "minimalist" in the following sense: I think it is silly/pointless/bad to think that the pilot-wave theory should be understood to involve a quantum potential, or definite values for the spin and other properties (like some people, e.g., Holland, have done), etc. That is, for me at least, "minimalism" just means that the ontology of the theory includes the wave function and the particle positions and that's it.

Second, I sort of kinda mostly agree with the attitude expressed by Belousek and Maaneli, that a universe (an ontology) of *just the particles* seems somehow too sparse, that this is perhaps mathematically adequate to account for observations but is somehow physically unsatisfying in that too much is shuffled under the big fancy universal law and we are left with no comprehensible *physical* explanation of how/why the particles move the way they are said to move. It is worth pointing out that Bell also shared this view. "No one can understand this theory until he is willing to think of [itex]\Psi[/itex] as a real objective field..." And, more powerfully to me, here is Bell on the 2-slit experiment: "Is it not clear, from the smallness of the scintillation on the screen, that we have to do with a particle? And is it not clear, from the diffraction and interference patterns, that the motion of the particle is directed by a wave?" I agree with that entirely, and in particular I agree with the implication that we should start reading the ontology off from simple/key experiments and then try to build a mathematically adequate theory on that basis. (The opposite approach, of insisting one start with a full mathematical theory of the whole universe, and only then trying to figure out what is ontic vs nomic, etc., seems strange and rationalistic and unphysical to me. It's the way a mathematician, but not a physicist, would think appropriate.)

That said, and third, we should all be absolutely clear on one thing: supplementing (my sense of) "minimalist pilot-wave theory" with the quantum potential does not help *at all* with these sorts of concerns. If you want/need there to be a physical field that "pilots" the particle around, we've already got one of those in the wave function, so what is the point of adding another one (defined in terms of the wave function)? It truly serves no point that I can see. Of course, there is the big worry that for N-particle systems the wf is not N fields in 3-space, but one field in 3N-space. That, I concur, makes it very very hard to understand what it means to call it a physical field. This is something I have worried a lot about. But again, adding a quantum potential helps *not at all*. Because the Q potential too is a field on configuration space. So... what in the world would be the point of cluttering things up by introducing it? It makes the theory look more like classical mechanics? But that's actually a bad thing, since the theory is certainly *not* classical mechanics. You (or Belousek or Maaneli) will have to tell me what the benefit is supposed to be here.
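For concreteness (this is just the standard definition, written out here for reference, not anything new being claimed): writing the N-particle wave function in polar form, [itex]\Psi = R\,e^{iS/\hbar}[/itex], Bohm's quantum potential is

[tex]Q(q_1,\ldots,q_N,t) \;=\; -\sum_{k=1}^{N}\frac{\hbar^2}{2m_k}\,\frac{\nabla_k^2 R}{R},[/tex]

which, like [itex]\Psi[/itex] itself, is a function on the 3N-dimensional configuration space. That is exactly the point made above: introducing [itex]Q[/itex] buys you nothing defined in ordinary 3-space.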

Finally, there is another aspect of "minimalism" that warrants comment. It isn't just not believing in the (relevance of the) quantum potential; it's also not believing in (e.g.) Holland's properties (like *the* spin of a spin-1/2 particle). Actually, despite saying above that there is no point at all in introducing the quantum potential as an extra thing in the ontology, I feel even more strongly that one should not introduce these sorts of extra (non-position) definite properties. The quantum potential maybe actually has some tiny role to play in discussions of the classical limit or something, whereas these extra Holland type properties really truly accomplish *nothing*. They do not determine -- or even *influence* -- the outcomes of measurements (that is, measurements of the very properties in question). They are true idle wheels. It's an embarrassment to the theory whenever anybody uses them or talks about them.
 
  • #62
martinbn said:
It is called quantum mechanics.

You must have missed the part where I said the theory should be *local*. You think QM explains the perfect correlations *locally*? You better explain exactly what you mean by "local". In my book (and certainly by Bell's careful formulation) it is non-local.

That is a bit strange! Why should it respect that to be taken seriously? Many physics theories that are taken seriously do not respect that.

Example?
 
  • #63
ttn said:
You must have missed the part where I said the theory should be *local*. You think QM explains the perfect correlations *locally*? You better explain exactly what you mean by "local". In my book (and certainly by Bell's careful formulation) it is non-local.
Let me paraphrase something I read recently.

Good point! But I think the real lesson here is again just that "local" is used to mean all kinds of different things by all kinds of different people in all kinds of different contexts.
Example?
Anything I can think of, say classical mechanics: there is no free will there.
 
  • #64
martinbn said:
Let me paraphrase something I read recently.

Good point! But I think the real lesson here is again just that "local" is used to mean all kinds of different things by all kinds of different people in all kinds of different contexts.

Does that mean you're not going to actually say what you mean by "local"? So your claim that QM is local is ... deliberately meaningless?


Anything I can think of, say classical mechanics: there is no free will there.

Sorry, but nobody is talking about "free will". Maybe search for "conspiracy" / "conspiracies" in the article here

http://www.scholarpedia.org/article/Bell's_theorem

to understand what I actually mean by the "no conspiracies" (also sometimes called, misleadingly, "free choice" or "free will") assumption. In short, a theory that violates "no conspiracies" is a theory that is, in Bell's terminology, "super-deterministic". If you think classical mechanics is super-deterministic, you don't know what you're talking about.
 
  • #65
martinbn said:
It is called quantum mechanics.

Well, the "recipe" for quantum predictions is nonlocal. It roughly goes like:
  1. Between observations, the system is described by a wave function that evolves deterministically according to the Schrodinger equation (or Dirac equation, or whatever).
  2. When an observable is measured, the result is an eigenvalue of the corresponding operator. The probability of getting eigenvalue [itex]n[/itex] is proportional to the square of the absolute value of the projection of the wave function onto the basis eigenstate corresponding to that value.
  3. Immediately afterward, the system is in the eigenstate corresponding to the value measured.

Step 3 is explicitly nonlocal. This step (or something like it) is necessary to get the perfect correlations predicted by quantum mechanics in certain cases.
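A minimal numerical sketch of how step 3 enforces the perfect anti-correlation for parallel axes (a toy spin-singlet simulation; none of this code is from the thread, and the function names are just for illustration):

[code]
# Toy simulation of the "recipe": Born rule (step 2) plus collapse (step 3)
# for a spin-singlet pair measured along the same axis at both detectors.
import numpy as np

rng = np.random.default_rng(0)

def spin_up(theta):
    """Spin-up eigenstate along an axis at angle theta in the x-z plane."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def measure(state, theta, particle):
    """Measure spin along angle theta on one particle of a two-spin state.
    Returns the outcome (+1 or -1) and the collapsed, renormalized state."""
    up = spin_up(theta)
    P_up = np.outer(up, up)                                   # projector onto "up"
    P = np.kron(P_up, np.eye(2)) if particle == 0 else np.kron(np.eye(2), P_up)
    p_up = np.linalg.norm(P @ state) ** 2                     # step 2: Born rule
    if rng.random() < p_up:
        outcome, new = +1, P @ state
    else:
        outcome, new = -1, (np.eye(4) - P) @ state
    return outcome, new / np.linalg.norm(new)                 # step 3: collapse

# Singlet state (|+-> - |-+>)/sqrt(2) in the z-basis.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

theta = 0.7   # the same (parallel) axis at both detectors
for _ in range(5):
    a, state = measure(singlet, theta, particle=0)
    b, _ = measure(state, theta, particle=1)
    print(a, b)   # always opposite: the collapse in step 3 guarantees it
[/code]

The nonlocality being pointed to lives in the first measure call: it instantaneously changes the state used to compute the probabilities for the second particle, however far away the second detector is.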
 
  • #66
martinbn said:
That is a bit strange! Why should it respect that to be taken seriously? Many physics theories that are taken seriously do not respect that.

I don't think that's true. The "no conspiracy" assumption in an EPR-type experiment is the assumption that if Alice decides to measure the spin of an electron along axis [itex]\vec{A_1}[/itex] and Bob decides to measure the spin along axis [itex]\vec{A_2}[/itex], then these can be taken to be independent decisions. "Free will" doesn't really need to come in. You could very well believe that Alice's decision is a complicated, deterministic function of how she was raised, what she ate for breakfast, etc, and still believe that her decision is independent of Bob's decision.

Here's an example of how a "conspiracy theory" can explain EPR type correlations: Ahead of time, Alice decides what axis she will use for her detector. Bob decides what axis he will use. Then the twin pair uses this information to randomly generate a pair of results for Alice and Bob consistent with the QM predictions. After these preliminaries, Alice, Bob and the twin particles just carry out their pre-determined plans.

I'm not absolutely sure that nature doesn't make use of such conspiracies, but it's not the normal sort of thing people think about when doing science.
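A toy sketch of the kind of "conspiracy" just described (purely illustrative; nothing here is being proposed as physics): if the source gets to see both settings in advance, it can trivially pre-assign outcomes with the quantum singlet statistics, [itex]P(\text{same}) = \sin^2\!\big(\tfrac{a-b}{2}\big)[/itex].

[code]
# Toy "conspiracy": the source knows Alice's and Bob's settings ahead of
# time and pre-assigns outcomes matching the quantum singlet predictions.
import numpy as np

rng = np.random.default_rng(1)

def conspiratorial_source(alice_angle, bob_angle):
    """Pre-assign outcomes (A, B), each +1 or -1, using BOTH settings."""
    p_same = np.sin((alice_angle - bob_angle) / 2) ** 2
    a = rng.choice([+1, -1])
    b = a if rng.random() < p_same else -a
    return a, b

alice_angle, bob_angle = 0.0, np.pi / 3
results = [conspiratorial_source(alice_angle, bob_angle) for _ in range(100_000)]
correlation = np.mean([a * b for a, b in results])
print(correlation, -np.cos(alice_angle - bob_angle))   # both close to -0.5
[/code]

The "no conspiracies" assumption is precisely what forbids this: whatever the pair carries away from the source has to be statistically independent of the settings Alice and Bob later choose.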
 
  • #67
bohm2 said:
This was also discussed on another thread and the following comment by Maaneli makes a similar point:

http://74.86.200.109/showthread.php?t=247367&page=2

I went back and read some of this old (2008) thread, and learned something new and surprising about myself:

Travis is a staunch subscriber to the DGZ view and always uses the term Bohmian mechanics, as you may have noticed. I don't know though how aware he is of this terminological disagreement.

Actually, as my comments above probably already indicate, this is not true. I *greatly* admire D, G, and Z, whose work and ideas (even the subtle things I disagree with) are simply brilliant, but for various reasons (primarily the fact that de Broglie was really the first to formulate the theory, and the fact that actually Bohm's own 2nd order / Q potential formulation leaves a lot to be desired) I don't think "bohmian mechanics" is the best name for the theory. I do often call it that, when I'm not being careful, or in a context where it would simply be a distraction to start an argument about the name. I frankly don't think it matters that much what you *call* it! But when I am being careful, and especially when I am writing for an audience of "regular" physicists, I prefer to refer to the theory as the "de Broglie - Bohm pilot-wave theory" or just "the pilot-wave theory". See for example this paper

http://arxiv.org/abs/1210.7265

which I think will eventually become the first in a series of papers about "the pilot-wave perspective on ..."

On the other hand, I should also perhaps distance myself somewhat from the camp of people who vehemently object to the theory's being associated with Bohm's name. Part of DGZ's thinking is that, although, yes, de Broglie first formulated the theory, he never actually understood the *crucial* point that it could account, already, without extra postulates, for "measurements". Plus, after only a year or so, he gave the pilot-wave idea up and became converted to Copenhagenism. So it is really quite questionable to give him *full* credit for the theory, as some want to do. de Broglie got there first, but didn't understand it too deeply, and then became convinced actually that the theory didn't work. Bohm independently rediscovered it, understood its significance much more deeply than de Broglie had, but then kind of made a mess of it with all the quantum potential stuff (not to mention the even kookier stuff about active/passive information, unfolding/enfolding, holomovement, etc.). Both in a way were deeply flawed. Bell was really the first to grasp the theory's full significance and to see clearly what was essential and what was distracting fluff. On the other hand, naming the theory after Bell would be rather silly, since, clearly, he understood himself not to be creating a new theory, but to be honing the theory he was so pleased to discover buried in the literature.

This is basically why I think it makes sense to just call it the "pilot-wave theory". This name says something about the actual physical content of the theory, and thus makes it clear what one is talking about without needing to get into debates about exactly who should get (the most) credit, who did or didn't want it to be named after them, who understood it "better", etc.
 
  • #68
Gordon Watson said:
However, noting that I've NOT rejected your offer: Would you like me to try and find an open forum? In the expectation that you'll participate?

Not really. I like this forum just fine, and am already doing too many things.

Anyway, it's really simple. If you think you can "refute" Bell's theorem, presumably this means you think you can explain (at least) the perfect correlations (for spin measurements along parallel directions on a pair of appropriately spin-entangled particles) in a local way, but without using the sort of local deterministic hidden variables that (everybody agrees) Bell proved won't work to explain the more general correlations (when the directions are not parallel). In other words, you must think you can address the "challenge" I posed above. So, post something here explaining how you'd do this. It's perfectly germane and appropriate and consistent with the rules as I understand them. If you can't or won't do that for some weird reason I cannot begin to imagine, I'm just not interested.
 
  • #69
ttn said:
Of course, there is the big worry that for N-particle systems the wf is not N fields in 3-space, but one field in 3N-space. That, I concur, makes it very very hard to understand what it means to call it a physical field. This is something I have worried a lot about.
Yes, this is the heart of the issue for me. I think Einstein was very troubled by it, also. A very interesting paragraph by Belousek that summarizes this very nicely is the following:
There are two related problems that immediately arise here. First, if both multi-dimensional configuration space and ordinary 3-dimensional space are to be equally physically real, then unless one spells out the physical relation between them, one will have divided the quantum world into two disparate realms. Second, if the quantum field (in whatever sense it is to be understood) exists in configuration space and particles move in ordinary 3-dimensional space, how is the quantum field to act causally upon the particles in order to guide their trajectories? Solving the second problem depends, of course, upon solving the first. One might reply to the first problem that ordinary 3-dimensional space can be regarded simply as a sub-space projection of the multi-dimensional configuration space.

But, for an N-particle system described by a 3N-dimensional configuration space, there are mutually orthogonal sub-space projections. Do we then have multiple disjoint ordinary spaces for each many-particle system, one for each particle? The significance of this situation can be brought out by considering the case of an N-particle system in a factorizable quantum state, [itex]\psi(q_1,\ldots,q_N) = \psi_1(q_1)\cdots\psi_N(q_N)[/itex]. In contrast to the general case of a non-factorizable quantum state, in this case one can represent the system in terms of N ‘waves’, where [itex]\psi_i(q_i)[/itex] depends upon only the coordinates of the ith particle, so that each ‘wave’ can be associated with a separate particle. But the sub-spaces of the 3N-dimensional configuration space to which the respective [itex]\psi_i(q_i)[/itex]’s belong are all mutually orthogonal, so that the N ‘waves’ and particles do not all exist in one and the same 3-dimensional space (unless one were to equivocate on the meaning of the [itex]q_i[/itex]).

Thus, even in this case, one cannot simply regard the total quantum system as existing in ordinary 3-dimensional space, but rather must still regard it as existing irreducibly in configuration space, with each part existing in a ‘separate’ sub-space. And that would undercut any sense of a single system existing in one and the same physical space, which is surely requisite for a coherent physical theory.
Formalism, Ontology and Methodology in Bohmian Mechanics
http://www.ingentaconnect.com/content/klu/foda/2003/00000008/00000002/05119217

Valentini tries to thread a middle position, something similar to yours (I'm guessing?), but there are problems with this also, as Belousek notes:
Next, Valentini claims that his interpretation of ψ as a ‘guiding field of information’ is “free of complications”. In claiming this, he evidently does not see the irreducibly multi-dimensional character of ψ as a “complication”. This point brings out an internal tension in his guidance view. He wants to interpret ψ (via the pilot wave S) in realistic terms as representing a physically real causal entity, yet he never expressly takes a stand regarding the status of the configuration space in which ψ exists. He introduces further ambiguity by equivocating upon the real physical status of ψ itself. While in one place he takes the view that “The pilot-wave theory is much better regarded in terms of an abstract ‘guiding field’ (pilot-wave) in configuration space...”, in another he states that “The quantum mechanical wave function ψ(x, t) is interpreted as an objectively existing ‘guiding field’ (or pilot wave) in configuration space...”. Is ψ a concrete entity existing in a physically real space or is it only an abstract entity existing in a mathematical space? Valentini does, though, somewhat clarify his view elsewhere by stating that “the pilot wave ψ should be interpreted as a new causal agent, more abstract than forces or ordinary fields. This causal agent is grounded in configuration space...”.

Thus, the pilot wave or ‘guiding field’, while being more abstract than forces or classical fields, in the sense of being further removed conceptually from ordinary experience (the concept of ‘guiding field’ is achieved by abstracting the notion of ‘force’ from the classical concept of ‘field’), is nonetheless an objectively existing causal entity. But, that such an entity is grounded in configuration space implies that configuration space itself must be taken to be physically real in some sense. Whereas Albert takes an unequivocal (though perhaps incoherent) stand on this, Valentini leaves us without a clear idea of in what sense configuration space is to be regarded as physically real. Is configuration space itself the only physical reality? Or are both configuration space and ordinary space physically real? And, if so, are they real in the same physical sense? These questions remain to be answered for any interpretation of Bohmian mechanics that would postulate entities in configuration space.
 
  • #70
bohm2 said:
Yes, this is the heart of the issue for me. I think Einstein was very troubled by it, also. A very interesting paragraph by Belousek that summarizes this very nicely is the following...

Yes, I think we agree that this is an important and unsolved problem. My own best attempt to solve it (or at least indicate a direction for future work aimed at trying to solve it) is here:

http://arxiv.org/abs/0909.4553

The idea there is in some ways the idea that Belousek suggests, in the passage you quoted: break the 3N-space wave function up into N (or, as it turns out, in my example, a lot more than N) fields on 3-space. Belousek, though, seems to think there is some problem with doing this (associated with the different fields not really living in the same 3-space), but that particular worry makes no sense to me. The conditional wave function (CWF) is perfectly well-defined and there's no reason one cannot think of N such conditional wave functions (one for each particle) living in 3-space. (Just define the CWF for particle i as the wf, evaluated at the actual positions X_j of all particles other than the i'th, and evaluated at x_i = x, the position in physical space.) The problem (related to the recent PBR theorem, incidentally) is that these N CWFs are (radically) insufficient to generate the right dynamics for the particles (except for the unrealistic special case where the wave function is a product state). You need to somehow capture the whole structure of the wave function, including "entanglement", and the CWFs (alone) don't do this. My admittedly silly toy model above is a way to do this, albeit a way that even I can't really take too seriously.
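To spell out the definition in the parenthetical above (standard deBB notation, written out here only for convenience): the conditional wave function of particle [itex]i[/itex] is

[tex]\psi_i(x,t) \;\equiv\; \Psi\big(X_1(t),\ldots,X_{i-1}(t),\,x,\,X_{i+1}(t),\ldots,X_N(t),\,t\big),[/tex]

i.e. the universal wave function with the positions of all the other particles plugged in at their actual values, leaving only the [itex]i[/itex]-th argument free, so that each [itex]\psi_i[/itex] is a field on ordinary 3-space. As noted, for an entangled [itex]\Psi[/itex] these N conditional wave functions do not, by themselves, reproduce the particle dynamics.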


Valentini tries to thread a middle position, something similar to yours (I'm guessing?), but there are problems with this also, as Belousek notes:

I agree with Belousek's criticisms of Valentini here. I guess Valentini's views are similar to mine in that we both don't like the nomic interpretation of the wf. But where it seems he doesn't really think there's any problem with saying "the wf is a physically-real guiding-field that lives in configuration space", I am profoundly troubled by this. (Although, to be fair, it's possible Valentini sees no problem because, in fact, he thinks in terms of Albert's "marvellous point" picture. That, to be sure, solves some of the worries. But I think we'll agree it introduces others!)

But, interesting as all these issues are, one should keep in mind that they don't matter at all for a lot of important things -- such as whether Bell's theorem shows that nature is nonlocal!
 

1. What are Bell's inequalities and how do they relate to nature?

Bell's inequalities are a set of mathematical inequalities that describe the limits of classical physics in explaining certain phenomena in nature. They are used to test the validity of quantum mechanics, which is a more accurate and comprehensive theory of nature.

2. Why are violations of Bell's inequalities significant?

Violations of Bell's inequalities indicate that classical physics is not sufficient to explain certain phenomena in nature, and that quantum mechanics is a more accurate and comprehensive theory. This challenges our understanding of the fundamental laws of nature and opens up new possibilities for scientific exploration.

3. How are violations of Bell's inequalities detected?

Violations of Bell's inequalities are detected through experiments that involve measuring the properties of entangled particles. These particles are connected in such a way that their properties are correlated, even when they are separated by large distances. By measuring the properties of these particles, scientists can determine if they violate Bell's inequalities.

4. What do violations of Bell's inequalities tell us about the nature of reality?

Violations of Bell's inequalities suggest that reality is not as deterministic as classical physics suggests. Instead, it supports the idea that quantum mechanics allows for non-local connections between particles, and that the act of measurement can affect the properties of these particles. This challenges our traditional understanding of causality and the nature of reality.

5. How do violations of Bell's inequalities impact our understanding of the universe?

Violations of Bell's inequalities have significant implications for our understanding of the universe. They suggest that there are fundamental aspects of reality that are beyond our current understanding, and that there may be new laws and principles at work in the universe. This opens up new avenues for research and exploration in the field of quantum mechanics and the nature of the universe.
