Relativity & Quantum Theory: Is Locality Violated?

  • Thread starter: UglyDuckling
  • Tags: Relativity
Summary
Modern physics is built on the principles of relativity and quantum theory, which currently appear to be in conflict, particularly regarding locality. Quantum mechanics suggests that entangled particles can influence each other instantaneously, seemingly violating the locality constraint imposed by relativity. However, standard quantum mechanics does not indicate that any physical signal or information travels faster than light between these particles. The discussions highlight that while quantum mechanics and relativity may not fully integrate, this does not necessarily imply a violation of relativity itself. The ongoing debate revolves around understanding the nature of entanglement and its implications for our interpretation of space-time and the foundations of physics.
  • #151
Hurkyl said:
Fine -- flesh the rest out however you want. Let's make the coins Special Relativistic point particles, and each can be in one of three fundamental states: "H", "T", and "unflipped". They initially start out as "unflipped", and via an interaction called "flipping" can transition to "H" or "T". The transition is nondeterministic, and is governed by the joint probability distribution P(TT) = P(HH) = 1/2, P(TH) = P(HT) = 0. This joint probability distribution is a fundamental constant of the theory.

More detail just obscures the point -- the theory is simple and clear. It doesn't have messy details to work through and understand, and it's manifestly Lorentz invariant.

You're filling in pointless details and still missing what's crucial. What is the *dynamics* for this theory? Where are the two coins located? Under what circumstances exactly do they make this transition from "unflipped" to either "H" or "T"?

Can you have the particles located at separate locations, and have the transitions occur under a local free choice of some experimenter, and still explain the correlations in a manifestly Lorentz invariant way?
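As an aside for readers who want to play with it: the joint distribution in the toy theory above is easy to simulate. This is just an illustrative sketch (Python; the function name is mine, not part of anyone's proposal), showing 50/50 marginals for each coin alongside the perfect correlation:

```python
import random

def flip_pair():
    """One shared random draw fixes both coins, realizing the joint
    distribution P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0 -- a single
    stochastic event whose outcome shows up at two locations."""
    outcome = random.choice("HT")
    return outcome, outcome  # (Alice's result, Bob's result)

samples = [flip_pair() for _ in range(100_000)]
p_alice_h = sum(a == "H" for a, _ in samples) / len(samples)
mismatches = sum(a != b for a, b in samples)

print(f"P(Alice=H) is about {p_alice_h:.2f}")  # close to 0.50
print(f"HT/TH count: {mismatches}")            # 0 by construction
```

The simulation makes the point of contention vivid: the single call to `random.choice` is the "injection" of randomness, and its one result is used at both locations.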
 
  • #152
vanesch said:
Why [do I object to a random number being injected into the dynamics at two spatially separated locations] ?

Because this violates what I consider a reasonable criterion of locality!




Yes, but you place extra limits on how this irreducible randomness can be applied. For instance, what, indeed, is wrong with using the SAME (or correlated) *intrinsic random numbers* at two spatially separated events?

Well, my "instinct" is that any such intrinsic random number (that is the output of a non-deterministic dynamics) exists at some particular place -- it comes into existence at some particular spacetime event. So then, to inject it into the dynamics at some spacelike separated event, is blatantly nonlocal.



If these numbers have no ontology attached to them,

I don't know what that means. The "random numbers" we're talking about are supposed to be part of a physics theory, right?


Since there is no *mechanism* in them, there's no "causality" involved in this. It "just happens that way". This is the essence of an intrinsically fundamental stochastic theory, no? "Things just happen this way".

This is the ambiguity I can't accept. There's no "mechanism" (by hypothesis) in the production of this, as opposed to that, particular random number. But the random numbers are still part of a *physics theory* which presumably is a candidate for the mechanism of something. So it's not like these random numbers have nothing to do with familiar notions of causality, ontology, etc. Your case here seems to trade on a slide from the random numbers "not having a mechanism" in the first sense, to some much broader claim about the random numbers being totally outside of the normal context of a physics theory and hence totally unanalyzable in normal terms -- just things you have to blindly accept no matter how they act or no matter how non-locally they come into existence and/or affect other things.

Re: "things just happen this way", sure -- at a particular event. But "this just happened this way here, and therefore that just happened the same way over there" I can't accept.


Well, this is only true in the case that one assigns some reality to the concept of the wavefunction ; not if it is an "algorithm to calculate probabilities", right ?

Yes, as I think we've agreed before. If one takes the wf as part of a mere algorithm, then *nothing* is true. If one isn't willing to actually assert a *theory*, then of course there's no particular fact of the matter about whether one's theory is local or nonlocal, etc...



I think that there is a total absence of the notion of causal influence in THE STOCHASTIC ELEMENT of an *intrinsically* non-deterministic theory. That doesn't mean that the theory as a whole does not have elements of causality to it: the "deterministic part" (the evolution equations of the ontologically postulated objects) does have such a thing, of course. But the "random variables" that are supposed to describe the intrinsic randomness of the whole don't have to obey - a priori - any kind of rules, no?

No, I don't agree with this. Both parts have a causal aspect to them. I mean, presumably there are some dynamical equations even for the random part (e.g., the collapse postulate in OQM). Otherwise things would be entirely *too* random, yes? Even the randomness is, so to speak, governed by some laws. And more importantly, the randomness is still randomness *about something* -- it's randomly determined values for some allegedly real physical quantities or whatever. And that is the whole meaning of "causality" -- real physical things acting in accordance with their identity. In a non-deterministic theory, their identity is, by hypothesis, such as to produce evolution which isn't "fixed" by initial conditions. But the evolution is still governed by some (stochastic) laws. Otherwise, what exactly is one claiming is a theory?

Well, we're getting pretty distant from the main point, and even from the important and interesting tangent point. The basic question here, as I see it, is whether we should, from the POV of relativity, be troubled by a theory in which some random number produced by the dynamics can "come into existence" or "affect things" at spacelike separated events. My intuitive understanding of relativistic causality bristles at this. Some others' apparently doesn't. Frankly, I don't think either side has yet made a strong argument in support of its position... so I think we should focus any subsequent discussion on that.

But even that is beside what I consider the main point. I'd like to make sure we don't completely lose sight of the claim I started here with -- namely, that no Bell Local theory can agree with experiment. That, I think, is a surprising claim that deserves to be clarified and scrutinized -- even if, in the end, some people don't think it's an *interesting* claim because they don't think Bell Locality is a correct transcription of relativity's prohibition on superluminal causation (which is what all this stuff about stochastic theories is about).
 
  • #153
You're filling in pointless details and still missing what's crucial. What is the *dynamics* for this theory? Where are the two coins located? Under what circumstances exactly do they make this transition from "unflipped" to either "H" or "T"?
They're Special Relativistic point particles. Do what you will with them. Maybe they have mass and electric charge, who knows. I don't think that's relevant to the issue at hand.

The state of the coin has absolutely no effect on anything. In my toy theory we cannot even observe the state, although it is there.

I don't know what would cause a "flipping" interaction to occur. That is also irrelevant to the issue at hand. We don't need to know -- they just do, and Alice and Bob are both able to control when it happens.

But let's have fun and define something silly. Let's say... a "flipping" interaction occurs when:

In the coin's rest frame, we take the three vectors:
(1) Electric field at the origin
(2) Magnetic field at the origin
(3) Force on the coin due to gravity
and if they are all nearly perpendicular to each other, a "flipping" interaction occurs and the coin transitions nondeterministically to either the "H" or the "T" state. (Nearly meaning the angles are within e radians of perpendicular, where e is some fundamental constant):-p
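Tongue-in-cheek or not, the trigger condition is concrete enough to code. A minimal sketch (Python; all names are mine) of the pairwise near-perpendicularity test, with `eps` playing the role of the fundamental constant e:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def nearly_perpendicular(u, v, eps):
    """True if the angle between u and v is within eps radians of a right angle."""
    c = dot(u, v) / (norm(u) * norm(v))
    return abs(math.acos(max(-1.0, min(1.0, c))) - math.pi / 2) < eps

def flipping_occurs(E, B, F_grav, eps=0.1):
    """The joke trigger: a flip occurs when the electric field, the magnetic
    field, and the gravitational force on the coin (in its rest frame) are
    pairwise nearly perpendicular, within eps radians."""
    return (nearly_perpendicular(E, B, eps)
            and nearly_perpendicular(E, F_grav, eps)
            and nearly_perpendicular(B, F_grav, eps))

print(flipping_occurs((1, 0, 0), (0, 1, 0), (0, 0, 1)))    # True: orthogonal triad
print(flipping_occurs((1, 0, 0), (1, 0.01, 0), (0, 0, 1)))  # False: E and B nearly parallel
```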
 
  • #154
Hurkyl said:
They're Special Relativistic point particles. Do what you will with them. Maybe they have mass and electric charge, who knows. I don't think that's relevant to the issue at hand.

The state of the coin has absolutely no effect on anything. In my toy theory we cannot even observe the state, although it is there.

I don't know what would cause a "flipping" interaction to occur. That is also irrelevant to the issue at hand. We don't need to know -- they just do, and Alice and Bob are both able to control when it happens.


OK, it's that last point that is important. So Alice and Bob both have little black boxes with buttons. They choose at some random moment to push the button, and then a screen displays either "H" or "T". And it is found that when they both push the buttons (at the same time, as seen from some particular frame, say) they always get the same outcome, even though the pushings are spacelike separated.

And you're telling me you're willing to just shrug and accept this, without being bothered in the slightest that there's something nonlocal going on?
 
  • #155
And you're telling me you're willing to just shrug and accept this, without being bothered in the slightest that there's something nonlocal going on?
Well, you asked for a theory that can have correlations and yet still respect Lorentz invariance!


Anyways, the only "nonlocality" going on here is the failure of the statistical independence hypothesis, and I have no problem with that. Why do we even have that hypothesis in the first place? I suspect there is no good theoretical reason: either people added it by hand because it fit the data, or worse, people implicitly assumed it while giving heuristic arguments in its favor.



Incidentally, does it bother you at all that you're asking nonlocal questions?
 
  • #156
Hurkyl said:
Anyways, the only "nonlocality" going on here is the failure of the statistical independence hypothesis,

That just begs the very question at issue here.


and I have no problem with that. Why do we even have that hypothesis in the first place? I suspect there is no good theoretical reason: either people added it by hand because it fit the data, or worse, people implicitly assumed it while giving heuristic arguments in its favor.

I don't buy that at all. What about the tons of empirical evidence for physical locality that is nicely summarized by some requirement like Bell Locality? Your point is that we might be being misled by such evidence since we didn't have any examples in the history of science of an irreducibly stochastic (true) theory. So we're duped into thinking that Bell Locality is a reasonable formalization of "local causality" when really it's only reasonable for deterministic theories.

That's a reasonable objection, and a good assignment for further thought and discussion. But it's hardly the same as saying (as I think you are saying above) that there was never any good reason at all to accept something like Bell Locality. The fact is there is very strong reason -- the objection is that maybe the reason isn't quite 100% conclusive.



Incidentally, does it bother you at all that you're asking nonlocal questions?

I don't know what you mean.
 
  • #157
ttn said:
So we're duped into thinking that Bell Locality is a reasonable formalization of "local causality" when really it's only reasonable for deterministic theories.

Yes, that's my point of view. Actually, it is difficult to imagine what it actually means to have an *irreducibly* stochastic theory! The concept itself is rather strange. We never thought of probability that way, at least not in the physical sciences. In the human sciences and theology it was of course considered - even essential - and was variously called "the will of the gods", "destiny", "karma" or "providence" - it is almost at this level that one should indeed consider an irreducibly stochastic theory. Funny that in Greek mythology even the gods were subject to the irreducible randomness of "destiny"!

In the physical sciences, however, probability was always a "way to quantify our ignorance about what was exactly going on" - implicitly assuming that *if* we could somehow know what was going on in detail, then we'd know for sure what was going to happen - call it underlying determinism. And I think that this is what Bell's definition of locality really means, and why it is so plausible. It is hard for a scientist to adhere to something like "destiny" as an irreducible element of his theory.
 
  • #158
vanesch said:
Yes, that's my point of view. Actually, it is difficult to imagine what it actually means to have an *irreducibly* stochastic theory! The concept itself is rather strange. We never thought of probability that way, at least not in the physical sciences. In the human sciences and theology it was of course considered - even essential - and was variously called "the will of the gods", "destiny", "karma" or "providence" - it is almost at this level that one should indeed consider an irreducibly stochastic theory. Funny that in Greek mythology even the gods were subject to the irreducible randomness of "destiny"!

In the physical sciences, however, probability was always a "way to quantify our ignorance about what was exactly going on" - implicitly assuming that *if* we could somehow know what was going on in detail, then we'd know for sure what was going to happen - call it underlying determinism. And I think that this is what Bell's definition of locality really means, and why it is so plausible. It is hard for a scientist to adhere to something like "destiny" as an irreducible element of his theory.


I respect this possible objection to the propriety of Bell Locality as a formalization of what relativity is supposed to require of physical theories. But I still don't see any good argument behind the worry -- and I simply cannot understand why you and others don't find anything problematic (from the POV of local causality) with randomness that injects itself into the dynamics of spacelike separated events.

Think about this silly example of the two spatially separated coin-flipping boxes. Alice makes a *free choice* to press the button (that makes her box transition from the "ready" state to either H or T, at random). And the "outcome" of this free choice -- the causal effect of it -- leaps suddenly into existence not only in the spacetime region where Alice's choice triggered it, but at a distant location as well. There are several possible ways of talking about this, granted. For example, you could say that the same one stochastic transition has a simultaneous effect at the two places. Or you could say that the transition has a direct effect only near Alice, but then somehow that effect is the cause of a further effect at the distant location. The point is: *however* you talk about it, Alice's free choice initiates a causal sequence that results in a physical change over by Bob (as demonstrated by the *assumed* change in the "propensity" for various outcomes on Bob's device -- this being what is meant by the randomness being irreducible: either the state description attributed to something near Bob changes and the relevant laws applying there stay the same, or vice versa).

You all seem so willing to just shrug and talk about things just popping into existence for no reason at all, from which it's only a small step to shrugging at things popping into existence simultaneously at distant locations in a correlated way: if you're not going to ask for an explanation of why the one event happened (as opposed to some other), then why ask for an explanation of why two events are correlated?

The whole attitude here strikes me as blatantly unscientific. Way too much "just shrugging", to put it nicely. But I'm even here setting that kind of bother completely aside, and *still* I cannot get myself to accept the reasonableness of your worry. That the correlated Heads-Tails game involves "spooky action at a distance" (according to the postulated theory in which the outcomes are genuinely random) is, to me, just obvious. So the fact that such a scenario involves a violation of Bell Locality is, to me, a nice *confirmation* of the reasonableness of that criterion.

I know that several of you see it differently. What's frustrating is that we're not making any progress on the point at issue, because *both sides* are simply taking it as "obvious" that this H/T type situation is -- or isn't -- causally local. Perhaps someone else can think of a way to make progress on this. But if not, hopefully we can return to the original question (Can a Bell Local theory exist which is consistent with experiment?) and simply leave this aside for later.
 
  • #159
ttn said:
1. Because this violates what I consider a reasonable criterion of locality!

2. Well, my "instinct" is that any such intrinsic random number (that is the output of a non-deterministic dynamics) exists at some particular place -- it comes into existence at some particular spacetime event. So then, to inject it into the dynamics at some spacelike separated event, is blatantly nonlocal.

3. I'd like to make sure we don't completely lose sight of the claim I started here with -- namely, that no Bell Local theory can agree with experiment. That, I think, is a surprising claim that deserves to be clarified and scrutinized -- even if, in the end, some people don't think it's an *interesting* claim because they don't think Bell Locality is a correct transcription of relativity's prohibition on superluminal causation (which is what all this stuff about stochastic theories is about).

I think Hurkyl and Vanesch have brought things into focus. In 1. 2. and 3., you state that by your definition, the experiments are evidence of non-locality and cannot be interpreted otherwise. Clearly, that is a leap I am not making and neither are the others. To me, it's circular reasoning because you assume (by your definition) that which you want to prove.

You then extend your conclusion so that Lorentz invariance must be dropped as well. So I think that it is actually you who is making the ol' switcheroo between Bell Locality and Lorentz invariance. But I acknowledge that it is *possible* that Lorentz invariance could be respected in a Bell non-local world.
 
  • #160
ttn said:
I respect this possible objection to the propriety of Bell Locality as a formalization of what relativity is supposed to require of physical theories. But I still don't see any good argument behind the worry -- and I simply cannot understand why you and others don't find anything problematic (from the POV of local causality) with randomness that injects itself into the dynamics of spacelike separated events.

Where does the randomness originate? That is a fair question, in my mind, but... You can't ding a theory that works as well as oQM because it doesn't explain it. Theories are supposed to be useful. If there is more utility to be extracted, then great... show us. But there is no specific problem as is.

I like the Beatles, and oQM doesn't explain that either. (Doesn't that bother you?) In other words, what you are asking is a "nice to have" but it is not essential. But I always have the door open for a better theory (i.e. one with more utility). In my opinion, there is no way that Bohmian Mechanics can be considered to have more utility than oQM at this time.
 
  • #161
(I'm dropping continuity with my previous posts -- this is an entirely different line of reasoning, and stands on its own merits... actually it might be two related but separate lines of reasoning)

Hurkyl said:
Incidentally, does it bother you at all that you're asking nonlocal questions?
ttn said:
I don't know what you mean.
One thing that struck me when reading your threads is that the issues you raise can only be noticed by some external observer capable of observing all of the "beables" in two space-like separated regions of space-time.

The beables in Alice's laboratory are sufficient to completely describe what's going on there: she has a 50% chance of seeing a heads.

The beables in Bob's laboratory are sufficient to completely describe what's going on there: he has a 50% chance of seeing a heads.

If Alice and Bob perform their observations and take the results to Charlie's laboratory for comparison, then the beables in Charlie's laboratory are sufficient to completely describe what's going on there: there's a 50% chance that they both saw heads, and a 50% chance that they both saw tails.

In all of these cases, we're asking for descriptions of localized events: in the first, it's the event where Alice presses her button. In the second, it's the event where Bob presses his button. In the third, it's the event where Alice and Bob meet.


However, your issue is not well-localized: it involves space-like separated events in both Alice's and Bob's laboratories.

(incidentally, all of the beables in Alice's and Bob's laboratories are still sufficient to completely describe what's going on in this non-localized situation)



We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?" If I could wave my arm and instantly cause gravitational waves over in China, then the answer to this question would be no, because there would be an observable effect that cannot be described by the Chinese beables.

But the beables in Bob's laboratory are enough to completely describe his experiment.


Let's recall the frequentist interpretation of probabilities: if we repeatedly perform identical experiments, the probability of an outcome is defined to be the limiting ratio of the number of times we see that outcome divided by the number of experiments we performed.

Let's suppose our experiment is: "Dave creates the two boxes and gives them to Alice and Bob. Bob takes the box to his laboratory, and then presses the button to see if he gets heads or tails."

As far as I can tell, if Alice presses her button and gets heads, then in this perspective it is still appropriate to say that Bob has a 50% chance of getting heads from his box. (Although it would be correct to say that Bob has a 100% chance of seeing heads, given that Alice saw heads)
 
  • #162
DrChinese said:
To me, it's circular reasoning because you assume (by your definition) that which you want to prove.

That's not really fair. I'm not "just assuming" that the experiments prove non-locality, and then saying "Hey, look, I proved that the experiments prove nonlocality." Rather, I'm arguing that Bell's mathematical definition of local causality is prima facie reasonable as a formalization of relativity's prohibition on superluminal causation. And with that as the definition of locality, it has been rigorously proved that no local theory can agree with experiment. Yes, this leaves semi-open the question of whether this definition of locality really is or really isn't "what relativity really requires." That is indeed a difficult question, but it's a separable one -- and even the restricted claim (no Bell Local theory agrees with experiment) is *stronger* than the claim that most people erroneously think is the lesson of Bell's theorem (namely: no Bell Local *hidden variable* theory agrees with experiment).

So there is a new and important step forward here, even if, as I think we'd all agree, it doesn't answer absolutely every possible sub-question/objection.



You then extend your conclusion so that Lorentz invariance must be dropped as well. So I think that it is actually you who is making the ol' switcheroo between Bell Locality and Lorentz invariance. But I acknowledge that it is *possible* that Lorentz invariance could be respected in a Bell non-local world.

I don't understand the first bit here. I don't think I ever claimed that the failure of Bell Locality requires the failure of Lorentz Invariance. In fact, this paper would be a counterexample to such a claim:

http://www.arxiv.org/abs/quant-ph/0602208

(this is a more readable version of a more technical paper that is referenced in the above)
 
  • #163
DrChinese said:
Where does the randomness originate? That is a fair question, in my mind, but...

My intuitive sense of the right way to answer this (for OQM) is to say: since it's a "measurement" that triggers the collapse, we should think of the randomness as originating ("being injected") at the spacetime event of the triggering measurement. And then it's clear enough, in OQM, that this has a causal effect on spacelike separated events.

But this is all nothing but fleshing out the statement: OQM violates Bell Locality.

I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random number as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere. This, to me, is a very weird way of thinking -- but more to the point, it seems to beg the question in regard to the word "immediately". To make precise what is meant by that information being "immediately" available everywhere, you'd have to specify some spacelike hypersurface... i.e., break Lorentz invariance. Of course, we know from Tomonaga-Schwinger QFT that the empirical predictions come out the same way no matter which way you foliate the spacetime. So we're in the curious situation that the empirical predictions are Lorentz invariant, even though the theory itself isn't. But this is the same situation we're in for Bohmian mechanics, where there's an underlying nonlocality that is hidden at the level of signalling / empirical outcomes.


You can't ding a theory that works as well as oQM because it doesn't explain it. Theories are supposed to be useful. If there is more utility to be extracted, then great... show us. But there is no specific problem as is.

I think this comment completely misses the point. I'm not "dinging" theories on the grounds that they don't get the answers right (i.e., "work well"). Everybody knows OQM "works", i.e., gets the answers right. Likewise, everyone should know that there exist other theories (like Bohm's) that "work" equally well -- i.e., predict precisely the same answers. So empirical adequacy just isn't even on the table here as a relevant issue. I only care about theories that are, to begin with, empirically correct. I'm then raising a further and separate question: are the theories *locally causal*? And the answer turns out to be "no", not only for OQM and Bohm's theory but, as proved by Bell, for *any* possible empirically viable theory.


I like the Beatles, and oQM doesn't explain that either. (Doesn't that bother you?) In other words, what you are asking is a "nice to have" but it is not essential. But I always have the door open for a better theory (i.e. one with more utility). In my opinion, there is no way that Bohmian Mechanics can be considered to have more utility than oQM at this time.

I never claimed it did. I would only deny the reverse claim: that OQM can be considered to have more utility than Bohmian Mechanics at this time. The two theories make all the same experimental predictions. They're both "equally right" (by that standard of assessment). They are on a completely equal footing (by that standard).

Of course, there are some other standards on which Bohm wins hands down, e.g., not being plagued by the measurement problem. But that's a point for another day.
 
  • #164
Hurkyl said:
One thing that struck me when reading your threads is that the issues you raise can only be noticed by someone external observer capable of observing all of the "beables" in two space-like separated regions of space-time.

That's not true. Just construct the relevant x-t diagram later. Or do you think it's always wrong to draw an x-t diagram, because it includes events at spacelike separated points, which no one observer at those events could be aware of? :smile:


The beables in Alice's laboratory are sufficient to completely describe what's going on there: she has a 50% chance of seeing a heads.

The beables in Bob's laboratory are sufficient to completely describe what's going on there: he has a 50% chance of seeing a heads.

That is completely misleading, though. Because (by your own hypothesis) HT and TH never occur, and they should occur 50% of the time if you mean what you say above *straight* (i.e., not as statements of the marginals of some joint distribution).


If Alice and Bob perform their observations and take the results to Charlie's laboratory for comparison, then the beables in Charlie's laboratory are sufficient to completely describe what's going on there: there's a 50% chance that they both saw heads, and a 50% chance that they both saw tails.

Sure, but that's only consistent with what you say above if the 50/50 H/T outcome for Bob was correlated with the 50/50 H/T outcome for Alice. And then Bell's question is: is this correlation locally explicable? And the answer is: yes, but only by assuming "hidden variables" which determine in advance the outcome. Here's what he says:

"It is important to note that to the limited degree to which *determinism* plays a role in the EPR argument, it is not assumed but *inferred*. What is held sacred is the principle of 'local causality' -- or 'no action at a distance'. Of course, mere *correlation* between distant events does not by itself imply action at a distance, but only correlation between the signals reaching the two places. These signals ... must be sufficient to *determine* whether the particles go up or down. For any residual undeterminism could only spoil the perfect correlation.

"It is remarkably difficult to get this point across, that determinism is not a *presupposition* of the analysis. There is a widespread and erroneous conviction that for Einstein [*] determinism was always *the* sacred principle... [but, as Einstein himself made clear, it isn't]."

There is from the [*] the following footnote: "And his followers [by which Bell clearly means himself]. My own first paper on this subject (Physics 1, 195 (1965)) starts with a summary of the EPR argument *from locality to* deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables."

This footnote is extremely important, because, decades later, "the commentators" are still almost universally confused about this. It is precisely this point that I have been at pains to clarify in this thread (and in some other parts of my life!). Oh, the above quotes are all from the beautiful paper "Bertlmann's Socks and the nature of reality", reprinted in Speakable and Unspeakable.



However, your issue is not well-localized: it involves space-like separated events in both Alice's and Bob's laboratories.

What "issue"? The whole *point* is that space-like separated events that are *correlated* can only be *locally* explained by stuff in the overlapping past light cones. You seem to be dancing around the edges of the MWI line that those "definite correlated events" aren't even *real* -- didn't really *happen*. But I think we've already covered that issue completely; I at least have no more energy for retrying that case.


We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"

But this is precisely the condition Bell Locality! That condition can be stated: are all the beables here [i.e., say, in the past light cone of some spacetime event where some "outcome" appears] sufficient to define the probabilities for various possible "outcomes" -- with "sufficient" defined as follows: throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.

Your own example of the H/T devices *violates* this condition. Knowing (what according to your minimalist theory is) all there is to know in the past light cone of Alice's exercise is *not* sufficient (with the above definition) to define the probabilities for the possible outcomes. For example, if we specify in addition that Bob pushed his button and got "H", then the probability for Alice to get "H" changes from 50% to 100% -- even though that 50% was based on a *complete specification of beables* in the past light cone of Alice's event.

So your own theory is nonlocal, as I've been saying all along. Of course, this doesn't mean that the mere fact of perfect correlation between Alice's and Bob's outcomes, proves that nature is nonlocal. The correlation *can* be explained locally by adding "hidden variables", i.e., by considering a different theory than the one *you* proposed.
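The hidden-variable explanation mentioned here can be sketched in a few lines of Python; the model and all names are illustrative, not taken from the thread. A shared value is fixed at the common source (in the overlapping past light cones), and each box simply reads it out locally, reproducing the HH/TT statistics without any influence between the wings:

```python
import random

# A minimal sketch (model and names are mine, not from the thread) of the
# "hidden variable" explanation: a shared value lambda is fixed at the
# common source, and each box just reads it out locally.
def run_trials(n, seed=0):
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        lam = rng.choice("HT")   # set in the overlapping past light cones
        alice, bob = lam, lam    # each outcome depends only on local beables
        pairs.append((alice, bob))
    return pairs

pairs = run_trials(100_000)
p_hh = sum(1 for a, b in pairs if a == b == "H") / len(pairs)
p_mismatch = sum(1 for a, b in pairs if a != b) / len(pairs)
```

Over many trials, HT/TH never occur and HH comes up about half the time -- exactly the correlation the minimalist theory postulated, but now derived from purely local dynamics.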



But the beables in Bob's laboratory are enough to completely describe his experiment.

No, they aren't. Not in the sense defined above.



As far as I can tell, if Alice presses her button and gets heads, then in this perspective it is still appropriate to say that Bob has a 50% chance of getting heads from his box.

If that's what your *theory* says, then your theory is going to be empirically *false* because it'll predict that sometimes Bob gets tails, even though (unknown of course to him) Alice has gotten heads.
 
  • #165
DrChinese said:
1. I think this is the crux of your issue. This is a specific claim of oQM, and is not strictly prohibited by relativity.

2. This is definitely not correct. You cannot objectively demonstrate that the outcome at B is in any way dependent on a measurement at A. If you could, you could perform superluminal signalling. All you can actually demonstrate is that the correlated results follow the HUP.

3. This is really part of the interpretation one adopts.

The way I see it, the crux of the matter is the following.

Classically, we are used to equating "correlation between events" with "causality". In quantum mechanics, this link is broken. There may be correlation without a cause/effect relationship.

Would that be a fair statement?

Pat
 
  • #166
nrqed said:
The way I see it, the crux of the matter is the following.

Classically, we are used to equating "correlation between events" with "causality". In quantum mechanics, this link is broken. There may be correlation without a cause/effect relationship.

Would that be a fair statement?

Pat

Yea, I think that is a possibility. And maybe that's even what ttn is arguing at some level. I don't think anyone is really saying that we understand everything that is happening - I certainly don't. For example, and relating to your comment: we define cause/effect relationships to have the cause preceding the effect. In a world in which the laws of physics are time symmetric, is this really a reasonable definition?

If you reverse the flow of time (and therefore the sequence of events), what was formerly a cause might now appear as a random effect. So perhaps the future actually influences the past in some way (this need not violate special relativity, which should operate the same regardless of the direction of time).

So correlations might then appear that seem non-local at the end of the day - as you suggest.
 
  • #167
ttn said:
I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random # as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere. This, to me, is a very weird way of thinking -- but more to the point, it seems to beg the question in regard to the word "immediately".

Even *that* would be "deterministic", because what you now introduce is a physical scalar FIELD over spacetime with a constant (but unknown, hence probabilistically described) value, and if only you KNEW the value of that constant field, you would know with certainty what the outcome would be. Hence the theory is *deterministic at the underlying level*, with an unknown beable (the constant scalar field).

It is DAMN HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".

And from the moment that you get rid of that, so that the random quantities are NOT physical, and "just happen", you cannot talk about "their locality" or anything.
 
  • #168
ttn said:
since it's a "measurement" that triggers the collapse, we should think of the randomness as originating ("being injected") at the spacetime event of the triggering measurement.
Oh, that's what you mean by "injecting randomness".

ttn said:
I think vanesch and hurkyl would disagree with the first part: you shouldn't (I think they'd say) think of the random # as being injected at that particular spacetime point; rather, think of it as a new universal constant that pops into existence and is immediately accessible everywhere.
That's not what I think at all! The randomness was always there -- it just manifested itself in the measurement. (Of course, some sort of measurement is the only way for anything physical to manifest itself.)

Well, I almost told the truth -- one of my pet thoughts on QM was that things like "position" and "momentum" do not map onto the fundamental elements of reality... but we can make it look like they do with a bit of randomization. So our devices for measuring such things are actually randomized in some sense. But I haven't thought too much about this and don't give it much weight anymore.



ttn said:
That is completely misleading, though. Because (by your own hypothesis) HT and TH never occur, and they should occur 50% of the time if you mean what you say above *straight* (i.e., not as statements of the marginals of some joint distribution).
If you really meant what you said in red, then you are way off track. In statistics, you absolutely, positively, cannot ask questions like:

P(Alice sees "H" and Bob sees "H")

or

P(Bob sees "H" | Alice sees "H")

without there being some joint distribution governing both random variables.

In other words, if they aren't marginals of some joint distribution, then you cannot even ask if they're statistically independent -- such a question would be mathematical gibberish!


ttn said:
Sure, but that's only consistent with what you say above if the 50/50 H/T outcome for Bob was correlated with the 50/50 H/T outcome for Alice. And then Bell's question is: is this correlation locally explicable? And the answer is: yes, but only by assuming "hidden variables" which determine in advance the outcome.
You don't need hidden variables: for example, the unitary evolution of QM explains the correlation just fine.


Hurkyl said:
We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"
ttn said:
But this is precisely the condition Bell Locality! That condition can be stated: are all the beables here [i.e., say, in the past light cone of some spacetime event where some "outcome" appears] sufficient to define the probabilities for various possible "outcomes" -- with "sufficient" defined as follows: throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.
No! That part in red is what I did not say.

All of the beables here are sufficient to fully describe what happens here. They're just not sufficient to fully describe any correlations between things that are here with things that are over there. To fully describe those, you need the whole collection of beables that are here and there. (But you don't need any beables from a third place)

That red part is the statistical independence hypothesis.


ttn said:
Your own example of the H/T devices *violates* this condition. Knowing (what according to your minimalist theory is) all there is to know in the past light cone of Alice's exercise is *not* sufficient (with the above definition) to define the probabilities for the possible outcomes. For example, if we specify in addition that Bob pushed his button and got "H", then the probability for Alice to get "H" changes from 50% to 100% -- even though that 50% was based on a *complete specification of beables* in the past light cone of Alice's event.
It's not the probability that changed: it's the question you asked.

P(Alice sees "H") is always 50%. It's just that once you learned Bob saw an "H", you started asking for P(Alice sees "H" | Bob sees "H").

But as you said, if you make the statistical independence hypothesis, then that conditional probability is the same as the marginal probability, and so you would be justified in saying the probability changed.

But if you do not make the statistical independence hypothesis, then you cannot conclude that P(Alice sees "H") has changed when Bob sees his "H".
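The marginal-versus-conditional distinction being drawn here can be made concrete with a short sketch (notation mine): both quantities are computed from the one joint distribution P(HH) = P(TT) = 1/2, and they simply answer different questions.

```python
# A sketch of the distinction: from the joint distribution
# P(HH) = P(TT) = 1/2, the marginal P(Alice = H) and the conditional
# P(Alice = H | Bob = H) are answers to two different questions.
joint = {("H", "H"): 0.5, ("H", "T"): 0.0, ("T", "H"): 0.0, ("T", "T"): 0.5}

p_alice_h = sum(p for (a, _b), p in joint.items() if a == "H")  # marginal
p_bob_h = sum(p for (_a, b), p in joint.items() if b == "H")
p_alice_h_given_bob_h = joint[("H", "H")] / p_bob_h             # conditional
```

The marginal comes out 0.5 and the conditional 1.0, matching the numbers in the dispute above.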
 
  • #169
It is [DARN] HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".
I don't think there's a problem with a random variable evolving deterministically: the result is still a random variable.

But remember what a random variable is: a probability measure on a set of outcomes. So the beables are the probabilities.
 
  • #170
Hurkyl said:
I don't think there's a problem with a random variable evolving deterministically: the result is still a random variable.

But remember what a random variable is: a probability measure on a set of outcomes. So the beables are the probabilities.

Yes, while the lesser mathematicians among us (like me) think of this as "a number (or other object, such as a function) whose value we don't know, but for which we have a probability distribution". A "deterministic evolution of a random variable" is then seen as the deterministic evolution of the original object whose value we didn't know; the evolution drags the uncertainty along with it, yielding the 'dragged-along' probability distribution of the result.
And it is when you picture *this*, with these objects being real beables, that you arrive at Bell's condition. It is difficult to imagine random quantities which do NOT "materialise" this way.
 
  • #171
vanesch said:
It is difficult to imagine random quantities which do NOT "materialise" this way.


They better "materialize" some way, or they will have no dynamical consequences. Nobody thinks that the irreducible randomness at some point brings into existence a new (physical) scalar field, or a big set of polished bronze numerals reading "0.732752" (or whatever the random number was). But if what's being explained by the underlying stochastic theory is some kind of measurement outcome, then obviously the generated random numbers have to have a real physical effect on *something* which then in turn physically influences the macroscopic measurement devices (which *nobody*, except crazy MWI-people, denies are beables).

All of the points in the last few posts have been pointless semantic distractions. It's clear that any random numbers generated by a stochastic theory have to manifest themselves in some physical way -- otherwise they would be irrelevant to empirical observations and there'd be no point at all in hypothesizing the theory in question. The only question can be: where (at what spacetime event) do such numbers arise?

To answer this question is to admit non-locality (in the kind of examples we've been discussing). If Bob makes the "first" measurement and there is something random that controls his outcome, then the subsequent effect of that number (or its various causal effects near Bob) constitutes nonlocality.

And to *not* answer this question is to admit non-locality. If Bob makes the "first" measurement and this new random number comes into existence *not* at some spacetime event near Bob, but (say) simultaneously along some spacelike surface through Bob's measurement event, said popping into existence constitutes nonlocality.
 
  • #172
Hurkyl said:
You don't need hidden variables: for example, the unitary evolution of QM explains the correlation just fine.

You've got to be kidding? First off, the unitary evolution is deterministic. Second, it *doesn't* "explain the correlation just fine" since it predicts that Alice's box never ever reads definitely "H" or definitely "T" -- in direct contradiction with what Alice (by assumption in *your* example) sees.

I will grant, however, that if you are going to begin by throwing out the empirical data that was supposed to define this situation (Alice and Bob each see H or T w/ 50/50 probability, but the two outcomes are always paired HH or TT) then, yeah, sure unitary-only QM can explain the correlations. Just like Ptolemy's theory of the solar system can explain the last 100 days of data for the price of tea in China...




All of the beables here are sufficient to fully describe what happens here.

Are you still talking about unitary-only QM?

I don't know what to say. If you think the above, you simply haven't understood Bell Locality at all. The whole point of this condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a local way? For your example of the irreducibly-random theory which purports to explain the HH/TT correlation, Bell Locality is violated: a complete specification of beables along some spacelike surface prior to both measurement events does *not* screen off the correlation.


They're just not sufficient to fully describe any correlations between things that are here with things that are over there. To fully describe those, you need the whole collection of beables that are here and there. (But you don't need any beables from a third place)

The version of Bell Locality that actually gets *used* in the derivation is equivalent to this weaker condition. The probability of one event is conditionalized not just on a complete specification of beables in the past light cone of that event, but across a spacelike hypersurface that crosses also the past light cone of the *other* event. That is, we do not presuppose what is nowadays sometimes called "separability".




It's not the probability that changed: it's the question you asked.

P(Alice sees "H") is always 50%. It's just that once you learned Bob saw an "H", you started asking for P(Alice sees "H" | Bob sees "H").

But as you said, if you make the statistical independence hypothesis, then that conditional probability is the same as the marginal probability, and so you would be justified in saying the probability changed.

But if you do not make the statistical independence hypothesis, then you cannot conclude that P(Alice sees "H") has changed when Bob sees his "H".

I'm sorry, but every time you start analyzing probabilities and such, you turn into a mathematician -- i.e., you completely forget about the physical situation that we're talking about here. The whole question of locality is whether goings on near Alice are *alone* sufficient to account for all that there is to account for near Alice (her outcomes). What you have now lapsed into calling the "statistical independence hypothesis" is the *physical* requirement that a *local physics theory* shouldn't have its probabilities for one event, *depend* on happenings at spacelike separation, when a *complete specification of beables* in the past light cone of the first event is already given.

Yes, one can *deduce* from this "statistical independence" -- a complete specification of beables in the past of the two events should screen off any correlations between the outcomes. But this is not an arbitrary hypothesis; it is a *consequence* of the basic requirement, which is *locality*.
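The screening-off claim can be checked numerically for the toy deterministic model discussed earlier (the model and names here are illustrative, not Bell's own notation): conditional on the shared lambda, the joint probability factorizes, while the unconditional joint does not.

```python
# A numeric check of screening-off: in the deterministic hidden-variable
# model where both outcomes equal a shared lambda, P(A, B | lambda)
# factorizes as P(A | lambda) * P(B | lambda), while the unconditional
# joint P(A, B) does not factorize into P(A) * P(B).
def p_local(outcome, lam):
    # each wing deterministically displays lambda
    return 1.0 if outcome == lam else 0.0

def p_joint_given(a, b, lam):
    return 1.0 if (a == lam and b == lam) else 0.0

screens_off = all(
    p_joint_given(a, b, lam) == p_local(a, lam) * p_local(b, lam)
    for a in "HT" for b in "HT" for lam in "HT"
)

# Averaging over lambda (uniform on {H, T}) gives the observed statistics:
p_ab = {(a, b): sum(0.5 * p_joint_given(a, b, lam) for lam in "HT")
        for a in "HT" for b in "HT"}
p_a = {a: sum(p_ab[(a, b)] for b in "HT") for a in "HT"}
factorizes_unconditionally = all(
    abs(p_ab[(a, b)] - p_a[a] * p_a[b]) < 1e-12 for a in "HT" for b in "HT"
)
```

The first check succeeds and the second fails, which is the point: factorization holds only once the complete specification of beables (lambda) is conditioned on.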

Let me ask you a serious question: have you ever read Bell's papers on this stuff?
 
  • #173
vanesch said:
Even *that* would be "deterministic", because what you now introduce is a physical scalar FIELD over spacetime with a constant (but unknown, hence probabilistically described) value, and if only you KNEW the value of that constant field, you would know with certainty what the outcome would be. Hence the theory is *deterministic at the underlying level*, with an unknown beable (the constant scalar field).

I don't agree; this is not deterministic. There could be irreducible stochasticity in the initial assignment of a value to the "scalar field."

I see no reason to postulate the existence of any physical scalar fields. The point is too simple to deserve such fanciness: you could have a theory in which there is irreducible randomness (the production of some random number from some kind of probability distribution), but in which that number (whatever it turns out to be) is then "available" at other spacetime events to affect beables. And my point is simple: if it is only available at spacetime points in the future light cone, the theory is local; if it's available also outside the future light cone, the theory is nonlocal.
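The light-cone criterion invoked here can be stated in one line (coordinates and names are mine; units with c = 1, one space dimension):

```python
# A sketch of the criterion: an event (t2, x2) lies in the future light
# cone of (t1, x1) iff it is later and reachable by a signal at or below
# light speed (c = 1, one space dimension).
def in_future_light_cone(t1, x1, t2, x2):
    return (t2 > t1) and abs(x2 - x1) <= (t2 - t1)
```

On this criterion, a random number generated at one event and "available" at a spacelike separated event would be exactly the nonlocality at issue.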


It is DAMN HARD to imagine an *irreducibly stochastic* theory, because it means that *one cannot assign any physical existence to the random quantities*. Because from the moment one does, these become "beables" and hence if their values are known, we have changed the thing into a "deterministic theory with unknown beables to which we assign probabilities".

I don't understand this attitude at all. Beables are beables. I'm happy to permit, under the banner of "irreducibly stochastic theories", theories in which the evolution of beables is non-deterministic. But as I said before, what would be the *point* of the randomness if it didn't affect the beables? It would then have no effect on *anything* because there *is* (by definition) nothing but the beables! You seem to want to parse "irreducibly stochastic theories" as something in which, in addition to the beables, there are these other "things" that "exist", except that they are "random" in the sense that they don't exist in any particular measure/degree/value/whatever. But "random" isn't the same as "indefinite".

You say that as soon as one assigns physical existence to the random quantities, the theory becomes deterministic. I could not disagree more strongly. First, if you *don't* assign physical existence to the random quantities, what the heck is the point? They then play absolutely no role in the dynamics. And second, whether you do or don't assign physical existence to the random quantities, has no bearing whatever on whether the theory is deterministic. A theory in which there is randomness which affects things, is *not deterministic*. For example: orthodox QM (with the collapse postulate) is *not* a deterministic theory, even though there is irreducible randomness (which of the eigenstates the initial state collapses to) and the "outcome" of this "random choice" manifests itself immediately in the beables (the wave function is now that eigenstate).


And from the moment that you get rid of that, so that the random quantities are NOT physical, and "just happen", you cannot talk about "their locality" or anything.

But then there'd be no need to talk about their locality or anything, since they would play no role whatsoever in the evolution of beables (and hence no role whatsoever in the explanation of empirical observations).
 
  • #174
DrChinese said:
If you reverse the flow of time (and therefore the sequence of events), what was formerly a cause might now appear as a random effect. So perhaps the future actually influences the past in some way (this need not violate special relativity, which should operate the same regardless of the direction of time).

So correlations might then appear that seem non-local at the end of the day - as you suggest.


Please don't tell me I've spent all this time trying to explain things to you, only to have *this* appear as your considered view.

Sure, you can explain EPR/Bell data with a theory in which the causes of certain events come from the future. Do you seriously think such a theory would be "locally causal"?
 
  • #175
ttn said:
All of the points in the last few posts have been pointless semantic distractions.
Pointless semantic distractions?? Does that mean you no longer care to assert that what I put forth as an alternative to "locality" is actually Bell-locality?


ttn said:
It's clear that any random numbers generated by a stochastic theory have to manifest themselves in some physical way
The random numbers that are "generated" are the manifestation -- they are not any sort of dynamical entities, and they do not have any sort of effect on anything. They are nothing more than the result when you insist that a stochastic theory produce an actual outcome.

(A stochastic theory, of course, doesn't like to produce outcomes... it prefers to simply stick with a probability distribution on the outcome space)
 
  • #176
Hurkyl said:
Pointless semantic distractions?? Does that mean you no longer care to assert that what I put forth as an alternative to "locality" is actually Bell-locality?

I don't know what you're referring to. What did you put forth as an alternative to "locality"? And I think I'm just confused about what you're asking here: I'm the one who thinks Bell Locality is a perfectly good definition of locality; so if your proposed alternative "is actually Bell Locality" wouldn't that make it not really an alternative at all?



The random numbers that are "generated" are the manifestation -- they are not any sort of dynamical entities, and they do not have any sort of effect on anything. They are nothing more than the result when you insist that a stochastic theory produce an actual outcome.

(A stochastic theory, of course, doesn't like to produce outcomes... it prefers to simply stick with a probability distribution on the outcome space)

The outcomes appear in some physical form -- like, in your example, the positions of a bunch of electrons that make a phosphor screen light up a big glowing green "H" or something. Perhaps this just takes us back to our earlier argument about what constitutes a "theory". I'm taking it for granted that there exist physical things like video screens and electrons, and asking about theories which might explain the underlying dynamics of whatever is at the next-level-down. You (still) seem to think it's ok to assert as a theory some mathematical/probabilistic statement like "P(HH)=.5, P(TT)=.5, P(HT)=P(TH)=0". That may be a correct description of the observed outcomes, but (the way I am thinking about this, as a physicist) it is *not* a *theory*.

If we accept as a given that the observed result is (say) produced by electrons landing "here" instead of "there" on the screen, then your proposed stochastic theoretical explanation of the observations better include some way for the random numbers to affect electrons. If they "do not have any sort of effect on anything" then you are just spinning your wheels, failing in principle to propose the kind of thing that could ever possibly address the issue at hand.

Really, this comes down to the old objection that one could just take the quantum mechanical formalism as a blind algorithm, which makes no claims about any beables... and hence make correct predictions without ever asserting anything that could possibly be construed as violating local causality. Of *course* one can do this. One can avoid making nonlocal claims by refusing to claim anything about anything. Duh. But we *know* that big macroscopic things exist, and we *know* they're made of littler things. The question is: is it possible that the dynamics of the little stuff (or the sub-little stuff, or whatever) respects local causality? Bell gave a theorem that the answer is no: no locally causal (Bell local) theory can account for what's observed.

Putting tape over your mouth and refusing to assert a theory does not constitute a counterexample to this theorem.
 
  • #177
You've got to be kidding? First off, the unitary evolution is deterministic. Second, it *doesn't* "explain the correlation just fine" since it predicts that Alice's box never ever reads definitely "H" or definitely "T" -- in direct contradiction with what Alice (by assumption in *your* example) sees.
Unitary evolution provides us with a state:

(|HH> + |TT>) / sqrt(2)

from which it's easy to derive the correlation. Furthermore, if we actually conduct an experiment to test if there's a correlation, unitary evolution provides us with the resulting state

(|HH> + |TT>)|correlated> / sqrt(2)

As opposed to the state we'd get when there wasn't any entanglement at all:

((|HH> + |TT>)|correlated> + (|HT> + |TH>)|uncorrelated>) / 2
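The statistics encoded in the first state can be sketched as follows (notation mine, not from the thread); note that extracting probabilities uses the usual Born rule, a step beyond bare unitary evolution, which is itself part of what is in dispute here:

```python
from math import sqrt

# Outcome probabilities for the state (|HH> + |TT>)/sqrt(2): the squared
# magnitudes of the amplitudes, via the usual Born rule.
amplitudes = {"HH": 1 / sqrt(2), "HT": 0.0, "TH": 0.0, "TT": 1 / sqrt(2)}
probs = {outcome: amp ** 2 for outcome, amp in amplitudes.items()}
# probs is {"HH": 0.5, "HT": 0.0, "TH": 0.0, "TT": 0.5} (up to rounding)
```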



ttn said:
Are you still talking about unitary-only QM?
I'm talking about any sort of statistical theory.


Before I respond to the next part, allow me to remind you of your post #133 that launched this particular arc:
ttn said:
Maybe it would be useful to ask: could anyone think of a Lorentz invariant candidate toy theory that would predict the "both H or both T" example above?
...
Do we allow (as consistent with relativity) that irreducibly-random events at spacelike separations should nevertheless demonstrate persistent correlations?
...
Is such a thing consistent with relativity? I say "no" and am thus not at all bothered (but rather relieved) that such a scenario violates Bell's local causality requirement.
...
could anyone think of a Lorentz invariant candidate toy theory that would predict the "both H or both T" example above?

Hurkyl said:
We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"
ttn said:
But this is precisely the condition Bell Locality! That condition can be stated: ... throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.
Hurkyl said:
No! That part in red is what I did not say.

All of the beables here are sufficient to fully describe what happens here.
ttn said:
The whole point of this condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a local way? For your example of the irreducibly-random theory which purports to explain the HH/TT correlation, Bell Locality is violated: a complete specification of beables along some spacelike surface prior to both measurement events does *not* screen off the correlation.
Yes -- it's Bell Locality that is violated: in particular, its statistical independence hypothesis. But other forms of locality, such as what I stated here, are not violated.

First off, notice that your first question is very circular. Filling in the implicit stuff (as I understand it), you say:

"The whole point of the Bell locality condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a Bell local way?"

But you did not ask for a toy theory that was Bell local: you asked for a theory that was consistent with Lorentz invariance, with special relativity. (In fact, isn't the whole point of this thread to ask the question of consistency with special relativity?)

Bell locality is, indeed, violated, because one of its underlying assumptions is that there is no statistical dependence. By looking at all of the responses through the filter of Bell locality, you are, in fact, asking:

"Is there any theory consistent with special relativity that is capable of predicting statistical dependence, under the condition that there is no statistical dependence?"


ttn said:
I'm sorry, but every time you start analyzing probabilities and such, you turn into a mathematician -- i.e., you completely forget about the physical situation that we're talking about here.
I am a mathematician, incidentally.

ttn said:
What you have now lapsed into calling the "statistical independence hypothesis" is the *physical* requirement
You try to make it sound important by calling it a "physical requirement" -- but that amounts to nothing more than saying that it's an axiom that you wish to require your mathematical models of the physical universe to satisfy.

ttn said:
Yes, one can *deduce* from this "statistical independence" -- a complete specification of beables in the past of the two events should screen off any correlations between the outcomes.
Try me.

ttn said:
Let me ask you a serious question: have you ever read Bell's papers on this stuff?
I've read a few papers, including stuff you have linked in the past. I never bothered to pay attention to who the author is.
 
  • #178
ttn said:
If we accept as a given that the observed result is (say) produced by electrons landing "here" instead of "there" on the screen, then your proposed stochastic theoretical explanation of the observations better include some way for the random numbers to affect electrons.
Of course the random variables have an effect on electrons.

But this has absolutely nothing to do with the idea of a random number generator you seem to be using in post #171.


You seem to have in your mind that a stochastic universe would be analogous to how a computer program will use a pseudorandom number generator to spit out a sequence of numbers, and then use those numbers to control how things dance across its screen.

But that's not how statistics works! A random variable is nothing more than a measure on a space of outcomes. In fact, it is a very difficult problem to try and give any sort of precise meaning to the word "random number generator".


(In a classical theory)
E.g. one random variable could be on the space of possible positions and momenta of an electron. Another random variable could be on the configurations of the electromagnetic field. The dynamics of the theory would allow us to compute a new random variable on the space of possible positions, momenta, and accelerations of the electron.

(Of course, this is just marginalized from the joint distribution over the electron position, momentum and acceleration and the electromagnetic field configuration)
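The marginalization just described can be sketched with toy discrete spaces (nothing here is specific to electrons or fields; spaces and names are illustrative):

```python
from itertools import product

# A joint distribution over two discrete variables, with one marginal
# obtained by summing out the other.
positions = ["x0", "x1"]
field_configs = ["f0", "f1", "f2"]
joint = {(x, f): 1.0 / (len(positions) * len(field_configs))
         for x, f in product(positions, field_configs)}   # uniform joint

marginal_position = {x: sum(joint[(x, f)] for f in field_configs)
                     for x in positions}
# marginal_position == {"x0": 0.5, "x1": 0.5} (up to rounding)
```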
 
Last edited:
  • #179
Hurkyl said:
Unitary evolution provides us with a state:

(|HH> + |TT>) / sqrt(2)

from which it's easy to derive the correlation.

"easy to derive" is irrelevant. The state is already *empirically wrong*. What exists is not a superposition of HH and TT; what exists is *either* HH *or* TT.

You seem to be using/assuming the MWI without being willing to admit the well-known weirdness of such a view. Sure, unitary evolution can get you that superposition, and you can twist and turn and eventually connect this up with what we do experience (by denying that what we see in front of our face is the truth, i.e., by postulating that we're all deluded about what the outcomes of the experiments actually were). But normally when a physicist asks if some theory or other "explains the data" he or she is not looking for a metaphysical-conspiracy-theory about how, really, the data we got directly from looking at an apparatus is a delusion.



Yes -- it's Bell Locality that is violated: in particular, its statistical independence hypothesis. But other forms of locality, such as what I stated here, are not violated.

I'm sorry, I don't follow you. The alternative you proposed (as near as I can tell) was:

"Are all the beables here sufficient to describe what's going to happen?"

But as I pointed out, this just *is* the Bell Locality condition. Maybe we're not on the same page about what the phrase "sufficient to describe what's going to happen" means. I told you what that phrase means for Bell Locality, but I don't understand what, if anything, you're proposing as an alternative. You seemed to simply reject my proposal for the meaning of that phrase on the grounds that it presupposed the statistical independence hypothesis, but that simply is not true.



First off, notice that your first question is very circular. Filling in the implicit stuff (as I understand it), you say:

"The whole point of the Bell locality condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a Bell local way?"

But you did not ask for a toy theory that was Bell local: you asked for a theory that was consistent with Lorentz invariance: with special relativity. (In fact, isn't the whole point of this thread to ask the question of consistency with special relativity?)

I asked for a theory that was consistent with SR. You just postulated some joint probabilities without ever providing a theory.


Bell locality is, indeed, violated, because one of its underlying assumptions is that there is no statistical dependence.

I'm sorry, but saying this over and over again doesn't make it so. What you are calling "no statistical dependence" is equivalent to the factorization of the joint probability, yes? Here's what Bell says about this: "Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the *formulation* of 'local causality', but as a consequence thereof." [from La Nouvelle Cuisine, page 243 of Speakable and Unspeakable, 2nd edition] I've tried and tried to explain this, without success, so I'll just have to refer you to that paper where Bell explains very clearly what the local causality condition is, and how factorization ("statistical independence") follows as a logical consequence. Factorization is *not* simply assumed; it is a consequence of a *physical* assumption -- namely, that there be no superluminal causation.
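For reference, the condition Bell calls local causality, and the factorization that follows from it, can be written out (my own paraphrase in standard notation: λ is the complete specification of beables in the relevant region of the past, a and b are the local settings, A and B the outcomes):

```latex
% Bell's local causality: given the complete beables \lambda,
% the distant setting b and outcome B are redundant for predicting A:
P(A \mid a, b, B, \lambda) = P(A \mid a, \lambda)
% Applying this (and its mirror image for B) to the chain rule
% P(A,B|a,b,\lambda) = P(A|a,b,B,\lambda)\,P(B|a,b,\lambda)
% yields factorizability as a consequence:
P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda)
```

This is the sense in which factorization is derived from a physical condition rather than assumed outright.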



By looking at all of the responses through the filter of Bell locality, you are, in fact, asking:

"Is there any theory consistent with special relativity that is capable of predicting statistical dependence, under the condition that there is no statistical dependence?"

Obviously I don't agree. I'd say: by thinking (erroneously) that Bell Locality means nothing but statistical independence, you are missing the whole point. Incidentally, I find it interesting that you cannot apparently resist converting Bell Locality (which is a *physical* condition) into factorizability (which is a purely mathematical condition).



I am a mathematician, incidentally.

Not that I think there's anything wrong with that, but I'm not surprised.


You try to make it sound important by calling it a "physical requirement" -- but that amounts to nothing more than saying that it's an axiom that you wish to require your mathematical models of the physical universe to satisfy.

Precisely right. I take as an axiom that physical theories should respect relativistically local causation. And then it is proved that no theory consistent with that axiom can agree with the data. So I say "oops!" I guess that axiom is *false*. No locally causal theory can explain the data.



I've read a few papers, including stuff you have linked in the past. I never bothered to pay attention to who the author is.

I don't think I've ever linked to Bell's papers, because I don't know of any of them being online. Anyway, take it as a friendly recommendation. Bell is a brilliant physicist and a brilliant writer, and if you want to understand where I am coming from you would probably be better off just reading Bell in the original than listening to me. (I am far less brilliant.) Because everything I'm saying here, Bell already said, and said better. Plus, the reason I get so hot under the collar about this stuff is that, despite the incredible clarity of Bell's writings, he has been almost universally misunderstood by the "experts" on these topics. (I quoted Bell himself pointing that out yesterday.) So I find it extremely frustrating that people have such strong opinions on what Bell did or didn't prove, when they haven't even bothered to read Bell's papers. You are obviously a smart guy who has enough background knowledge to get completely clear on these issues; so I say, if you *want* to get clear on these issues, if these are issues you are *interested* in, then it would be a shame if you didn't read Bell's papers.
 
  • #180
ttn said:
Please don't tell me I've spent all this time trying to explain things to you, only to have *this* appear as your considered view.

Sure, you can explain EPR/Bell data with a theory in which the causes of certain events come from the future. Do you seriously think such a theory would be "locally causal"?

No, I am not actually saying this is my opinion since I drift towards oQM most of the time. I am merely pointing out one possibility. Does it really seem so weird that the future might influence the past? And, yes, I definitely consider such a theory to be local in every sense of the word. But it is not realistic. So it would be local non-realistic, and therefore consistent with Bell's Theorem.

And to counter your assertion ("no Bell local theory can agree with experiment"), I instead state that "no Bell realistic theory can agree with experiment". Bell realistic meaning: any theory in which there is a more complete specification of the system than the HUP allows. You cannot beat the HUP!

And please, do not bother with BM as a candidate. I am talking about a theory in which the HUP is beaten. EPR thought they had it, but experiment showed otherwise. If you can't beat the HUP, even in principle, then you are acknowledging that there are no hidden variables in the first place.
 
