ttn said:
Can you propose some other good definition of local causality for stochastic theories? And don't tell me "signal/info locality" -- that's a different idea, right? Orthodox QM (treating the wf as a complete description of the ontology) and Bohmian Mechanics are both "signal/info local", yet clearly they are both nonlocal at a deeper level. They both involve FTL causation.
Orthodox QM is not "non-local at a deeper level" in the sense of proposing a *physical mechanism* that conveys non-local causation, because orthodox QM is JUST AN ALGORITHM to calculate probabilities of outcomes of experiments. If you see Bohmian mechanics that way, the two are on the same level: they spit out probabilities, and one shouldn't look at their mathematical constructions as representing anything physical, because they don't. You could just as well look at the listing of a C program or anything else.
This is one of the reasons why I don't like OQM: I'd like to have a description of nature, but OQM is not supposed to be one. It's just a calculational scheme.
Now, from the moment you start assigning ontology to the wavefunction, then yes, the projection postulate gives you a non-local operation. But if this is seen as "C code that calculates probabilities", then it is hard to say what is "in its past light cone", no?
No, this is sliding from talking about a theory's fundamental dynamical probabilities, to talking about empirical frequencies or something. As long as you remember you're talking about some particular candidate theory, there *is* a way "of telling". This is just exactly what a theory tells us. It tells us what various happenings depend on. It's true that if you just see some event happen, there's no way a priori to know what caused it. But, in the context of a proposed theory, there is no such problem.
Nothing "causes" probabilities, right ? But I guess that you mean: the formula for the probabilities in your theory, does it depend on input you have to give of events in the past light cone only, or others ?
Well, I then tell you that ANY theory has its probabilities depend on things outside the past light cone of the event in question: namely, on what happens just afterwards.
Probabilities are sensitive to what's in the FUTURE light cone, because there they mostly flip from some intermediate value to 0 or 1 (because in the future, we KNOW what happened).
So if the "algorithm for the probability of the event at A" is given as input only what is in A's past light cone, it might crank out 0.5.
If we also give it what happens at B (outside A's past light cone), it might become 0.75.
And if we add to it the result of the measurement at event A' in A's future, then it will become, say, 1.
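A minimal numerical sketch of that point, in Python (the joint distribution is an assumption of mine, chosen only to reproduce the illustrative 0.5 / 0.75 / 1 figures above):

```python
# Toy "algorithm for the probability of the event at A": what it cranks out
# depends on how much input we feed it.
joint = {  # assumed joint distribution P(outcome at A, outcome at B)
    (1, 1): 0.375,
    (1, 0): 0.125,
    (0, 1): 0.125,
    (0, 0): 0.375,
}

def p_A(outcome_at_B=None, result_at_A=None):
    """Probability that the outcome at A is 1, given the supplied input."""
    if result_at_A is not None:           # the result itself is known (A's future)
        return 1.0 if result_at_A == 1 else 0.0
    if outcome_at_B is None:              # input: A's past light cone only
        return sum(p for (a, b), p in joint.items() if a == 1)
    # input: also the spacelike-separated outcome at B
    p_b = sum(p for (a, b), p in joint.items() if b == outcome_at_B)
    return sum(p for (a, b), p in joint.items() if a == 1 and b == outcome_at_B) / p_b

print(p_A())                                   # 0.5
print(p_A(outcome_at_B=1))                     # 0.75
print(p_A(outcome_at_B=1, result_at_A=1))      # 1.0
```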
So a stochastic theory's predictions and empirically established relative frequencies are EQUIVALENT. They are just tables of probabilities, generated in the first case by an algorithm, and obtained in the second case by processing data.
A theory tells us what caused it (even if the explanation is merely stochastic) by telling us what the event (or its probability) *depends on* -- and then it makes sense to ask (still in the context of that theory) whether that dependence is or isn't local.
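For reference, the standard way Bell makes that "depends on" precise (my notation, stated here as a reminder: $\lambda$ is the complete specification of beables in the relevant part of A's past light cone, $a$ and $b$ the local and distant settings, $A$ and $B$ the outcomes):

$$ P(A \mid a, b, B, \lambda) = P(A \mid a, \lambda) $$

i.e., once $\lambda$ and the local setting are given, further conditioning on anything outside A's past light cone (the distant setting $b$ or outcome $B$) must not change the probability the theory assigns.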
Well, then the probability of every event depends strongly on its future, because that is what makes its probability flip to 0 or 1.
In fact, from this viewpoint, stochastic theories even become deterministic: if you have the result, then you can retrodict the earlier probabilities with certainty to be 0 or 1.
This is why I am insisting that probabilities are not physical quantities as such, because they CHANGE as a function of what we know.
No, he defined it the way he defined it: stuff outside the past light cone shouldn't affect the probabilities the theory assigns to events. It is also, incidentally, true that for any stochastic theory you can find an underlying deterministic theory. But that really has nothing to do with locality or Bell's definition thereof.
It has much to do with it: Bell's idea is about "causal influence", which means that we are at least proposing a description of the underlying reality of nature in which such a concept could play a role.
But a stochastic theory doesn't. It's a computer program that cranks out probabilities, and is NOT a description of any reality, UNLESS it is really a deterministic theory in which the probabilities are recognized as expressing ignorance of things like initial states (as in statistical mechanics, or in Bohmian mechanics, for instance).
What do you mean it's sufficient? Who says? So Bohmian Mechanics is then consistent with relativity? Why in the world, then, would YOU believe in MWI rather than Bohm?!?
The *probabilistic predictions* of Bohmian mechanics -- seen as a black box that cranks out probabilities, and not as some kind of ontological description of nature -- are compatible with relativity, in the same way as the probabilistic predictions of OQM are (and for the latter it is often said that it is nothing but a black box that calculates probabilities).
As they are equivalent algorithms, there is of course no reason to "believe" one over the other, as they crank out the same numbers (maybe not in the same computing time).
However, Bohmian mechanics doesn't present itself as a black box cranking out probabilities, right? It has the pretension of being an ontological description of nature. Well, THEN one has to open the box and look at whether the internal machinery is local. And it isn't. It cannot be written in a Lorentz-invariant way, for instance.
The same happens, of course, if we take OQM to be an ontological description of nature and take the wavefunction as an element of reality. Then we cannot formulate it in a Lorentz-invariant way either.
But if we see both of them just as machines out of which come predicted probabilities of observations, then both are on the same level (and actually totally equivalent; it is then just a matter of which one is easier to manipulate that decides your choice).
For "boxes that crank out probabilities" but which do not have the pretention of giving us any ontological description of nature, we can take "signal locality" as a criterium, or "Bell locality" as a criterium.
They tell us different things.
Bell locality tells us whether or not we will be able to find a local, deterministic theory that can explain the predictions on the basis of ignorance of the initial state; such a deterministic theory could then eventually serve as an ontological description -- something our probability-spitting box doesn't provide.
Signal locality tells us whether or not we will be able to phone our grandma to tell her not to marry granddad, if the Lorentz transformations are correct (mind you, I didn't say: if SR holds :-).
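For comparison, stated in the same notation as the Bell condition above: signal locality only constrains the settings-conditioned marginals -- the things an experimenter can actually sample -- with no $\lambda$ anywhere:

$$ P(A \mid a, b) = P(A \mid a), \qquad P(B \mid a, b) = P(B \mid b) $$

so it is a condition on the output of the box, not on its internal machinery.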
Quantum theory so regarded isn't a theory.
That's why I don't like it

I WOULD like to have a theory that claims to describe "nature out there", but OQM is not supposed to be that; it's just a calculational trick which helps us estimate the outcomes of experiments (their probabilities) when we give it the preparation.
That's also why all the beable stuff doesn't really apply to OQM: it doesn't have the pretension of describing anything physical. It just relates "in" states to "out" states.
Huh? Info/Signal locality is just a constraint on the predictions of the theory (it has nothing to do with the underlying guts/mechanics of the theory). What's the problem applying it to deterministic theories? Those too make predictions, yes?
What I meant was that there are no different options for locality for a deterministic theory. A deterministic theory is local or not, depending on whether the DETERMINED outcome at a point depends, or not, on things outside the light cone of that event. Given that that outcome is a clear physical thing (as contrasted with the *probability* of the outcome), there's no discussion about what it might mean for a deterministic theory to be local. Locality is originally a concept that was only clear for deterministic theories.
A deterministic local theory is both Bell local and signal local: you cannot have a deterministic theory which is NOT Bell local but which is signal local.
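The first half of that is easy to spell out (a sketch, assuming as usual that the distribution $\rho(\lambda)$ of the complete state does not depend on the settings): Bell locality already implies signal locality, since

$$ P(A \mid a, b) = \int d\lambda\, \rho(\lambda)\, P(A \mid a, b, \lambda) = \int d\lambda\, \rho(\lambda)\, P(A \mid a, \lambda) = P(A \mid a), $$

and a local deterministic theory is just the special case where $P(A \mid a, \lambda)$ is always 0 or 1.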
You're equivocating between two very different things. Stochastic doesn't mean "has no ontology". If you don't think a stochastic theory can have an ontology (fields or whatever) what the heck is OQM?
Eh, I do think that. OQM is not an ontological description of nature, but just an algorithm. That's one of the reasons why I don't like it.
Whether the laws are deterministic or not, is a very different question from whether or not there's a "reality out there." If you really don't make this distinction, it explains why you've been so resistant to understanding Bell Locality correctly. Because even *talking* about Local Causality (which Bell Locality tries to make mathematically precise) obviously presupposes that there's a "reality out there" -- but then you think this already means we presuppose determinism and disallow stochasticity. No wonder you're confused...
Indeed
Mind you, having only a stochastic theory (an "algorithm") doesn't mean that we deny the *existence* of an ontological reality, only that the algorithm doesn't describe it.
For instance, think of the following situation: there's a 4-dimensional spacetime manifold in which an entire list of events is fixed. They have no real relationship amongst themselves -- "things just happen". This could be an ontological picture of a "totally arbitrary" universe.
And now, it might be that there are certain relationships in that "bag of events" such that certain ratios of events are respected. Why? It just is so. If we capture the calculational rules that reproduce those ratios, then that's a stochastic theory: some algorithm that works more or less when doing statistics on essentially arbitrary sets of events.
This has no descriptive power, of course; it is just the observation that certain statistics are respected. That's how I see irreducibly statistical theories (such as OQM).
But it's not at all a funny thing *about his definition*. It's just a general point that you can never really have good reason to believe in irreducible stochasticness -- you can *always* get rid of this in favor of determinism by adding variables. And if you restrict your attention to locally causal theories, this general point remains true (of course). But you seem to think this is some kind of skeleton in the closet of Bell's definition. I just don't follow that at all.
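A minimal sketch of that "adding variables" move, in Python (the toy probability rule and the uniform hidden variable are mine, just to illustrate the general point, not Fine's actual 1982 construction):

```python
import random

# A stochastic "theory": for a given input it cranks out P(outcome = 1).
def stochastic_theory(input_data):
    return 0.75  # any rule assigning a probability to the outcome would do

# Deterministic completion: add a hidden variable u, uniform on [0, 1].
# For each fixed u the outcome is fully determined; our ignorance of u
# reproduces exactly the original probabilities.
def deterministic_theory(input_data, u):
    return 1 if u < stochastic_theory(input_data) else 0

# Averaging over the ignored variable recovers the stochastic prediction.
N = 100_000
freq = sum(deterministic_theory("some preparation", random.random()) for _ in range(N)) / N
print(freq)  # approximately 0.75
```

This trivial construction by itself says nothing about locality; showing that a Bell-local stochastic model can be completed into a Bell-local deterministic one is the content of the Fine-type result mentioned above.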
Well, from the "random bag of events" story, you figure that capturing regularities in the distribution of arbitrary events (= stochastical theory) or to complete it with extra variables to turn this into a deterministic ontological description of nature, is a whole leap. The statistical rules are just calculational algorithms, while the latter is supposed to describe "what goes on" (while in fact, nothing goes on, and arbitrary events just seem to be distributed in ways which obey certain rules when counting, without any "cause" to it).
Please. Obviously, if you switch the definition of 'local' between the first and second half of a sentence, you can say all kinds of apparently-interesting (but actually false) things.
No, because "locality" for a deterministic theory (pretending at an ontological description) is entirely clear. For a stochastic theory, it depends on how one looks at it.
You're still missing the point that Bell Locality requires a complete state specification (lambda). So if you take seriously the idea that the quantum formalism is just a mere algorithm which doesn't make any claims about what does or doesn't exist, then it IS NOT BELL LOCAL. You can't even ask if it's bell local. It's not yet a *theory* in the sense required to apply Bell's criterion.
You've got it.
That phrase makes no sense. It isn't predictions that are or aren't Bell Local, it's theories. What you can say (and what you probably meant) is that, if the predictions violate the inequalities, then you know that there is no Bell Local theory which can make those predictions.
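For concreteness, the inequality usually meant here is the CHSH form (standard statement, not something derived in this thread): writing $E(a,b)$ for the correlation of the two outcomes at settings $a$ and $b$, every Bell-local theory obeys

$$ |E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \le 2, $$

while the quantum predictions for suitably chosen settings on the singlet state reach $2\sqrt{2}$.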
Yes. A shortcut.
In other words, you *always* assume that probabilities are not fundamental. In other words, you refuse a priori to consider the possibility of a genuinely stochastic theory.
Indeed. I can accept a stochastic theory as "capturing certain regularities of a totally arbitrary distribution of events - things happen" but not as any ontological description of nature.
And as such, there can be other notions of "locality" that apply to *algorithms* and not to *ontological descriptions*.
Someone who doesn't know about "Patrick's Theorem" (which I think was actually proved by Arthur Fine in '82, though it's really a pretty obvious point so I'm sure people knew it before then) might think, based on your way of phrasing this stuff, that we are left with a choice about whether to reject locality or determinism in the face of the Bell-inequality-violating data. It's the same as the confusion that is caused by this stupid recent terminology "local realism." What the hell is "realism"? Somebody tell me please what "realism" is assumed by Bell in deriving the inequality.
Maybe "realism" is the idea that the theory describes an ontology, or is just an algorithm ?
Bell assumed "beables", things that correspond to reality, in a theory.
I don't know what Bell would say about a computer program that spits out probabilities as a function of what one gives it as input (data about the past light cone, about things happening at spacelike separation, or data about the future of the event in question, in which case the result is already known).
There isn't any -- at least, not any that can be remotely reasonably denied. Yet still the language caught on, and so now everybody thinks we *either* get to reject locality (which everybody says is crazy, because that means rejecting relativity) *or* reject "realism" (which therefore everybody is in favor of even though none of them know what the hell they mean by it!).
I agree with you: I want to keep both! But "realism" (a potential description of an ontology) WAS already out the window with OQM. Only a pattern in observed ratios of observations was to be the object of OQM, with a prohibition against thinking about an underlying ontological picture. Personally, I don't like that idea at all. And in fact, I think most people who pay lip service to it don't really mean it, and assign some form of ontology to the elements they manipulate. But the "official Bohr doctrine" says that there's no such thing as an "underlying ontology".
But let me repeat a crucial question here. If the lesson from all of this is that Bell Locality is *too strong*, and that *really* all relativity requires is *signal locality* then WHAT OBJECTION COULD YOU POSSIBLY HAVE AGAINST BOHMIAN MECHANICS? This position renders Bohmian Mechanics "local" -- as local as it needs to be to be consistent with relativity. And then why, please tell me, would any remotely sane person not opt for Bohm over OQM, MWI, and all other options? Leaving aside the issue of locality, Bohm is *clearly* the most reasonable option.
As I said, I think that Bell locality is the correct requirement for an ONTOLOGICAL description of nature (which, in my opinion, is also deterministic). However, signal locality is good enough for a probability algorithm if we abandon the idea of giving an ontological description of nature (and reduce to "things happen" in the big bag of events out there), and limit ourselves to observing certain regularities in the distribution of these events, which can be calculated through certain algorithmic specifications.
If we only require that these calculational rules remain invariant under a change of observer, then signal locality is OK. If Bohmian mechanics is seen this way (as an algorithm to spew out probabilities), it is fine, as a signal-local procedure for calculating probabilities. The "particles and trajectories and non-local forces" are then not "beables" but just variables in the computer program that help you calculate probabilities.
The wavefunction and the projection have the same status in OQM (and have never had any other pretension in OQM).
But then there's no real distinction between Bohm and OQM: they are both black boxes that spew out probabilities. One is not more or less reasonable than the other, because they come to the same numerical results, and neither represents anything.
However, as a description of nature, where Bell locality is required (and, in my opinion, determinism too) -- something OQM doesn't claim to be -- Bohm fails (and so does any theory that is equivalent to OQM, for that matter).
So there IS no local ontological description of nature that can reproduce the OQM predictions. That's the "realism" part I suppose.
So there is just this "bag of events" and a few rules of the statistics about them, without there being an ontological description (apart from a long list of events)... unless we take it that all this is an illusion, to think that events are uniquely happening, and that all randomness is in our perception, and not in nature itself. That's MWI.