How can we test the holographic principle and nonlocality in quantum mechanics?

In summary: this thread discusses nonlocality and its relation to quantum mechanics and special relativity. The Holographic Universe by Michael Talbot is mentioned as a source on the topic, and The Dancing Wu Li Masters by Gary Zukav is suggested as a starting point for readers not well versed in physics. The discussion then turns to Bohm's interpretation of nonlocality and how it relates to special relativity, with entanglement brought up as possible evidence for nonlocality, and ends in a debate over the consistency of quantum theory with special relativity.
  • #141
ttn said:
Just for the record, you're aware that the "rolling" in the rofl matches perfectly the structure of the logic here: everything you've seen with your eyes all your life convinces you that the best option is to reject as fantasy/delusion what you see with your eyes...

Now you've understood the message perfectly. :biggrin:
 
  • #142
vanesch said:
What I wanted to point out simply is that Bell locality (which is such an extension of the definition of locality, and hence always has some arbitrary component over which one can argue) COINCIDES with the usual meaning of "local" when the underlying theory is deterministic and all probabilities are epistemic. Moreover, in those cases where the underlying theory is not deterministic, we can MAKE it deterministic and still keep locality (in the usual sense); that's "Patrick's theorem".

So I READ Bell - even though he didn't have this in mind - as:

The experimental outcomes (or predictions) are COMPATIBLE with an underlying DETERMINISTIC, local theory.

Well, as you've said, we've been over this before. But I still completely fail to understand what you think the logic of your point is supposed to be. Yes, it's always possible to take a stochastic theory and add extra variables to make it deterministic. And for a Bell Local stochastic theory, the deterministic theory you make this way can also be Bell Local. (Note also that for *some* Nonlocal stochastic theories, you can make a Local deterministic theory. This is just what EPR hoped would be possible in regard to nonlocal-stochastic OQM.)

Determinism, though, remains a red herring. By the way, the perfect correlations predicted by QM in the EPR-Bohm setup already *require* determinism. No local *stochastic* theory can predict those perfect correlations, so if you want a local theory you already must have a deterministic theory. This is how Bell set things up in his earlier papers -- he assumed deterministic outcomes because determinism could be deduced *from* locality. Then, in later papers, he was more careful to stress that you don't need any assumption of determinism to get the inequality (an assumption he hadn't even made before, but still, stupid people got this in their heads and couldn't get it out).
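A quick way to see that step explicitly (a standard sketch of my own, in the lambda-notation used later in the thread): for the singlet state, equal settings give perfect anticorrelation, P(+,+|a,a) = P(-,-|a,a) = 0. Writing p = P(A=+|a,lambda) and q = P(B=+|a,lambda), Bell's factorization turns those two conditions into

```latex
p\,q = 0 , \qquad (1-p)(1-q) = 0 .
```

The first equation forces p = 0 or q = 0; the second then forces the other to equal 1. Either way p, q belong to {0, 1}, i.e. the outcomes are deterministic functions of the setting and lambda, for every lambda in the support.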

You make a different argument for determinism, which is just that it's always possible to make a deterministic theory (from a stochastic one). The question is: so what? You suggest that this means that, really, all Bell proved is that theories that "are COMPATIBLE with an underlying DETERMINISTIC, local theory" must satisfy the inequality. But why in the world would you say it this way? It's simply *less clear*, *less illuminating* than my formulation, which I guess you also agree is true: Bell proved that ALL LOCAL THEORIES (whether deterministic or not) must satisfy the inequality.

When you bring in determinism, you make it *sound* like determinism is some extra assumption that one needs to derive a Bell inequality. And that simply is not true. You can deduce determinism from locality if you want. Or you can just arbitrarily refuse to consider stochastic theories (which is basically what you do -- whenever a stochastic theory is put on the table, you want to say "no, let's replace this with a deterministic theory by adding variables"). But this is all just completely beside the point.

The only important question is: does Bell Locality make sense (as a formalization of relativity's prohibition on superluminal causation)? If it does, then, because the inequalities follow from *just this def'n of locality*, there is a serious problem with relativity (assuming the experiments are telling us what we think they're telling us). Of course, one could also say: no, Bell's def'n of locality just doesn't make sense. Which is what you seem to want to be saying, but then you never actually give an argument for that. Instead, you just refuse to *talk* about whether it makes sense for stochastic theories, by immediately switching from any proposed stochastic theory to some deterministic theory. But refusing to address a certain question is not the same as proving that its answer is "no" (i.e., "no, Bell Locality does *not* make sense as a formalization of ... for stochastic theories").


The nice thing about this is that you don't even have to have the theory. You just have to check whether the Bell conditions hold.
On data, or on the stochastic predictions of any theory. IF the Bell conditions hold (on the DATA or PREDICTIONS, not on the supposed machinery of a hypothetical theory), then these DATA OR PREDICTIONS can also be reproduced by a (potentially ugly) local deterministic theory. If not, then you won't find ANY such local deterministic theory, and hence also no Bell-local stochastic theory.

Sure, once you accept that Bell Locality --> the Bell Inequalities, then you no longer need to talk about any specific candidate theory, etc. You can just see if the inequalities are respected. That's the whole point of the *theorem*, right? But I am trying to clarify the proof of the theorem (we agree about what options exist once you accept it), because that's what you and others seem to muddy with statements that, e.g., Bell made some assumption about "particles" in getting from Locality to the Inequalities... or Bell made some assumption about determinism in getting from Locality to the Inequalities. Both of these are *false*.




No, what I showed, long ago, is that you can EXTEND the lambda of any Bell local theory so as to make it completely deterministic, and that Bell locality is conserved under this operation. Whether you find this an attractive option, and whether the resulting theory has any aesthetic or physical appeal, is another matter, but it can be done.

Sure but who cares? It's just not relevant. And it fans the flames of the silly people who then say something stupid like "Aha, that's why OQM is local, because it isn't deterministic so Bell's Locality condition doesn't apply to it".



So, again, I'm not making any REQUIREMENT of determinism. Maybe I formulated this badly; I didn't mean to say that Bell insisted on determinism. I'm just saying that the way Bell extended the concept of local (which holds for deterministic theories) to stochastic theories AUTOMATICALLY IMPLIES that a deterministic, local theory is compatible with the predictions.

But this has nothing to do with "the way Bell extended the concept of local ... to stochastic theories"! Forget about locality. You can always add variables to a stochastic theory and make a new deterministic theory. I don't think it's surprising that, say, stochastic theories about particles might still be about particles even after you add these extra variables to construct a deterministic theory. It's equally unsurprising that a stochastic theory that is *already* consistent with relativity would remain so when you add extra variables. And surprising or not, it's simply not *relevant* to the question of whether Bell Locality *makes sense* as a definition of locality *for stochastic theories*. Are you saying that it doesn't -- because we never need to consider stochastic theories in the first place? That is a complete non-sequitur. We should just decide first whether you are or aren't willing to consider stochastic theories. If you're not willing, OK, then you just *assume* determinism and then there's no controversy that we can get a Bell Inequality. And if you are willing, do you accept Bell Locality as a good def'n of local causality? If so, there's no controversy that we can get a Bell Inequality. But if not -- if you accept that stochastic theories should be considered, but don't think Bell Locality makes sense as a definition of local causality for them -- then there's something to discuss... But you'll have to start that discussion by saying what you think is *wrong* with Bell Locality. And anyway, this doesn't seem to be the logical peg you hang your hat on -- instead you exit at the first step and just refuse to consider stochastic theories (and then somehow confuse yourself into thinking you've done something else?).




As such (and probably my wordings were unfortunate), IT IS NOT A RESTRICTION to say that probabilities are ignorance-based in the discussion of Bell locality. Because even if initially they weren't thought to be so (and the theory was irreducibly stochastic), we can swap them for being so (and the NEW, equivalent, theory is now deterministic). So, when talking about Bell locality, there's no NEED (it is not a requirement) to talk about non-epistemic probabilities.

Look, I agree with you about the facts here. It's true that no generality is lost by just assuming determinism from the beginning. But you must be aware of the long history of people not understanding this, and thinking that we get to *choose* whether to reject "locality" or "determinism", and then they opt for the latter and consider it some kind of proof of Bohr over Einstein. It is to answer this wrong argument that I am going out of my way to stress that we do *not* need to assume determinism to get a Bell inequality. You don't actually disagree with me here, do you?


BTW, I have serious conceptual difficulties, as I told you already, with the concept of non-epistemic probabilities, beyond just saying "things happen". But happily, this concept is NOT NEEDED to discuss Bell's stuff. I'm not saying that it is a RESTRICTION. It is a CONSEQUENCE of the way Bell defined locality for stochastic theories.

OK, good. So then the *only* question is: is the way Bell defined locality for stochastic theories *valid*? Is it *true* that any theory violating Bell Locality thereby isn't respecting "local causality" or whatever exactly we think relativity requires by way of no-superluminal-causation?


And all this was not the point I wanted to make. I wanted to make the point that, PURELY HAVING A SET OF DATA, or HAVING A SET OF PREDICTIONS FROM A BLACK BOX, one can check them against the Bell inequalities. If they don't satisfy these, then THERE IS NO HOPE of finding *either* a Bell-local stochastic theory, OR an underlying local deterministic theory (both being equivalent).

That is exactly right. But I think you can only really grasp this *after* you understand Bell's Theorem -- in particular, after you understand that the inequalities follow from Locality *alone* (no other assumptions like "determinism" or "particles" or "peanut butter").
 
  • #143
Careful said:
To put it more to the edge, IF you take the idea of classical local evolution seriously as well as the idea of initial lack of knowledge of the state, then the notion of stochastic Bell locality is highly unnatural.

Uh, that is not true. For a *deterministic* theory with local evolution, Bell's locality can be shown to be correct. It is for "irreducibly stochastic" theories (where the probabilities are hence NOT due to lack of knowledge) that the issue is more involved.

This is as simple as it is solid.

Let us assume a deterministic theory. I take it that you accept "locality" to mean that if we have an outcome at event A (say, A+ or A-), then whether it is A+ or A- is entirely determined by the objectively real things on a spacelike hypersurface, confined to the past lightcone of A. Let us call all these things "T". So there is a function A(T) which is + or which is -, right?
Ok, now let us include one more thing: there is a choice made, in A's past lightcone, of one single item (the setting of the analyser angle, say). Now, in predeterminism, even this setting is of course a function of T, but we will assume (unless this is the point where you want to attack Bell) that there is an element of free choice for this variable, which we call a, and which "comes in from the heavens".
So in fact, the outcome at A is a function of T and of this variable a.
We can write this in another way: we can define a function Pa(A | a,T) such that, if the outcome at A is to be +, given a and T, we have Pa(A+ | a,T) = 1 and Pa(A- | a,T) = 0, or of course vice versa.
So the function Pa(A | a,T) is 0 or 1, and is a (trivial) probability distribution over {A+, A-}.
Same for {B+, B-}, with another function Pb(B | b,T).

Clearly, the joint probability of the results at A and B, given T, is:

Pab(A,B | a,b,T) = Pa(A|a,T) Pb(B|b,T)

This can be checked for the 4 possible cases A+B+, A+B-...
and it is either 1 or 0 (which is obvious, given the deterministic character of the theory).

Assume now that we have ignorance over T, which is described by a probability distribution Pt(T).

Evidently, the T-averaged joint probability is now:

Pabt(A,B|a,b) = integral dT Pt(T) Pa(A|a,T) Pb(B|b,T)

This is Bell's starting point, and it is from this point that, for instance, the Clauser-Horne-Shimony-Holt (CHSH) inequality can be derived. All that is needed is the above form.
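A minimal numerical sketch of that claim (my own illustration, not from the thread): the integral above is a T-weighted mixture of terms that are linear in each local response probability, so its CHSH value is bounded by the maximum over deterministic local assignments, which one can simply enumerate:

```python
import itertools
import math

# Enumerate every deterministic local strategy for two settings per side:
# A(a1), A(a2), B(b1), B(b2), each outcome +1 or -1.  Bell's integral form
# is a mixture of such assignments (intermediate probabilities in [0,1]
# only interpolate linearly), so its CHSH value cannot exceed this maximum.
best = max(abs(A1*B1 + A1*B2 + A2*B1 - A2*B2)
           for A1, A2, B1, B2 in itertools.product([+1, -1], repeat=4))

print(best)               # 2    -> the CHSH bound for the factorized form
print(2 * math.sqrt(2))   # ~2.83 -> the quantum singlet prediction, which violates it
```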

Now, Bell extends this to inherently stochastic theories, where the Pa and Pb functions are still non-trivial probability distributions (irreducibly stochastic), but it can easily be shown (what I called "Patrick's theorem") that one can then extend T to T', with extra quantities, such that the new Pa and Pb ARE trivial again (and the new theory hence deterministic).
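A toy version of that extension (my own sketch, with made-up numbers): append to T a locally generated uniform variable u; the stochastic response rule then becomes a deterministic function of the enlarged state (T, u) with exactly the same statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_outcome(p_plus):
    """Irreducibly stochastic rule: +1 with probability p_plus, else -1."""
    return +1 if rng.random() < p_plus else -1

def deterministic_outcome(p_plus, u):
    """Same statistics, but now a deterministic function of the extended
    hidden state T' = (T, u); u is drawn once, locally, as part of T'."""
    return +1 if u < p_plus else -1

# Averaging the deterministic rule over the added variable u reproduces the
# stochastic expectation value exactly (p_plus = 0.37 as an example):
p = 0.37
us = rng.random(200_000)
print(np.mean([stochastic_outcome(p) for _ in range(200_000)]))  # ~ -0.26
print(np.where(us < p, 1.0, -1.0).mean())                        # ~ -0.26
print(2 * p - 1)                                                 # -0.26 exactly
```

Since the added variables can be generated at the two wings separately, the construction introduces no dependence on the distant setting, which is why Bell locality survives the extension.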

As I said, the only "loophole" in this business is pre-determination of the "free choices" of a and b. The problem with that loophole is that it also allows you to say that an FTL telephone is "local", because everything it will say on your side has been predetermined since the Big Bang.

So I don't see how, apart from predetermination (and then we can go home, because everything can happen and all of it is written in a big catalogue of events somewhere), a local, deterministic theory can violate, say, the CHSH inequality (as quantum predictions do).

At least if you agree upon the definition of a local deterministic theory we had above (namely, that what happens at an event is fully determined by what "is" in its past lightcone, and not outside of it, so that the choice of b cannot change the outcome at A).
 
  • #144
** Uh, that is not true. For a *deterministic* theory with local evolution, Bell's locality can be shown to be correct. It is for "irreducibly stochastic" theories (where the probabilities are hence NOT due to lack of knowledge) that the issue is more involved. **


You precisely understand where the subtlety enters (see later) - I corrected my previous abuse of language (again sorry for this, it has been a while since I used this terminology). It is very natural that spacelike correlations between hidden variables exist a priori, given *local* interactions in their past (draw a diagram of lightcones if you want to see this) - there is no apparent reason why all this should be washed away.


**Ok, now let us include one more thing: there is a choice made, in A's past lightcone, of one single item (the setting of the analyser angle, say). Now, in predeterminism, even this setting is of course a function of T, but we will assume (unless this is the point where you want to attack Bell) that there is an element of free choice for this variable, which we call a, and which "comes in from the heavens". **

Right, by making the assumption of free will in the detector settings, you throw determinism out of the window by hand. By assuming no correlations between spacelike separated events, you assume that the correlations created by interactions in the past are more or less washed out. This point is as old as the hills and has basically been dismissed by Bell with a handwaving (and flawed) argument.

Now, such correlations might not be very visible when we try to determine the initial state (since one can only measure local statistical properties) but might be deeply hidden at the Planckian level. And I do not attack Bell; I simply point out that if you take classical thinking seriously, there is no a priori reason to conclude that the Bell inequalities (whose derivation itself denies such correlations) rule this out in any way.

People like ttn have every right to believe whatever they want to, but there exist classical local models which violate the Bell inequalities blatantly (but are somewhat unnatural); it is, however, not in good taste to call such people silly, stupid or whatever - one should take the theorem with its assumptions as it stands and reason on the basis of logic.

The problem in all this discussion is that people start from free will, and thereby a priori negate the very thing they want to disprove in the first place.

Careful
 
  • #145
ttn said:
No local *stochastic* theory can predict those perfect correlations, so if you want a local theory you already must have a deterministic theory.

No, and that's the point. It all depends what you call "local" for a stochastic theory. Bell chose one definition (as I said, a very reasonable one). But it doesn't NEED to be so.
If something is "irreducibly random", then "things can happen". There can be something that looks like a conspiracy, but there's no way of telling.

You make a different argument for determinism, which is just that it's always possible to make a deterministic theory (from a stochastic one). The question is: so what? You suggest that this means that, really, all Bell proved is that theories that "are COMPATIBLE with an underlying DETERMINISTIC, local theory" must satisfy the inequality. But why in the world would you say it this way? It's simply *less clear*, *less illuminating* than my formulation, which I guess you also agree is true: Bell proved that ALL LOCAL THEORIES (whether deterministic or not) must satisfy the inequality.

No, I think he didn't PROVE it, he DEFINED it that way. He defined locality for stochastic theories in such a way that I CAN FIND AN underlying deterministic theory. I could define it, for instance, as being *information local*. That's ANOTHER definition of locality for a stochastic theory, it is SUFFICIENT for relativity's sake, and it is LESS SEVERE than Bell locality.
Quantum theory, for instance, PURELY REGARDED AS AN ALGORITHM TO CALCULATE PROBABILITIES OF OUTCOMES, and not regarded as a description of nature, satisfies information locality. It is another way of defining locality for a stochastic theory (an option which doesn't exist for deterministic theories).
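For concreteness, the two conditions being contrasted here can be written side by side (standard formulations; lambda denotes the complete state, as in Bell's papers):

```latex
% Bell locality: conditioned on the complete state \lambda,
% the joint probability factorizes:
P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda)

% Information (signal) locality: only the observable marginals
% must be independent of the distant setting:
P(A \mid a, b) \;=\; P(A \mid a), \qquad P(B \mid a, b) \;=\; P(B \mid b)
```

Quantum theory's predictions satisfy the second condition but, by Bell's theorem, cannot be reproduced by any theory satisfying the first.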

The only important question is: does Bell Locality make sense (as a formalization of relativity's prohibition on superluminal causation)? If it does, then, because the inequalities follow from *just this def'n of locality*, there is a serious problem with relativity (assuming the experiments are telling us what we think they're telling us). Of course, one could also say: no, Bell's def'n of locality just doesn't make sense. Which is what you seem to want to be saying, but then you never actually give an argument for that.

I'm saying that Bell locality is ONE possible definition of locality for a stochastic theory. It is a reasonable one, but others are possible too, such as information locality. A stochastic theory (IMO) doesn't give an account of reality, but is just an algorithm to help you calculate probabilities of outcomes. There are no "beables" in a stochastic theory, and probabilities are not physical quantities in the same way as fields are. It is not clear what is "real" in a stochastic theory. So "information locality" can do, for instance.

So I consider Bell's work as having mainly a result for the class of deterministic theories, which DO say something about the "reality out there".

Sure, once you accept that Bell Locality --> the Bell Inequalities, then you no longer need to talk about any specific candidate theory, etc. You can just see if the inequalities are respected. That's the whole point of the *theorem*, right? But I am trying to clarify the proof of the theorem (we agree about what options exist once you accept it), because that's what you and others seem to muddy with statements that, e.g., Bell made some assumption about "particles" in getting from Locality to the Inequalities... or Bell made some assumption about determinism in getting from Locality to the Inequalities. Both of these are *false*.

Well, I was trying to REFUTE the first claim (about the particles), but then you came in and spoiled it :grumpy:

I'm not claiming that Bell assumed determinism; I'm saying he EXTENDED the definition of "locality" (which is only clear for deterministic theories) to the realm of stochastic theories. He did this in a reasonable way, but other definitions are possible. The funny thing about his definition is that the class of stochastic theories it allows ARE COMPATIBLE with underlying local deterministic theories. Now, which came first, the chicken or the egg, is open to debate.

If I take "information locality" as the definition of locality for a stochastic theory, then the option of an underlying local deterministic theory is NOT open anymore.

Sure but who cares? It's just not relevant. And it fans the flames of the silly people who then say something stupid like "Aha, that's why OQM is local, because it isn't deterministic so Bell's Locality condition doesn't apply to it".

Well, if quantum theory is seen as an ALGORITHM which helps you calculate probabilities of outcomes of measurement, and doesn't have any pretension of giving an account of "the true nature of nature", then this statement is in a way correct. Clearly, it is an algorithm which is not Bell local, but if its pretension is only to be an ALGORITHM and not a DESCRIPTION of nature, then that can do. Given that its predictions are not Bell local, we know that we will not find a Bell local stochastic theory, or a local deterministic theory that can make the same predictions. So Bell locality would have been nice to have, but we don't. That's all.

We should just decide first whether you are or aren't willing to consider stochastic theories.

I can only accept stochastic theories as ALGORITHMS, not as ontological descriptions of nature. "Probability" is, to me, not a physical quantity in itself. It is only something that pertains to perception, or to information, or things like that. It is not a field, like "temperature" or "electromagnetic potential" or something. There are physical quantities that look like probabilities, such as "ratios of outcomes of experiments".

But if not -- if you accept that stochastic theories should be considered, but don't think Bell Locality makes sense as a definition of local causality for them -- then there's something to discuss... But you'll have to start that discussion by saying what you think is *wrong* with Bell Locality.

There's nothing WRONG with Bell locality, except that I could choose another definition for a stochastic theory, such as information locality, given that I don't think that a stochastic theory gives an account of NATURE, but is just a trick to calculate probabilities of outcomes. Information locality is less severe than Bell locality (from Bell locality follows information locality, but not vice versa), and is SUFFICIENT to avoid FTL paradoxes.
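The one-way implication claimed here is quick to verify (a standard two-line derivation; rho(lambda) is the distribution of complete states, assumed independent of the settings):

```latex
P(A \mid a, b)
  = \int d\lambda\, \rho(\lambda) \sum_{B} P(A, B \mid a, b, \lambda)
  = \int d\lambda\, \rho(\lambda)\, P(A \mid a, \lambda)
    \underbrace{\sum_{B} P(B \mid b, \lambda)}_{=\,1}
  = \int d\lambda\, \rho(\lambda)\, P(A \mid a, \lambda),
```

which is independent of b, so Bell locality implies information locality; the converse fails because the marginals can be setting-independent even when the lambda-level joint distribution refuses to factorize.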

To me, there's a huge difference between deterministic theories (which have the potential of describing nature), and stochastic theories (which are just algorithms). So Bell locality does have a function: it tells us what the possibilities are for an underlying local deterministic theory. It would be nice to have it: then we would KNOW that we can look for a local, underlying deterministic theory. But the fact that we don't have it does not mean that ALL FORMS OF LOCALITY are now dead. The only thing which is really dead is an ontological (hence deterministic) local description of nature. But there's still another form of locality (the only one whose violation would REALLY put relativity in difficulties), which can still be valid: information locality.

Look, I agree with you about the facts here. It's true that no generality is lost by just assuming determinism from the beginning. But you must be aware of the long history of people not understanding this, and thinking that we get to *choose* whether to reject "locality" or "determinism", and then they opt for the latter and consider it some kind of proof of Bohr over Einstein. It is to answer this wrong argument that I am going out of my way to stress that we do *not* need to assume determinism to get a Bell inequality. You don't actually disagree with me here, do you?

Well, I'm halfway in between! As I said, I think that it is nice to have determinism (even if it is HIDDEN determinism, in that there are physical reasons why we never have access to it in practice). I think it is the only sound way of hoping to have an ontological description of nature. As such, Bell locality is great.

But when we consider stochastic theories, which are just ALGORITHMS, then I don't think we have to take Bell locality as the requirement. Information locality is sufficient, because it is sufficient to avoid the paradoxes in relativity (such as phoning to your grandma to tell her not to marry your granddad). Of course the price to pay is that there won't be an ontological description of nature behind it, but ok, so be it.

OK, good. So then the *only* question is: is the way Bell defined locality for stochastic theories *valid*? Is it *true* that any theory violating Bell Locality thereby isn't respecting "local causality" or whatever exactly we think relativity requires by way of no-superluminal-causation?

It depends on whether you want to keep the possibility of an ontological description of nature. I would say that if you are (like me) requiring this, then yes, Bell locality is a correct definition of locality. But if you can live with the fact that there is no ontological description of nature, and that you can only have an algorithm for "things that happen", then no, I don't think that Bell locality is a requirement. The only thing that would REALLY give a problem for relativity is a violation of information locality.

So all the time I'm saying that Bell locality is not required for a stochastic theory, I'm actually preaching against my own convictions. I think that Bell locality is required for a theory that gives an ontological description of nature. But as such, I cannot conceive of irreducibly stochastic theories.
I can make a leap of conviction, say, and consider that there IS no ontological description of nature, but that "things just happen" and we have only algorithms that give us probabilities of outcomes. What is "really out there" is then just a big catalogue of events, which just are, with no a priori relationship between them. We are only lucky that there is SOMETHING that we can say as we experience our voyage through this catalogue of events, which is given by our irreducibly stochastic theories. As these theories don't give us any cause-effect relationship, they can just as well correctly describe "conspiracies" of "things that happen". In that case, we can STILL keep relativity, by requiring only signal locality.
 
  • #146
Careful said:
Right, by making the assumption of free will in the detector settings, you throw determinism out of the window by hand. By assuming no correlations between spacelike separated events, you assume that the correlations created by interactions in the past are more or less washed out. This point is as old as the hills and has basically been dismissed by Bell with a handwaving (and flawed) argument.

Mmm, I see. But that's a slippery slope, because if we cannot assume "independence of choice" by an experimenter, and assume that there are exactly the right choices made in the hugely complicated system of the two human observers, to make the correlations come out right, then ANYTHING GOES. ALL the experimental data we have, which have indicated some kind of causal relationship and from which we deduced the laws of nature, are based upon the assumption that we had "fair samples". If we have to assume that all this can be due to spurious correlations from former interactions in the common past lightcones of everything, then EVEN things like measuring the speed of light (where there is a correlation between sending the light pulse and receiving it) can be totally wrong. As such, our deduction of relativity might even be entirely wrong - and then there's no issue with locality in the first place :-)

Although in principle you are right, of course, it is hard to conceive how totally different mechanisms (like some electronic noise, or a human being, or, I don't know, a rat in a cage or whatever) making the choice of the settings of the polarizer would ALWAYS work out in just the right way to make the right correlations come out. And these subtle correlations are predicted correctly by a theory which IGNORES all this (quantum theory).

A hard-to-believe conspiracy.

The problem in all this discussion is that people start from free will, and thereby a priori negate the very thing they want to disprove in the first place.

The point is not so much "free will". The point is that we could set up the selection mechanism for the angle in myriads of ways, as I said, and each time this must come out right?
In other words, the rat will push the same button as the electronic noise in the resistor, as the human being, as the...
There are so many ways to select something pseudo-randomly that it is hard to believe that ALL these different ways would be correlated in exactly the same way. And IF that is the case, then you can throw just about all the experimental results we have in the dustbin, for they might all be consequences of these spurious correlations.
So the only way to make any progress is to assume statistical independence of these things. If that is wrong, then astrology might in fact be much closer to the truth than 400 years of science.
 
  • #147
Careful said:
People like ttn have every right to believe whatever they want to, but there exist classical local models which violate the Bell inequalities blatantly (but are somewhat unnatural); ...

Oh really? Can you give an example? Or do you just have in mind the super-determinism scenario, in which you reject the idea of a free choice of parameter settings? But again, that wouldn't be a local model which violates the Bell inequalities; it would be a reason why the Bell inequalities can never be experimentally tested. So I presume you have in mind exactly what you said: a "classical local model which violates the Bell inequalities blatantly". And since Bell's theorem proves this is impossible, I'd be very interested to see your alleged counterexample.
 
  • #148
** Mmm, I see. But that's a slippery slope, because if we cannot assume "independence of choice" by an experimenter, and assume that there are exactly the right choices made in the hugely complicated system of the two human observers, to make the correlations come out right, then ANYTHING GOES. **

A few notes :
(a) first of all, we simply don't know whether it is slippery or not; we basically have no intuition about the effective dynamics of a deterministic system scaled up by a factor of 10^15.
(b) It could very well be that this scaled up dynamics has only a few attractors (in terms of the correlation functions) - so then the result would be natural.
(c) the observers do not need to be human :smile:
(d) the no-FTL-information theorem would certainly impose a severe constraint (see your later objections regarding relativity).

Basically, I have not a clue, but it certainly deserves to be studied since it seems to be an interesting thing to know.

**
A hard-to-believe conspiracy.
**

Again, perhaps not. The perfect QM results exceed the CHSH bound by only about 40 percent (2√2 ≈ 2.83 against the local bound of 2 :smile: ). I can imagine that on scales very small compared to the speed of light this would not be an unreasonable figure.

**
So the only way to make any progress is to assume statistical independence of these things. If that is wrong, then astrology might in fact be much closer to the truth than 400 years of science. **

Haha, well show me a convincing mathematical argument that local deterministic systems should behave this way.

Well Patrick, now the question is whether I believe this or not. The thing is, I don't know; QM is certainly too weird (and totally incompatible with GR), and virtually nobody knows anything about natural deterministic scenarios that produce such non-local effects. Philosophically, it might seem hard to imagine a world in which free will is an illusion, but it could be so. At least one would have a local *mechanism* to understand the correlations. In this respect I find the irreducibly stochastic models also very interesting, free will being poured in locally :wink:

All I really wanted to point out is that if you examine the Bell inequalities really closely, one can go either way. All these discussions are just philosophy and religion, and in some way useless.

Careful
 
  • #149
vanesch said:
No, and that's the point. It all depends what you call "local" for a stochastic theory. Bell chose one definition (as I said, a very reasonable one). But it doesn't NEED to be so.

Can you propose some other good definition of local causality for stochastic theories? And don't tell me "signal/info locality" -- that's a different idea, right? Orthodox QM (treating the wf as a complete description of the ontology) and Bohmian Mechanics are both "signal/info local", yet clearly they are both nonlocal at a deeper level. They both involve FTL causation.


If something is "irreducibly random", then "things can happen". There can be something that looks like a conspiracy, but there's no way of telling.

No, this is sliding from talking about a theory's fundamental dynamical probabilities to talking about empirical frequencies or something. As long as you remember you're talking about some particular candidate theory, there *is* a way "of telling". This is just exactly what a theory tells us. It tells us what various happenings depend on. It's true that if you just see some event happen, there's no way a priori to know what caused it. But, in the context of a proposed theory, there is no such problem. A theory tells us what caused it (even if the explanation is merely stochastic) by telling us what the event (or its probability) *depends on* -- and then it makes sense to ask (still in the context of that theory) whether that dependence is or isn't local.



No, I think he didn't PROVE it, he DEFINED it that way. He defined locality for stochastic theories in such a way that I CAN FIND AN underlying deterministic theory.

No, he defined it the way he defined it: stuff outside the past light cone shouldn't affect the probabilities the theory assigns to events. It is also, incidentally, true that for any stochastic theory you can find an underlying deterministic theory. But that really has nothing to do with locality or Bell's definition thereof.


I could define it, for instance, as being *information local*. That's ANOTHER definition of locality for a stochastic theory, it is SUFFICIENT for relativity's sake, and it is LESS SEVERE than Bell locality.

What do you mean it's sufficient? Who says? So Bohmian Mechanics is then consistent with relativity? Why in the world, then, would YOU believe in MWI rather than Bohm?!?


Quantum theory, for instance, PURELY REGARDED AS AN ALGORITHM TO CALCULATE PROBABILITIES OF OUTCOMES, and not regarded as a description of nature, satisfies information locality.

Quantum theory so regarded isn't a theory.


It is another way of defining locality for a stochastic theory (an option which doesn't exist for deterministic theories).

Huh? Info/Signal locality is just a constraint on the predictions of the theory (it has nothing to do with the underlying guts/mechanics of the theory). What's the problem applying it to deterministic theories? Those too make predictions, yes?



A stochastic theory (IMO) doesn't give an account of reality, but is just an algorithm to help you calculate probabilities of outcomes. There are no "beables" in a stochastic theory, and probabilities are not physical quantities in the same way as fields are. It is not clear what is "real" in a stochastic theory. So "information locality" can do, for instance.

You're equivocating between two very different things. Stochastic doesn't mean "has no ontology". If you don't think a stochastic theory can have an ontology (fields or whatever) what the heck is OQM?


So I consider Bell's work as having mainly a result for the class of deterministic theories, which DO say something about the "reality out there".

Whether the laws are deterministic or not, is a very different question from whether or not there's a "reality out there." If you really don't make this distinction, it explains why you've been so resistant to understanding Bell Locality correctly. Because even *talking* about Local Causality (which Bell Locality tries to make mathematically precise) obviously presupposes that there's a "reality out there" -- but then you think this already means we presuppose determinism and disallow stochasticity. No wonder you're confused...



Well, I was trying to REFUTE the first claim (about the particles), but then you came in and spoiled it :grumpy:

Well, refute it the right way next time then! o:)


I'm not claiming that Bell assumed determinism, I'm saying he EXTENDED the definition of "locality" (which is only clear for deterministic theories) to the realm of stochastic theories. He did this in a reasonable way, but there are other definitions possible. The funny thing about his definition is that the class of stochastic theories it allows ARE COMPATIBLE with underlying local deterministic theories.

But it's not at all a funny thing *about his definition*. It's just a general point that you can never really have good reason to believe in irreducible stochasticity -- you can *always* get rid of it in favor of determinism by adding variables. And if you restrict your attention to locally causal theories, this general point remains true (of course). But you seem to think this is some kind of skeleton in the closet of Bell's definition. I just don't follow that at all.



If I take "information locality" as the definition of locality for a stochastic theory, then the option of an underlying local deterministic theory is NOT open anymore.

Please. Obviously, if you switch the definition of 'local' between the first and second half of a sentence, you can say all kinds of apparently-interesting (but actually false) things.



Well, if quantum theory is seen as an ALGORITHM which helps you calculate probabilities of outcomes of measurement, and doesn't have any pretension of giving an account of "the true nature of nature", then this statement is in a way correct. Clearly, it is an algorithm which is not Bell local, but if its pretension is only to be an ALGORITHM and not a DESCRIPTION of nature, then that can do.

You're still missing the point that Bell Locality requires a complete state specification (lambda). So if you take seriously the idea that the quantum formalism is just a mere algorithm which doesn't make any claims about what does or doesn't exist, then it IS NOT BELL LOCAL. You can't even ask if it's bell local. It's not yet a *theory* in the sense required to apply Bell's criterion.


Given that its predictions are not Bell local

That phrase makes no sense. It isn't predictions that are or aren't Bell Local, it's theories. What you can say (and what you probably meant) is that, if the predictions violate the inequalities, then you know that there is no Bell Local theory which can make those predictions.


I can only accept stochastic theories as ALGORITHMS, not as ontological descriptions of nature. "Probability" is, to me, not a physical quantity in itself. It is only something that pertains to perception, or to information, or things like that.

In other words, you *always* assume that probabilities are not fundamental. In other words, you refuse a priori to consider the possibility of a genuinely stochastic theory. Which, as we agree, turns out not to matter one way or the other -- but when you are explaining things to people it is extremely misleading to put it this way. Someone who doesn't know about "Patrick's Theorem" (which I think was actually proved by Arthur Fine in '82, though it's really a pretty obvious point so I'm sure people knew it before then) might think, based on your way of phrasing this stuff, that we are left with a choice about whether to reject locality or determinism in the face of the Bell-inequality-violating data. It's the same as the confusion that is caused by this stupid recent terminology "local realism." What the hell is "realism"? Somebody tell me please what "realism" is assumed by Bell in deriving the inequality. There isn't any -- at least, not any that can be remotely reasonably denied. Yet still the language caught on, and so now everybody thinks we *either* get to reject locality (which everybody says is crazy, because that means rejecting relativity) *or* reject "realism" (which therefore everybody is in favor of even though none of them know what the hell they mean by it!).




To me, there's a huge difference between deterministic theories (which have the potential of describing nature), and stochastic theories (which are just algorithms).

Repeating now, but no, that is not what the terms "deterministic" and "stochastic" mean.



So Bell locality does have a function: it tells us what the possibilities are for an underlying local deterministic theory. It would be nice to have it: then we would KNOW that we can look for a local, underlying deterministic theory. But the fact that we don't have it does not mean that ALL FORMS OF LOCALITY are now dead. The only thing which is really dead is an ontological (hence deterministic) local description of nature. But there's still another form of locality (the only one whose violation would REALLY put relativity in difficulties), which can still be valid: information locality.

"ontological (hence deterministic)"? Tsk tsk.

But let me repeat a crucial question here. If the lesson from all of this is that Bell Locality is *too strong*, and that *really* all relativity requires is *signal locality*, then WHAT OBJECTION COULD YOU POSSIBLY HAVE AGAINST BOHMIAN MECHANICS? This position renders Bohmian Mechanics "local" -- as local as it needs to be to be consistent with relativity. And then why, please tell me, would any remotely sane person not opt for Bohm over OQM, MWI, and all other options? Leaving aside the issue of locality, Bohm is *clearly* the most reasonable option. So if you want to redefine locality (or more precisely, the requirements of relativity) in a way that removes this one possible objection to Bohm (that it's nonlocal), then what objection remains? Why do you opt for MWI rather than Bohm if you think that all relativity really requires is signal locality?
 
  • #150
http://plato.stanford.edu/entries/qm-bohm/

"In Bohmian mechanics a system of particles is described in part by its wave function, evolving, as usual, according to Schrödinger's equation. However, the wave function provides only a partial description of the system. This description is completed by the specification of the actual positions of the particles..."

Particles, particles, particles. When you realize that a quantum isn't a particle, the particles really catch your eye.
 
  • #151
**Oh really? Can you give an example? Or do you just have in mind the super-determinism scenario, in which you reject the idea of a free choice of parameter settings? But again, that wouldn't be a local model which violates the Bell inequalities; it would be a reason why the Bell inequalities can never be experimentally tested. **

You basically don't know that as I argued in my previous post - so it seems you are just telling us what your pal Bell spits out (without any evidence).

** So I presume you have in mind exactly what you said: a "classical local model which violates the Bell inequalities blatantly". And since Bell's theorem proves this is impossible, I'd be very interested to see your alleged counterexample. **

The models I refer to do indeed require correlations beyond the lightcone, which might or might not be considered natural (the references are in the Morgan paper).

The validity of the no-correlation hypothesis has to be judged within the framework of a natural THEORY. As I seem to remember, another "natural" assumption, the fair-sampling hypothesis (in the EPR photon experiments), has been proven wrong in the framework of stochastic electrodynamics (which is very natural if you take classical physics seriously). Philosophical prejudices in this matter are of no interest whatsoever.

Careful
 
  • #152
Careful said:
**Oh really? Can you give an example? Or do you just have in mind the super-determinism scenario, in which you reject the idea of a free choice of parameter settings? But again, that wouldn't be a local model which violates the Bell inequalities; it would be a reason why the Bell inequalities can never be experimentally tested. **

You basically don't know that as I argued in my previous post - so it seems you are just telling us what your pal Bell spits out (without any evidence).

** So I presume you have in mind exactly what you said: a "classical local model which violates the Bell inequalities blatantly". And since Bell's theorem proves this is impossible, I'd be very interested to see your alleged counterexample. **

The models I refer to do indeed require correlations beyond the lightcone, which might or might not be considered natural (the references are in the Morgan paper).

The validity of the fair sampling hypothesis has to be judged within the framework of a natural THEORY. As I seem to remember, the latter has been proven wrong in the framework of stochastic electrodynamics (which is very natural if you take classical physics seriously). Philosophical prejudices in this matter are of no interest whatsoever.

Careful


So, I take it that means you won't be providing an example of a local theory that makes predictions which violate Bell's inequalities? (as opposed to staking out territory in the experimental efficiency loophole)
 
  • #153
ttn said:
So, I take it that means you won't be providing an example of a local theory that makes predictions which violate Bell's inequalities? (as opposed to staking out territory in the experimental efficiency loophole)
I am not interested in playing stupid word games about terminology. I clearly indicated that a super-deterministic theory is Bell local in the sense that the outcome of an experiment is influenced only by the events in its past lightcone. It does not, however, satisfy the no-conspiracy (or no-correlation) condition. This was admitted by your hero himself in chapter 12 of the book you like to cite so much. The examples are in the reference list of the paper, as I told you before. And as Vanesch points out, irreducibly stochastic models (satisfying a reasonable definition of locality) can reproduce the EPR correlations (without appealing to any loophole whatsoever) - Bell does exclude these on the grounds of his free-will criterion, but they do appear to satisfy another reasonable form of free will (see Price). For an introduction see http://arxiv.org/PS_cache/quant-ph/pdf/0202/0202064.pdf , a paper by Adrian Kent. So, as I said, Bell's theorem narrows down the local possibilities more accurately (local in a reasonable sense), but excludes very little if we critically examine some of its assumptions - assuming we ignore the experimental loopholes so far.

Careful
 
  • #154
Careful said:
**
Suppose that particles are all composites, and that in interactions they are neither created nor destroyed, except in pairs of particle/antiparticle (and then only in virtual form). Then one can rewrite the usual particle interactions in terms of particles in a Bohmian fashion. That is, there will be no particle creation/annihilation, so the whole thing will look again like QM, for which Bohmian mechanics has provided a remotely convincing interpretation. **

As far as I recall, the defined worldlines are not Lorentz invariant, i.e. frame dependent; it seems impossible to me to reconcile that with any notion of objective reality. It is also clear that interactions change the particle number, I guess you would have to introduce then a stochastic element in the dynamics which is again dependent upon your choice of foliation as well as a seemingly (limited) ad hoc choice of *when* the particles of the incoming species disappear and the others appear. Or is there some way to avoid these issues recently ?
Careful

The worldlines in Bohmian mechanics haven't been Lorentz invariant for, what, 50 years? If you have an objection to what Bohm thought about this, well, you can't argue with him because he is dead, but you can always read chapter 12 of "The Undivided Universe", which is devoted to this subject. It's not as if Bohm and friends didn't notice it.

My point was not to force you away from Lorentz invariance (I think it's good that humans have religious beliefs), but instead to show that there is a way of stuffing QFT into a Bohmian form. Bohmian form does not include Lorentz invariance.

Yes, interactions change the particle number, but I'm not proposing a particle solution. What I'm proposing is a preon solution. To get Bohmian mechanics to fit into the QFT form, one must suppose that the elementary particles are not, well, elementary.

In a preon model with preons that are never created nor destroyed, there is no "when" for particle creation and destruction. The assumption is that the (preon) particles are immortal. What appear and disappear are their bound states.

You only need to add one thing to this to get something like the usual QFT, and that is the ability of particle/antiparticle pairs to appear. For this, use the Feynman interpretation that antiparticles are particles moving backwards in time.

And wasn't it Schwinger himself who invented "source theory" QED, a formalism which matches the usual QED but has no unconnected diagrams? That is, in his theory, there is no vacuum in that there are no diagrams except those that connect to external lines.

Let me put it into another way of looking at it. Suppose you were stuck back in the 19th century and you were convinced that atoms are never created nor destroyed. Then what do you say when someone comes up to you and points out that paper is consumed by fire? It's not the particles that are being created and destroyed, it is instead immutable preons switching their bound states.

Carl
 
  • #155
ttn said:
Can you propose some other good definition of local causality for stochastic theories? And don't tell me "signal/info locality" -- that's a different idea, right? Orthodox QM (treating the wf as a complete description of the ontology) and Bohmian Mechanics are both "signal/info local", yet clearly they are both nonlocal at a deeper level. They both involve FTL causation.

Orthodox QM is not "non-local at a deeper level" in the sense of proposing a *physical mechanism* that conveys non-local causation, because orthodox QM is JUST AN ALGORITHM to calculate probabilities of outcomes of experiments. If you see Bohmian mechanics that way, they are on the same level: they spit out probabilities, and one shouldn't look at their mathematical constructions as representing anything physical, because they don't. You could just as well look at the listings of a C program or anything.
This is one of the reasons why I don't like OQM, because I'd like to have a description of nature, but it is not supposed to be one. It's just a calculational scheme.

Now, from the moment that you start assigning ontology to the wavefunction, then yes, the projection postulate gives you a non-local operation. But if this is seen as "C code that calculates probabilities", then it is hard to say what is "in its past light cone", no?

No, this is sliding from talking about a theory's fundamental dynamical probabilities to talking about empirical frequencies or something. As long as you remember you're talking about some particular candidate theory, there *is* a way "of telling". This is just exactly what a theory tells us. It tells us what various happenings depend on. It's true that if you just see some event happen, there's no way a priori to know what caused it. But, in the context of a proposed theory, there is no such problem.

Nothing "causes" probabilities, right ? But I guess that you mean: the formula for the probabilities in your theory, does it depend on input you have to give of events in the past light cone only, or others ?
Well, I then tell you that ANY theory has its probabilities depend on things outside of the past lightcone of where the event matters: namely just afterwards.
Probabilities are sensitive to what's in the FUTURE light cone, because mostly they flip then from a real value to 0 or 1 (because in the future, we KNOW what happened).

So if the "algorithm for the probability of event at A" is given as input, only what is in A's past lightcone, it might crank out 0.5.
If we also give it what happens at B (outside of B's past light cone), it might become 0.75.
And if we add the result of the measurement to it at event A' in A's future, then it will become, say, 1.
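A toy numerical version of those three steps (the joint distribution below is made up to reproduce the quoted numbers; nothing about it is specifically quantum):

```python
# Hypothetical joint distribution over the outcome pairs (A, B), chosen so
# that P(A+) = 0.5 and P(A+ | B+) = 0.75, matching the example above.
P = {('+', '+'): 0.375, ('+', '-'): 0.125,
     ('-', '+'): 0.125, ('-', '-'): 0.375}

p_A = sum(v for (a, b), v in P.items() if a == '+')    # 0.5, from A's past alone
p_B = sum(v for (a, b), v in P.items() if b == '+')    # 0.5
p_A_given_Bplus = P[('+', '+')] / p_B                  # 0.75, once B's result is given
p_A_given_own_result = 1.0                             # conditioning on the recorded outcome

print(p_A, p_A_given_Bplus, p_A_given_own_result)
```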

So a stochastic theory's predictions and empirically established relative frequencies are EQUIVALENT. They are just tables of probabilities, generated in the first case by an algorithm, and obtained in the second case by processing data.
A theory tells us what caused it (even if the explanation is merely stochastic) by telling us what the event (or its probability) *depends on* -- and then it makes sense to ask (still in the context of that theory) whether that dependence is or isn't local.

Well, then the probabilities of all events depend strongly on their future, because that makes them flip to 0 or 1.
In fact, from this viewpoint, stochastic theories even become deterministic: if you have the result, then the earlier probabilities can be stated with certainty to be 0 or 1.

This is why I am insisting that probabilities are not physical quantities as such, because they CHANGE as a function of what we know.

No, he defined it the way he defined it: stuff outside the past light cone shouldn't affect the probabilities the theory assigns to events. It is also, incidentally, true that for any stochastic theory you can find an underlying deterministic theory. But that really has nothing to do with locality or Bell's definition thereof.

It has much to do with it: Bell's idea is about "causal influence", which means that we are at least proposing a description of the underlying reality of nature in which such a concept could play a role.
But a stochastic theory doesn't. It's a computer program that cranks out probabilities, and is NOT a description of any reality, UNLESS it derives from a deterministic theory in which things like the initial state are simply unknown (as in statistical mechanics, or in Bohmian mechanics, for instance).

What do you mean it's sufficient? Who says? So Bohmian Mechanics is then consistent with relativity? Why in the world, then, would YOU believe in MWI rather than Bohm?!?

The *probabilistic predictions* of Bohmian mechanics, seen as a black box that cranks out probabilities (and not as some kind of ontological description of nature) are compatible with relativity, in the same way as the probabilistic predictions of OQM are (and in the latter case, it is often said that this is nothing else but a black box that calculates probabilities).

As they are equivalent algorithms, there is of course no reason to "believe" one over the other: they crank out the same numbers (maybe not in the same computing time).
However, Bohmian mechanics doesn't present itself as a black box cranking out probabilities, right? It has the pretension of being an ontological description of nature. Well, THEN one has to open the box and look whether all the formulations are local - whether the internal machinery is local. And it isn't. It cannot be written in a Lorentz-invariant way, for instance.

The same happens of course if we would take OQM to be an ontological description of nature, and if we would take the wavefunction as an element of reality. Then we could also not formulate it in a Lorentz invariant way.

But if we see them both just as machines out of which come predictions of probabilities of observation, then both are on the same level (and actually totally equivalent; it is then just a matter of which one is easier to manipulate that decides your choice).

For "boxes that crank out probabilities" but which do not have the pretention of giving us any ontological description of nature, we can take "signal locality" as a criterium, or "Bell locality" as a criterium.
They tell us different things.

Bell locality tells us whether or not we will be able to find a local, deterministic theory that can explain the predictions based upon ignorance of the initial state; such a deterministic theory could then serve as an ontological description - which our probability-spitting box doesn't provide.

Signal locality tells us whether or not we will be able to phone our grandma to tell her not to marry granddad, if the Lorentz transformations are correct (mind you, I didn't say: if SR holds :-).
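For reference, "signal locality" (no-signalling) as used here is the statement about marginals

P(A | a, b) = \sum_B P(A, B | a, b) = P(A | a),

i.e. the observable statistics on one wing do not depend on the distant setting b, whatever the joint distribution does.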

Quantum theory so regarded isn't a theory.

That's why I don't like it :smile:
I WOULD like to have a theory that pretends to describe "nature out there", but OQM is not supposed to be one: it is just a calculational trick which helps us estimate the probabilities of experimental outcomes when we give it the preparation.

That's also why all the beable stuff doesn't really apply to OQM: it doesn't have the pretension of describing anything physical. It just relates "in" states with "out" states.

Huh? Info/Signal locality is just a constraint on the predictions of the theory (it has nothing to do with the underlying guts/mechanics of the theory). What's the problem applying it to deterministic theories? Those too make predictions, yes?

What I meant was that there are not different options for locality for a deterministic theory. A deterministic theory is local or not, depending on whether the DETERMINED outcome at a point depends, or not, on things outside of the past light cone of that event. Given that that outcome is a clear physical thing (as contrasted with the *probability* of the outcome), there's no discussion about what it might mean, for a deterministic theory, to be local. Locality is originally a concept that was only clear for deterministic theories.
A deterministic local theory is both Bell local and signal local: you cannot have a deterministic theory which is NOT Bell local but which IS signal local.

You're equivocating between two very different things. Stochastic doesn't mean "has no ontology". If you don't think a stochastic theory can have an ontology (fields or whatever) what the heck is OQM?

Eh, I do think that. OQM is not an ontological description of nature, but just an algorithm. That's one of the reasons why I don't like it.

Whether the laws are deterministic or not, is a very different question from whether or not there's a "reality out there." If you really don't make this distinction, it explains why you've been so resistant to understanding Bell Locality correctly. Because even *talking* about Local Causality (which Bell Locality tries to make mathematically precise) obviously presupposes that there's a "reality out there" -- but then you think this already means we presuppose determinism and disallow stochasticity. No wonder you're confused...

Indeed :tongue2:

Mind you, having only a stochastic theory (an "algorithm") doesn't mean that we deny the *existence* of an ontological reality, but only that the algorithm doesn't describe it.

For instance, think of the following situation: there's a 4-dim spacetime manifold, in which an entire list of events is fixed. They have no real relationship amongst themselves, "things just happen". This could be an ontological picture of a "totally arbitrary" universe.

And now, it might be that there are certain relationships in that 'bag of events' which are such that certain ratios of events are respected. Why? It just is so. If we capture the calculational rules that do so, then that's a stochastic theory: some algorithm that works more or less when doing statistics on essentially arbitrary sets of events.
This has no descriptive power, of course; it is just the observation that certain statistics are respected. That's how I see irreducibly statistical theories (such as OQM).

But it's not at all a funny thing *about his definition*. It's just a general point that you can never really have good reason to believe in irreducible stochasticity -- you can *always* get rid of it in favor of determinism by adding variables. And if you restrict your attention to locally causal theories, this general point remains true (of course). But you seem to think this is some kind of skeleton in the closet of Bell's definition. I just don't follow that at all.
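A minimal sketch of that "adding variables" move (the obvious construction, not anyone's specific one): draw a hidden uniform variable once, then make every outcome a deterministic function of it.

Code:
import random

# Deterministic completion of a stochastic rule P(outcome = +1) = p.
# The hidden variable lam is drawn once (say, at the source); afterwards
# the outcome is a deterministic function of lam, and all the randomness
# is ignorance of lam.
def deterministic_completion(p):
    lam = random.random()                  # hidden variable
    return lambda: +1 if lam < p else -1   # deterministic given lam

# Averaged over many draws of lam, the original statistics reappear:
trials = [deterministic_completion(0.3)() for _ in range(100_000)]
print(trials.count(+1) / len(trials))      # ~0.3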

Well, from the "random bag of events" story, you figure that capturing regularities in the distribution of arbitrary events (= stochastical theory) or to complete it with extra variables to turn this into a deterministic ontological description of nature, is a whole leap. The statistical rules are just calculational algorithms, while the latter is supposed to describe "what goes on" (while in fact, nothing goes on, and arbitrary events just seem to be distributed in ways which obey certain rules when counting, without any "cause" to it).

Please. Obviously, if you switch the definition of 'local' between the first and second half of a sentence, you can say all kinds of apparently-interesting (but actually false) things.

No, because "locality" for a deterministic theory (pretending at an ontological description) is entirely clear. For a stochastic theory, it depends on how one looks at it.

You're still missing the point that Bell Locality requires a complete state specification (lambda). So if you take seriously the idea that the quantum formalism is just a mere algorithm which doesn't make any claims about what does or doesn't exist, then it IS NOT BELL LOCAL. You can't even ask if it's bell local. It's not yet a *theory* in the sense required to apply Bell's criterion.

You've got it.
That phrase makes no sense. It isn't predictions that are or aren't Bell Local, it's theories. What you can say (and what you probably meant) is that, if the predictions violate the inequalities, then you know that there is no Bell Local theory which can make those predictions.

Yes. A shortcut.

In other words, you *always* assume that probabilities are not fundamental. In other words, you refuse a priori to consider the possibility of a genuinely stochastic theory.

Indeed. I can accept a stochastic theory as "capturing certain regularities of a totally arbitrary distribution of events - things happen", but not as any ontological description of nature.
And as such, there can be other notions of "locality" that apply to *algorithms* rather than to *ontological descriptions*.

Someone who doesn't know about "Patrick's Theorem" (which I think was actually proved by Arthur Fine in '82, though it's really a pretty obvious point so I'm sure people knew it before then) might think, based on your way of phrasing this stuff, that we are left with a choice about whether to reject locality or determinism in the face of the Bell-inequality-violating data. It's the same as the confusion that is caused by this stupid recent terminology "local realism." What the hell is "realism"? Somebody tell me please what "realism" is assumed by Bell in deriving the inequality.

Maybe "realism" is the idea that the theory describes an ontology, or is just an algorithm ?
Bell assumed "beables", things that correspond to reality, in a theory.
I don't know what Bell would say about a computer program that spits out probabilities as a function of what one gives it as input
(data about the past light cone, about things happening at spacelike intervals, or data about the future of said event, where the result is hence known).

There isn't any -- at least, not any that can be remotely reasonably denied. Yet still the language caught on, and so now everybody thinks we *either* get to reject locality (which everybody says is crazy, because that means rejecting relativity) *or* reject "realism" (which therefore everybody is in favor of even though none of them know what the hell they mean by it!).

I agree with you: I want to keep both! But "realism" (a potential description of an ontology) was already out the window with OQM. Only the patterns in observed ratios of observations were to be the object of OQM, with a kind of prohibition on thinking about an underlying ontological picture. Personally, I don't like that idea at all. And in fact, I think most people who pay lip service to it don't really mean it, and assign some form of ontology to the elements they manipulate. But the "official Bohr doctrine" says that there's no such thing as an "underlying ontology".

But let me repeat a crucial question here. If the lesson from all of this is that Bell Locality is *too strong*, and that *really* all relativity requires is *signal locality* then WHAT OBJECTION COULD YOU POSSIBLY HAVE AGAINST BOHMIAN MECHANICS? This position renders Bohmian Mechanics "local" -- as local as it needs to be to be consistent with relativity. And then why, please tell me, would any remotely sane person not opt for Bohm over OQM, MWI, and all other options? Leaving aside the issue of locality, Bohm is *clearly* the most reasonable option.

As I said, I think that Bell locality is the correct requirement for an ONTOLOGICAL description of nature (which, in my opinion, should also be deterministic). However, signal locality is good enough for a probability algorithm if we abandon the idea of giving an ontological description of nature (and reduce it to "things happen" in the big bag of events out there), and limit ourselves to observing certain regularities in the distribution of these events, which can be calculated through certain algorithmic prescriptions.
If we only require that these calculational rules remain invariant under change of observer, then signal locality is OK. If Bohmian mechanics is seen this way (as an algorithm to spew out probabilities), it is fine, as a signal-local procedure for calculating probabilities. The "particles and trajectories and non-local forces" are then not "beables" but just variables in the computer program that help you calculate probabilities.
The wavefunction and the projection play the same role in OQM (and never had any other pretension in OQM).
But then there's no real distinction between Bohm and OQM: they are both black boxes that spew out probabilities. One is not more or less reasonable than the other, because they come to the same numerical results, and neither represents anything.

However, as a description of nature - where Bell locality is required (and, in my opinion, determinism too) - something OQM doesn't pretend to give, Bohm fails (as does any theory that is empirically equivalent to OQM, for that matter).

So there IS no local ontological description of nature that can reproduce the OQM predictions. That's the "realism" part I suppose.

So there is just this "bag of events" and a few statistical rules about them, without there being an ontological description (apart from a long list of events)... unless we take it to be an illusion that events happen uniquely, and hold that all randomness is in our perception, not in nature itself. That's MWI.
 
  • #156
**The worldlines in Bohmian mechanics haven't been Lorentz invariant for, what, 50 years. If you have an objection to what the Nobel prize winning physicist thinks about this, well, you can't argue with him because he is dead, but you can always read chapter 12 of "The Undivided Universe", which is devoted to this subject. It's not like Bohm and friends didn't notice it. **

:rolleyes: It is not because I know that they noticed it that this issue vanishes into thin air! Of course you can say that you do not need Lorentz invariance at this level of reality (since the lack of it does not lead to a falsifiable prediction), but what is the point then of BM, given that it does not solve the measurement problem either and complicates things? So, given that reality in BM is terribly non-local and frame dependent, and does not solve the measurement problem, why should we appreciate it?

**
My point was not to force you away from Lorentz invariance (I think it's good that humans have religious beliefs), but instead to show that there is a way of stuffing QFT into a Bohmian form. Bohmian form does not include Lorentz invariance. **

It is good to have religious beliefs as long as you do not try to support them by abusing theorems over and over again, ad nauseam :yuck:

**
Yes, interactions change the particle number, but I'm not proposing a particle solution. What I'm proposing is a preon solution. To get Bohmian mechanics to fit into the QFT form, one must suppose that the elementary particles are not, well, elementary. **

Right, and here we agree very well, but you need to go a step higher - a gear up - you have to try to explain why the wavefunction, which appears at the same time as a statistical tool AND a physical guidance mechanism in BM, can undergo a terribly non-local collapse and/or spontaneously create a new wave (depending on your interpretation). BM is incapable of doing so, and a solution to that problem requires much more...
 
  • #157
Careful said:
Right, and here we agree very well, but you need to go a step higher - a gear up - you have to try to explain why the wavefunction, which appears at the same time as a statistical tool AND a physical guidance mechanism in BM, can undergo a terribly non-local collapse. BM is incapable of doing so, and a solution to that problem requires much more...

You know, the wavefunction does NOT collapse in BM. That's why I can make Bohmians nervous by saying that it has some MWI flavor to it :biggrin:

Let's give it a go and have them bite :tongue2:

You can, in BM, continue to work with the non-collapsed wave function (as well as you can, FAPP, collapse it, because the part that is not relevant will not change any trajectory anymore in any significant way).
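(For readers following along: this FAPP collapse is usually phrased via the conditional wave function. If Y(t) is the actual Bohmian configuration of the environment/apparatus, the system's particles are guided by

\psi_t(x) = \Psi_t(x, Y(t)),

which after a measurement behaves just like the collapsed packet, even though the universal wave function \Psi never collapses.)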

The thing that's much more fuzzy IMO in Bohmian mechanics is the statement that "perception" is due only to the particle positions and not to the wavefunction, although the wavefunction (with all its ghost solutions - just as in MWI) is STILL part of the ontology of BM; and, at the same time, the denial that we perceive the exact particle positions (in order to be able to satisfy the initial probability condition and agree with the Born rule). I gave it probably less thought than I should have, but "something feels fishy" there. If I, as an observer, know *perfectly well* my particle positions, then BM, if I'm not mistaken, will not give me the same predictions as QM, because I need initially to have some "fuzziness" in them, which will be conserved under Bohmian dynamics; a fuzziness which needs to correspond to the wavefunction's norm.
But then, this means that my perception of reality is also conditioned in part by this wavefunction (which contains other "ghost" terms).
And I'm really not very far from "MWI with a non-local token".
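(The "fuzziness conserved under Bohmian dynamics" referred to above is the standard equivariance property: if the initial ensemble is distributed as \rho_0 = |\psi_0|^2, then the continuity equation implied by the Schroedinger equation,

\partial_t |\psi|^2 + \nabla \cdot (|\psi|^2 \nabla S / m) = 0, with \psi = R e^{iS/\hbar},

guarantees \rho_t = |\psi_t|^2 at all later times.)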
 
  • #158
Hi Patrick,

I guess we are now at the point where the very complicated issue of free will determines to what extent our laws of nature are wrong/fine: free will, the arrow of time, (relativistic) causality - all things which are inextricably connected to each other... All the choices we know of are so polarized up till now that we probably haven't understood anything of it yet. The Bell inequalities deeply rely upon *one* polarization of these issues, which explains (a) why the lack of a true experimental violation still generates so much heat (here people can have the same notion of free will, but deny nature is non-local), and (b) why all this discussion really belongs in the philosophy forum, since it seems unlikely to me that the issue will be settled one day (there are so many reasonable alternatives possible). ttn tells us we cannot even define reality, while his entire reasoning is based upon the *reality* of Bell's notion of free will (and his presumption of a conclusive Bell test in the future)!

Cheers,

Careful
 
  • #159
**You know, the wavefunction does NOT collapse in BM. That's why I can make Bohmians nervous by saying that it has some MWI flavor to it :biggrin: **

:rofl: :rofl: No Patrick, that depends on how you interpret BM: as a Bohmian you have to say there is a collapse of the wavefunction, OR you are indeed slightly *changing* QM (but you can equally do that by slightly modifying Copenhagen) - but also here, some ``consciousness'' is needed (the act of the observer which suddenly creates a new (non-local?) wave, which has no effect in the PAST light cone). The latter option, which is what you write below, is worth considering (and obviously I considered this before), albeit I presume it could lead to a falsifiable prediction if one is clever enough. Let's not play these silly games ...


**
You can, in BM, continue to work with the non-collapsed wave function (as well as you can, FAPP, collapse it, because the part that is not relevant will not change any trajectory anymore in any significant way). **

What you write further on is indeed more or less what I was appealing to by mentioning the double role of the wave function. Basically, the use of the wave function in BM does not make sense to me vis-à-vis a one-particle scenario.

Careful
 
  • #160
Careful said:
ttn tells us we cannot even define reality, while his entire reasoning is based upon the *reality* of Bell's notion of free will (and his presumption of a conclusive Bell test in the future)!

The "free will" issue is not so clear, I'd say. For instance, I don't think that there is free will, but this is just an illusion of our passive perception (just as well in an MWI view as in a classical, deterministic view).

The point of Bell does not entirely depend on a notion of free will, but on the notion that two totally distinct processes (even with the same past light cone) will be statistically independent of one another. A rat pushing one of 3 buttons, or an analog amplifier sampling a random value from resistor noise, or whatever, although of course entirely determined by the same past light cone, should not, on any reasonable expectation, be FULLY statistically correlated with a light pulse traveling in an optical fibre.

Although in principle you are right that there MIGHT be such a strange correlation (because both are functions of the SAME initial conditions), it would invalidate just about all the experimental knowledge we have. I was pretty serious about the astrology example: if things are intrinsically correlated in such a way that we should expect *perfect* correlations between just about any process that can select one of 3 possibilities (as I said, a rat pushing a button, randomly sampled resistor noise, a human pushing a button, the mechanics of rolling dice...) whenever they are used in an EPR experiment, then you can just as well state that ALL we've ever seen is the appearance of one big coincidental correlation. There's no hope of ever deducing any law of nature in that case, or of deducing any causal relationship.

So, when we write P(A|a,T) instead of P(A|T), which would be the correct formulation, with a(T) and b(T) the two "decisions" taken to select the angles, it is implicitly assumed that the functions determining a(T) and b(T) pick out OTHER parts of T than the parts relevant for the light pulses passing the polarizers. For the light pulses, T has more to do with what happened in the light source; for the settings, with whatever was at the origin of the button-pushing process. It is indeed a hypothesis, but a very reasonable one, that these correspond to distinct parts: what is emitted by the light source is not significantly influenced by what happens in the brain of the rat, or vice versa.
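(For reference, in the usual notation rather than vanesch's T: the factorization Bell works with, and the "no conspiracy" assumption being discussed here, are

P(A, B | a, b, \lambda) = P(A | a, \lambda) P(B | b, \lambda) and \rho(\lambda | a, b) = \rho(\lambda),

with \lambda the complete state of the pair and a, b the locally chosen settings. It is the second condition that a super-deterministic correlation between settings and source would deny.)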

You might do on the Planck scale what you want; if this is the conclusion, then we'd better concentrate on table turning, astrology and laying out cards, because at least the people doing that UNDERSTOOD that everything is correlated in very strange ways, and that scientists were totally deluded in thinking that they could chop physical processes into pieces by testing them in the lab using statistical analysis.

As I said, if these correlations are to be taken seriously, an existing FTL telephone is even "local", because there's just this funny coincidence that the FTL phone talking to me tells me the exact same words as you are speaking into it on Titan. There's no causal relationship; you were simply predetermined to say exactly the same thing as the FTL phone was telling me.
 
  • #161
**The "free will" issue is not so clear, I'd say. For instance, I don't think that there is free will, but this is just an illusion of our passive perception (just as well in an MWI view as in a classical, deterministic view).

The point of Bell is not entirely depending on a notion of free will, but on the notion that a totally distinct process (even with the same past lightcone) will be statistically independent. **

Right, so some call that assumption free will; others call it the independent apparatus assumption and link it to local rotational invariance. Anyway, it seems unlikely as the outcome of a deterministic theory, and to say more about it, further study is certainly required.

**A rat pushing one of 3 buttons, or an analog amplifier sampling a random value from resistor noise, or whatever, although of course entirely determined by the same past light cone, should not, on any reasonable expectation, be FULLY statistically correlated with a light pulse traveling in an optical fibre. **

Well, sure not, but the QM correlations are not that much higher either... what is your point, precisely?

**
Although in principle you are right that there MIGHT be such a strange correlation (because both are functions of the SAME initial conditions), it would invalidate just about all the experimental knowledge we have. **

But we know such correlations DO exist in irreducible statistical models (for example, thermal baths). On the other hand, it is clear that the description of QM (as it stands now) ignores certain interactions which are certainly present in any realistic setup. Moreover, such a problem - as you indicate - becomes even WORSE when we give a full quantum mechanical description of the Planck scale degrees of freedom; the QUANTUM statistics of such a scenario, which should yield our large scale QM theory of particles, seems even more conspiratorial and ILL defined (as it stands now) than the CLASSICAL proposal. Basically, this scaling problem is inherent to quantum gravity, and it appears to me that the deterministic scenario is much less exotic/conspiratorial and certainly easier (although already fairly impossible) to study. Try to see things from that point of view... quantum gravity would make our world even crazier; the deterministic ansatz is actually CONSERVATIVE, in the sense that God is assumed to play with gears and slippers at the Planck scale.

I guess this addresses your later issues... This is what I am trying to tell you all the time: IF we find a natural, deterministic PLANCK scale model which reproduces QM on large scales naturally, THEN this is the utmost supreme understanding of nature we can achieve (otherwise we are left with the even greater problem of why our world comes out of so many crazy possibilities).

You must realize that when a genius like Gerard 't Hooft goes for super-determinism, it is because the alternative, leaving QM untouched, is even crazier as an understanding of nature. Do you see now how poor our understanding of nature is if we take our theories to their consequences?

Cheers,

Careful
 
  • #162
Careful said:
:rolleyes: It is not because I know that they noticed it that this issue vanishes into thin air! Of course you can say that you do not need Lorentz invariance at this level of reality (since the lack of it does not lead to a falsifiable prediction), but what is the point then of BM, given that it does not solve the measurement problem either and complicates things? So, given that reality in BM is terribly non-local and frame dependent, and does not solve the measurement problem, why should we appreciate it?

Where did you get the (wrong) idea that BM *doesn't* solve the measurement problem?
 
  • #163
Vanesch, I don't have time to get into all this at such length, so just a few very quick notes.

* A black box algorithm that makes predictions but doesn't make any ontological commitments, isn't a theory. That doesn't mean it's a bad thing to have such algorithms. They're just not theories, that's all, because they're not trying to *explain* -- merely to describe.

* You raise this canard about all probabilities depending on stuff outside the past light cone, namely, stuff in the *future* (which you say renders all the probabilities 0 or 1). But this is based on your AGAIN having switched from the fundamental dynamical probabilities of a theory, to something epistemic. The latter is simply not what the Bell Locality condition is *about*, so all of your comments on this are pointless.

* You're still confused about determinism and ontology. Those aren't the same issue, and I find it very revealing that it is *you* who refuses, on principle a priori, to consider the possibility of a stochastic theory -- even when what got this spat started is your claim that *Bell* arbitrarily assumed determinism. Kettle? You're black.


*
A deterministic local theory is both Bell local and signal local: you cannot have a deterministic theory which is NOT Bell local but which IS signal local.

Bohmian Mechanics is not Bell local, but is signal local.
 
  • #164
vanesch said:
Let's give it a go and have them bite :tongue2:

Chomp, chomp!



The thing that's much more fuzzy IMO in Bohmian mechanics is the statement that "perception" is due only to the particle positions and not to the wavefunction, although the wavefunction (with all its ghost solutions - just as in MWI) is STILL part of the ontology of BM,

What's the problem? All along we thought we were perceiving matter made of particles. BM just keeps that (and adds a spooky mysterious invisible thing which is orchestrating the movements of the particles).


and, at the same time, the denial that we perceive the exact particle positions (in order to be able to satisfy the initial probability condition and agree with the Born rule).

Huh? It's a theorem in BM that measurements (which remember in BM are made using devices that are made of particles obeying BM!) cannot give us more knowledge of the particle positions than is implied by applying the Born rule to their effective wave functions. There's no *extra* assumption about a "fuzziness" in perception that maintains Born.


But then, this means that my perception of reality is also conditioned in part by this wavefunction (which contains other "ghost" terms).
And I'm really not very far from "MWI with a non-local token".

As you said, you just haven't understood BM well enough on this point. See:

http://www.arxiv.org/abs/quant-ph/0308039
 
  • #165
ttn said:
* A black box algorithm that makes predictions but doesn't make any ontological commitments, isn't a theory. That doesn't mean it's a bad thing to have such algorithms. They're just not theories, that's all, because they're not trying to *explain* -- merely to describe.

But that is ALL that OQM pretends to do. OQM says: "there are just algorithms, and that's all you can have".
From that PoV, signal locality is good enough, no?


* You're still confused about determinism and ontology. Those aren't the same issue, and I find it very revealing that it is *you* who refuses, on principle a priori, to consider the possibility of a stochastic theory -- even when what got this spat started is your claim that *Bell* arbitrarily assumed determinism. Kettle? You're black.

I don't think I'm confusing the issues. I think I make the distinction between the two, but since I consider things like "dynamical probabilities" bull****, and since, to me, probabilities can ONLY be "ignorance based", I claim that fundamentally stochastic theories are just algorithms. When you turn them into deterministic theories with random variables, that's different, because now the "random variables" can be assumed to have a physical existence and value, and they are only random because of our ignorance about them.
So theories that contain random variables, to which probabilities are assigned, but to which we can also assign some element of physical existence, are, in my vocabulary, still deterministic theories, because we can consider that these random variables DO have specific values, and we are simply ignorant of them, which gives them their random character.

But a scheme that ends by spitting out a series of probabilities and calls that "dynamical" is nothing else but an algorithm, and cannot contain a description of a *mechanism*.

So yes, I claim that the only possible ontological descriptions are deterministic (possibly containing random variables) in their approach.
I would like to see such a deterministic description of nature, but maybe it doesn't exist, in which case we have to limit ourselves to *non-descriptive* algorithms.

In the latter case, there's no issue in requiring Bell locality: signal locality will do (as we're not looking for a description of any ontology, but just for an algorithm that will allow us to calculate probabilities, without any pretension of ever describing nature on an ontological level).

Bohmian Mechanics is not Bell local, but is signal local.

Exactly, so as a non-descriptive algorithm that spews out probabilities, it is just fine, as is OQM (which never had any other pretension).
But then, as I said, there's no point in claiming that the particles and forces appearing in Bohmian mechanics have any physical meaning, any more than the wave function in OQM does.
Given that it spits out the same set of numbers as OQM, they are in fact two equivalent algorithms, and there's not much point in arguing over it.

But I understand that Bohmians want to confer a kind of ontological status on their theory. In that case, of course, things change, because then we should check whether its *internal mechanism* is local. Given that it is a deterministic mechanism, its locality is equivalent to Bell locality of the predicted probabilities, and then it fails.

So, true, Bohmian mechanics is not worse (on the contrary) than OQM: both are acceptable (signal-local) algorithms to calculate probabilities.

But OQM doesn't go any further (doesn't propose any ontology). So that's where OQM stops.
Bohmians want to give their theory ontological status, and then we open the box, and see that the machinery inside is non-local. So this part of the story doesn't fit.
 
  • #166
ttn said:
Huh? It's a theorem in BM that measurements (which remember in BM are made using devices that are made of particles obeying BM!) cannot give us more knowledge of the particle positions than is implied by applying the Born rule to their effective wave functions.

Do you mean that, if I know the exact positions and motions of the particles in an "observer", I cannot extract more information than allowed by the uncertainty relations?

In other words, suppose that particles {q1,q2,...q20} are "the observer" and particles {q21, ... q30} are "the system". Does it mean that if I know EXACTLY the positions of {q1...q20} over time, I cannot know more than what's allowed by the uncertainty relations about {q21...q30}?

"Being the observer" means, to me, "knowing exactly one's state", so there's no probability distribution to be assigned to {q1...q20} here, because it is the observer, which "knows" its state perfectly well, its "knowledge" BEING the state.

I thought that this only came about if we also allowed for an initial uncertainty on {q1...q20}...
 
  • #167
vanesch said:
"Being the observer" means, to me, "knowing exactly one's state", so there's no probability distribution to be assigned to {q1...q20} here, because it is the observer, which "knows" its state perfectly well, its "knowledge" BEING the state.

I thought that this only came about if we also allowed for an initial uncertainty on {q1...q20}...


Ah, I see that the paper you quoted answers exactly that:
The possession by experimenters of such information must thus be reflected in correlations between the system properties to which this information refers and the features of the environment which express or represent this information. We have shown, however, that given its wave function there can be no correlation between (the configuration of) a system and (that of) its environment, even if the full microscopic environment Y—itself grossly more than what we could conceivably have access to—is taken into account.

I didn't follow the entire paper, but ok, I have to admit that this is impressive if there's no other caveat somewhere...
 
  • #168
vanesch: But that is ALL that OQM pretends to do. OQM says: "there are just algorithms, and that's all you can have".
Thanks for all your posts, vanesch. Most informative.

Sorry to bother you, but does the above mean that "particle" in the excerpt below is just a mathematical artifice?

"Bohm's particle is viewed as having a definite position and velocity at all times, with a probability distribution ρ that may be calculated from the wavefunction ψ. The wavefunction "guides" the particle by means of the quantum potential Q. Much of this formalism was developed by Louis de Broglie; Bohm extended it from the case of a single particle to that of many particles, and also, by considering the particles in the measuring apparatus, re-interpreted the equations to include observation..."
 
  • #169
ttn said:
Where did you get the (wrong) idea that BM *doesn't* solve the measurement problem?

My statement of course depends upon what you mean by the measurement problem. If you simply mean that you want to avoid the nonlocal collapse of the QM wave, then of course BM can do that - although it then distinguishes itself from standard QM in a measurable way. What this proposal does *not* address is that upon a *non-physical* act of the observer (ah yes, no causal effects in the past light cone here, no pre-determinism), a new local wave packet (guiding wave) in position space is constructed. If you ask me, that is no solution to the measurement problem (even Einstein found this ``solution'' cheap). I have plenty of other problems with Bohm - de Broglie (which I am not going to list here):
(a) basically, what is a measurement? (what do we call a measurement of the position of an electron in an atom?)
(b) in a multiparticle system, God plays dice in configuration space. How can you have any hope of doing physics in this way?
(c) Truly speaking, I cannot make sense of non-local guidance mechanisms; what does it mean that our ignorance influences the dynamics of particles?
(d) in QFT on curved spacetime, the foliation would determine the choice of trajectories; what is the physical meaning of all this?
...

Actually, Patrick, you keep on mentioning decoherence all the time, but don't you see that this is at least as conspiratorial as super-determinism?? Take your decoherence argument to the Planck scale and try to figure out why long range entanglement (or even elementary particles) should exist! People do study classical chaotic interacting models and do discover that correlated regions appear; such synchronisation effects are well known but poorly understood.

Careful
 
  • #170
vanesch said:
But that is ALL that OQM pretends to do. OQM says: "there are just algorithms, and that's all you can have".
From that PoV, signal locality is good enough, no?

Look, there are just two different possible attitudes you could take here. You could take the "completeness doctrine" at face value, and say that the wave function in OQM provides a literal description of the physical state of quantum systems. Then it's really a theory in the sense I am using that term. Or, as you suggest, one could just forget about objective reality and use the QM formalism as a black box algorithm. But that is just failing to address the question at hand (about local causality), not answering it in a certain way (i.e., providing an example of a causally local theory, or proving that Bell Locality doesn't make sense or something).

I mean, maybe we need to go back to the beginning. There *is* an objective reality, right? So if you have some mathematical black box algorithm that allows you to predict things -- but which *doesn't* provide an account of that objective reality -- that's *fine*... it's not that I object to having such a thing... it's just that it doesn't address the kind of question that a *theory* might address, which is what that reality is like. The mere fact that you can construct some algorithm to make predictions without telling an ontological story doesn't somehow make the world disappear. You're just not *talking* about it. But the question of whether or not the causality out there in the world is or isn't relativistically local remains. Your not talking about objective reality right now doesn't make that question magically disappear or become meaningless.




I don't think I'm confusing the issues. I think I make the distinction between the two, but since I consider things like "dynamical probabilities" bull****, and since, to me, probabilities can ONLY be "ignorance based", I claim that fundamentally stochastic theories are just algorithms.

So you just accept as an a priori truth that objective reality is deterministic. OK, I mean, I actually lean that way too. I wouldn't claim it as an a priori truth, but certainly all other things being equal it's better to have a deterministic theory than not -- especially since you could never possibly have a strong argument for the stochasticity in a given theory being irreducible (Patrick's theorem). But, nevertheless, as a strategic point, I think it is very important to point out that Bell's inequalities in principle apply both to local deterministic and to local stochastic theories. You don't want to even consider the latter. Ok, fine, but some other people do, and it's important for them to know that they're barking up the wrong tree. If you're made uncomfortable by the non-locality Bell's Theorem proves must be present in any deterministic theory, then you should be *very* uncomfortable, because you CANNOT RESTORE LOCALITY BY DROPPING DETERMINISM.

And that is true whether or not your philosophical sensibilities permit you to take irreducible stochasticity seriously.

BTW, Patrick, does this mean you are unwilling to consider the GRW theory as a serious version of QM?




When you turn them into deterministic theories with random variables, that's different, because now the "random variables" can be assumed to have a physical existence and value, and they are only random because of our ignorance about them.
So theories that contain random variables, to which probabilities are assigned, but to which we can also assign some element of physical existence, are, in my vocabulary, still deterministic theories, because we can consider that these random variables DO have specific values, and we are simply ignorant of them, which gives them their random character.

Is GRW then "really" a deterministic theory? How about orthodox QM with a "cut" put in at some artibtrary level of "macroscopicness" (however that is measured)?



In the latter case, there's no issue in requiring Bell locality: signal locality will do (as we're not looking for a description of any ontology, but just for an algorithm that will allow us to calculate probabilities, without any pretension of ever describing nature on an ontological level).

I'm sorry, this just doesn't make any sense. "Signal locality" is about whether you can transmit a message faster than light. Bell Locality is about whether there are FTL causal influences. They're not just different "formulations" of the same concept, locality. They're about two very different things. So it's not an issue of "signal locality will do". If what you're interested in is whether it's possible to send signals, then yeah, signal locality will do. If, alternatively, what you're interested in is whether or not there exist FTL causal influences out there in the world, then only Bell Locality will do. And if you're interested in ordering some food, look at a menu. None of the 3 "will do" for the two other purposes.



Exactly, so as a non-descriptive algorithm that spews out probabilities, it is just fine, as is OQM (which never had any other pretension).
But then, as I said, there's no point in claiming that the particles and forces appearing in Bohmian mechanics have any physical meaning, any more than the wave function in OQM does.

Have you gone completely crazy? Now you don't think Bohm's theory really means it when it posits that particle-plus-wf ontology?


Given that it spits out the same set of numbers as OQM, they are in fact two equivalent algorithms, and there's not much point in arguing over it.

Yes, they make the same predictions. But on the other hand THEY ARE COMPLETELY DIFFERENT THEORIES because they posit completely different ontologies.


But I understand that Bohmians want to confer a kind of ontological status on their theory.

That's not quite phrased right. Bohmians think that Bohm's theory provides the best available candidate picture of the world. That picture is the ontology of the theory, just like some other picture is the ontology of MWI or of GRW. What is the confusion here?


In that case, of course, things change, because then we should check whether its *internal mechanism* is local. Given that it is a deterministic mechanism, its locality is equivalent to Bell locality of the predicted probabilities, and then it fails.

"Bell locality of the predicted probabilities"? Sheesh. I can only correct your refusal to understand the meaning of "Bell Locality" so many times...


So, true, Bohmian mechanics is not worse (on the contrary) than OQM: both are acceptable (signal-local) algorithms to calculate probabilities.

You're drunk or something. They're the *same* algorithm to calculate probabilities. Where they differ is in the ontology they posit (well, and in the clarity of their formulations). You're now saying that really we shouldn't take the ontology of Bohm's theory seriously, and we should just consider it as another black box algorithm... except it's really just *the same* black box algorithm?? Did you overdose on some kind of positivism pills or something?


Bohmians want to give their theory ontological status, and then we open the box, and see that the machinery inside is non-local. So this part of the story doesn't fit.

Forget about "opening the box" of a theory. Start with the existence of a real world out there. Insist that there aren't any FTL causal influences. Derive an inequality from this. Test this empirically and find that it's violated. Infer that there *do* exist nonlocal causal influences in nature. That already means that no local theory is going to work. Bohm's theory is just then one among many possible empirically viable non-local theories. But nobody's infering anything about nature merely by "looking inside the Bohmian box." The point is just the reverse - you infer that any theory at all is going to have to have nonlocal mechanisms "in its box", because we already know going in that NATURE is nonlocal.
 
  • #171
Careful said:
My statement of course depends upon what you mean by the measurement problem.

I don't think this is a controversial point. The measurement problem (for OQM) is that it provides two different and incompatible dynamical laws depending on whether or not a "measurement" is happening, but it never defines that term. So the theory (quoting my pal Bell) is unprofessionally vague and ambiguous. (Relatedly, some people think of the measurement problem as the Schroedinger cat problem -- if you try to construct a non-vague theory by simply getting rid of the second kind of dynamics, then the theory no longer predicts that measurements have definite outcomes, which is contrary to fact.)

Bohm's theory solves the measurement problem unambiguously. It doesn't give two different dynamical rules. There is just one kind of dynamics, and everything (even the "stuff" that measurement apparatuses are made of) is treated on an equal footing. And the theory actually predicts that measurements have outcomes -- pointers on detectors are made of particles, and these always end up in some definite place (because they're always at some definite place).


(b) in a multiparticle system, God plays dice in configuration space. How can you have any hope of doing physics in this way?

Huh? The dynamics of Bohm's theory is completely deterministic. If there's dice playing, it's only at the initial conditions.

(c) Truly speaking, I cannot make sense of non-local guidance mechanisms; what does it mean that our ignorance influences the dynamics of particles?

Um, it doesn't. Methinks you don't really understand Bohm's theory very well if you think that, according to it, "our ignorance influences the dynamics of particles."
 
  • #172
** if you try to construct a non-vague theory by simply getting rid of the second kind of dynamics, then the theory no longer predicts that measurements have definite outcomes, which is contrary to fact.) **

Of course that is not true: you have to change the Schroedinger equation too. By the way, perhaps this is not an issue amongst Bohm lovers, but others might think differently.

** Bohm's theory solves the measurement problem unambiguously. It doesn't give two different dynamical rules. There is just one kind of dynamics, and everything (even the "stuff" that measurement apparatuses are made of) is treated on an equal footing. And the theory actually predicts that measurements have outcomes -- pointers on detectors are made of particles, and these always end up in some definite place (because they're always at some definite place). **

Huh?? The issue is that you simply don't KNOW where the particle is, although it is somewhere and following a definite trajectory. So you still have to indicate when it is that you ``perceive'' it at some spot and consequently generate a new wave packet to guide it. Moreover, Copenhagen also has only one DYNAMICAL rule; the projection postulate has nothing to do with dynamics. In classical physics, the act of perception (and the accuracy with which we achieve it) would not change anything in the way we describe the system dynamically; in BM this is not the case at all.


**
Huh? The dynamics of Bohm's theory is completely deterministic. If there's dice playing, it's only at the initial conditions. **

I think you don't understand that my comment is against giving a PHYSICAL interpretation to interactions which are irreducibly confined to configuration space (hello Newton; actually, it is even worse than that). If you take QM, as it stands now, simply to be an algorithm (and not a physical theory), then I do not mind so much that God plays dice in configuration space; but for a physical, deterministic (apart from the measurement act) theory, I certainly do. Moreover, a Bohmian theory of QFT is certainly going to be stochastic and not deterministic.

**
Um, it doesn't. Methinks you don't really understand Bohm's theory very well if you think that, according to it, "our ignorance influences the dynamics of particles." **

Methinks that you cannot read between the lines. Our ignorance is of course in the probability density of the wave function (we do not know where the particle is), and what makes it so strange is that this entity governs the dynamics of the particle through the quantum potential. Hence, the fact that this particle (or another particle in the same sample) *could* be somewhere else (in the future) is actually influencing the motion of the particle under consideration (now). If you don't find that strange, then I don't know what is strange to you.

Careful
 
  • #173
Careful said:
Methinks that you cannot read between the lines. Our ignorance is of course in the probability density of the wave function (we do not know where the particle is), and what makes it so strange is that this entity governs the dynamics of the particle through the quantum potential. Hence, the fact that this particle (or another particle in the same sample) *could* be somewhere else (in the future) is actually influencing the motion of the particle under consideration (now). If you don't find that strange, then I don't know what is strange to you.

People are so steeped in their axiomatic particles that they invent parallel universes and even time travel to explain them. They just won't see the solution: the quanta you are dealing with ARE NOT PARTICLES. A quantum is NOT all in one place. It is NOT a point. A gallon is not a particle. A coulomb is not a particle. A quantum IS NOT LOCAL.

But people don't listen, and instead we have a god damn crackpot theological "debate" going round in circles for fifty years pretending to be physics. Absolutely tragic.
 
  • #174
Careful said:
Basically, the use of the wave function in BM does not make sense to me vis-à-vis a one-particle scenario.

I agree with you, but you've already heard my cure for that, which is to add an arrow to time by splitting the wave and particle duality into future and past, respectively, with respect to the observer.

Locality is obeyed in the wave propagation, and locality is observed in the observation of the particles. Where it disappears is in the transformation of wave to particle. More precisely, I mean to say that if it were not for wave collapse, none of the odd behavior of QM would exist.

The arguments that influence travels faster than light all rest on the assumption that the wave and particle descriptions can simultaneously apply to the same event. That should be objected to for the same reason you object to the Bohmian one-particle idea. That is, what the heck are the parts of the wave that the particle's trajectory does not traverse there for?

Carl
 
  • #175
Farsight said:
People are so steeped in their axiomatic particles that they invent parallel universes and even time travel to explain them. They just won't see the solution: the quanta you are dealing with ARE NOT PARTICLES. A quantum is NOT all in one place. It is NOT a point. A gallon is not a particle. A coulomb is not a particle. A quantum IS NOT LOCAL.

But people don't listen, and instead we have a [edited for content] crackpot theological "debate" going round in circles for fifty years pretending to be physics. Absolutely tragic.
I agree with the sentiment -- but I think you are at least as guilty of this as Careful.
 
