What Is an Element of Reality?

  • Thread starter: JohnBarchak

Summary:
Laloe's exploration of "elements of reality" emphasizes the challenge of inferring microscopic properties from macroscopic observations, using a botanical analogy involving peas and flower colors. He argues that perfect correlations observed in experiments suggest intrinsic properties shared by particles, which cannot be influenced by external factors. The discussion highlights that these elements of reality must exist prior to measurement, as they determine outcomes regardless of experimental conditions. Critics challenge the analogy and the concept of hidden variables, questioning its validity and relevance to quantum mechanics. Ultimately, the debate centers on whether the existence of such elements can be scientifically substantiated.
  • #61
ttn said:
If the point is that, in a local theory, P(A|a,L) should not change when you specify the distant setting b, then shouldn't it also not change if you specify the distant outcome B?

I would like to add something here. Note that these discussions help me too in clearing up my ideas; I hope that's also the case for you! They bring up things I hadn't thought about before.

So the point you raise is an interesting one, and probably comes from the fact that I consider A and B as "coming out of the system" while a and b are inputs, somehow "arbitrarily determined by free will at A and B". So "a" and "b" are sources of information, while A and B are information receivers.
A is a local receiver at a, so if any statistic of A depended on b, I would have an information channel. But in order for A|B to be an information channel, I have to know both A and B, so in any case I have to solve another communication problem between A and B. At that point I should have no difficulty using the sources at a and b as well.

There is a difference in meaning between P(A|B ; a,b,...) and P(A ; B, a, b...)
The first one is a probability that is defined as P(A,B)/P(B), so it is a quantity derived from the correlation function P(A,B). The second one has no meaning, because B is an event, not a parameter describing the distribution, as a and b are.

P(A,B)/P(B) has the frequentist interpretation of "the relative frequency of the events A in the subsample where we had B". A priori, it is somehow clear to me that this can depend on all that has to do with A and with B, because in order to _measure_ this quantity I have to have a coincidence counter, wired up with A and with B.
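Just to make that frequentist reading concrete, here is a little sketch in Python (my own toy numbers, nothing to do with a real experiment): the conditional probability is literally a ratio of coincidence counts.

import random

# Toy coincidence counting: estimate P(A|B) = P(A,B)/P(B) from a record
# of joint outcomes.  The 0.8 "agreement" bias below is purely illustrative.
random.seed(0)
N = 100_000
n_B = 0      # trials with B = +1
n_AB = 0     # trials with A = +1 AND B = +1
for _ in range(N):
    B = random.choice([+1, -1])
    A = B if random.random() < 0.8 else -B
    if B == +1:
        n_B += 1
        if A == +1:
            n_AB += 1
print("P(A=+1 | B=+1) estimated as", n_AB / n_B)   # about 0.8

The point being that n_AB can only be tallied by a coincidence counter wired to both A and B.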

cheers,
Patrick.
 
  • #62
I don't quite agree; there are several known loopholes to Bell's theorem. Usually they are easily dismissed by contriving a counterexample that exploits local symmetry principles (isospin and things like that). However, it could be that those local symmetries are broken at fundamental levels (see Planckian regimes). 't Hooft and several string theorists (Vafa etc.) have exploited this in devising hidden variable theories that get by the usual objections. The former has to resort to information loss, the latter in general to quasi-local variables found in stringy physics.

The usual problem there is retrieving completely unbroken unitarity and managing to get a bounded Hamiltonian.

All those programs have amounted to more or less zero, as the dynamics of any such theory is atrociously complicated, but the idea or possibility is there.
 
  • #63
vanesch said:
Note that these discussions help me too clearing up my ideas, I hope it is also your case ! It brings up things I didn't think about before.
Me too!

One thing I've been thinking about is how much easier life would be if one simply accepted the two sides as being independent (once L is fixed) and treated the matter of their coincidence probability just as you would treat the problem of achieving two 6's, say, with a pair of dice. With dice, it surely would not occur to you to do the operation in two stages, using conditional probabilities? You'd simply multiply the two separate probabilities.

I've begun to work on an analogy based on this idea. I don't think the actual Bell test experiments can be modeled without making some allowance for the geometry -- the fact that we are dealing with angles, so that the addition of 2 pi to every "setting" makes no difference. Instead of a die we'd need one of those little hexagonal tops, with the sectors numbered consecutively. The "hidden variables" might be little weights attached to particular segments (the same for each -- it makes things easier and does not affect the final logic to assume the same rather than "opposite" or "orthogonal") and the "detector settings" could correspond to specified ranges of results. We could have, for example, (1 or 2) scoring + for A, while (2 or 3) scores + for B. If the little weight is fixed at 2, we can have a fully deterministic experiment if it is so heavy that the top always stops at 2 ... [to be continued]
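A first rough sketch of the single biased top in Python, just to fix ideas (the "heaviness" numbers are arbitrary placeholders, and the uniform fallback is cruder than a real top would be):

import random
from collections import Counter

# One hexagonal top with a weight at sector 2.  A very heavy weight makes
# the outcome deterministic; a lighter one merely biases it toward 2.
def spin(weight_pos=2, heaviness=1.0):
    if random.random() < heaviness:
        return weight_pos
    return random.randint(1, 6)          # crude: uniform over all sectors

random.seed(1)
for h in (1.0, 0.5):
    counts = Counter(spin(heaviness=h) for _ in range(60_000))
    print("heaviness", h, dict(sorted(counts.items())))

With heaviness 1.0 every spin gives 2; with 0.5 the distribution is merely peaked at 2.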

Cat
 
  • #64
vanesch said:
Well, in order to satisfy relativity, replace "humans" by "anything that can send out information" (because that's the relativity paradox you want to avoid: that you receive your own information before sending it ; on which you could base a decision to send out OTHER information, hence the paradox)

As I said before, there is something to be gained from analyzing "locality" in terms of information transfer. But it is also a dangerous game, mostly because "information" is a dangerously fuzzy, human-centered concept. Here is Bell's comment:

"Do we then have to fall back on 'no signalling faster than light' as the expression of the fundamental causal structure of contemporary theoretical physics? That is hard for me to accept. For one thing we have lost the idea that correlations can be explained, or at least this idea awaits reformulation. More importantly, the 'no signalling' notion rests on concepts which are desperately vague, or vaguely applicable. The assertion that 'we cannot signal faster than light' immediately provokes the question:
Who do we think *we* are?
*We* who can make 'measurements,' *we* who can manipulate 'external fields', *we* who can 'signal' at all, even if not faster than light? Do *we* include chemists, or only physicists, plants, or only animals, pocket calculators, or only mainframe computers?"

I'm sure you get the point. Bohmian mechanics is, yet again, a clarifying example here. Part of Bell's point, surely, is that what relativity really requires, if you are going to take it seriously, is *more* than a mere no-signalling condition. *That's* why people are unwilling to accept Bohm's theory as consistent with relativity, even though it too doesn't permit signalling -- the behind-the-scenes nonlocality is just too obvious. But then, exactly the same thing is true in orthodox QM. If you take the wf seriously as a complete description of reality, the collapse of the wf is just as nonlocal as anything in Bohm's theory. And both violate the cleanly-formulated "Bell Locality" test.



The problem with locality is that the definition is different according to whether you work with a stochastic theory or with a deterministic theory.
In a purely stochastic theory, the only definition we can have concerning locality is of course based upon information theory.
In that sense, QM, and of course Bohm's theory considered as a stochastic theory (which gives the same stochastic predictions), are local.

I still don't understand why you think there's an important distinction here. I thought of a perhaps clarifying example to discuss, though maybe you beat me to the punch with your comment about Bohm's theory "considered as a stochastic theory". But I don't understand exactly what you're getting at there, so I'll throw my example out and see what happens.

Consider Bohm's theory: Sch's equation plus a "guidance formula" specifying particle velocities in terms of the wf. Now add a small random noise term to the guidance formula -- on average, particles will still go where Bohm's theory says they should, only now they'll occasionally deviate by just a little bit. This noise is meant to be completely random (but Gaussian about zero and pretty narrow so it keeps deviations from well-tested QM predictions below the level at which they could be detected). Make sense?
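Schematically (this is just my shorthand for "Bohm plus a little noise", not any established model), the modified guidance law would read something like

\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left.\frac{\nabla_k \Psi}{\Psi}\right|_{(Q_1,\dots,Q_N,\,t)} \;+\; \eta_k(t),
\]

where \(\eta_k\) is a small zero-mean Gaussian noise term. The first term is just Bohm's usual guidance velocity, and it still depends on the instantaneous configuration of *all* the entangled particles; the noise only smears the trajectories slightly.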

The question is: for this modified Bohm theory, does anything really change in regard to its locality? The theory is now fundamentally stochastic instead of deterministic. Yet it seems to still blatantly violate our notions of local causality -- in particular, the particle velocities still depend on the simultaneous positions of other (entangled) particles. So the theory will still violate "Bell Locality" and I think anyone who looked at it would have no trouble seeing that it was (in pretty much any sense other than "signalling") quite blatantly nonlocal.

Do you agree that this would be an example of a stochastic theory to which the notion of Bell Locality is perfectly applicable?



Next, we can talk about the locality of mechanisms, whether or not they lead to a deterministic or stochastic theory ; and in the latter case, independent of whether the stochastic theory is local in the information theory sense.

Sure, you can talk about that. But when you come to QM, you'll end up playing the same semantic games as before, I suspect. QM has no underlying mechanism (I suspect you'll want to say), hence there is no nonlocality in its underlying mechanism.

But this is just trading on fuzziness over what is meant by "mechanism". Sure, QM lacks a clear detailed ontology that allows you to understand what's going on behind the scenes, i.e., you might say, it lacks a mechanism. But in another sense, QM is perfectly clear. It says: there is nothing going on behind the scenes; the wf is the whole story, a complete description of the state of a system at any moment. And when you make a measurement, the wf -- i.e., the state of the system -- suddenly and randomly jumps into an eigenstate of the operator measured.

My question is: why not just take QM at its word and accept *this* as its mechanism?? It is, after all, what QM says the mechanism is! I mean, it's a pretty strange and fuzzy and non-mechanical mechanism, but if that bothers you, you should reject the story on those grounds, not turn it into a point in QM's favor, a get-out-of-jail-free card.


For instance, the "collapse of the wavefunction" in QM is blatantly non local, because it affects the internal description at B when doing something at A.
However, the MWI approach gives us a local mechanism, in a very subtle way: you can only talk about a correlation when the events at A and B are in your past light cone, and you deny the individual existence of events at A and B until the moment when you can observe the correlations. At most you can observe one of the two.

Yes, according to your MWI, the only things that really exist are in your mind -- so in fact there aren't any spatially separated physical objects to interact nonlocally (or locally for that matter) in the first place. So, um, sure, I guess that counts as local.



The *hidden* variables are also subject to a non-local mechanism in Bohm's theory.

No doubt. As shown most cleanly by the fact that Bohm's theory violates Bell Locality. (See? Bell Locality really is a nice litmus test for whether a theory is "locally causal." Bohm's theory isn't.) But then, as you're probably all tired of hearing me say, orthodox QM violates Bell Locality too.




Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the machinery needed NOT to respect the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view, are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism

OK, I think we're in agreement here. Bohm's theory and orthodox QM both "conspire" in some sense -- there is a non-local mechanism which is somehow washed out by uncertainty or randomness to prevent that nonlocal mechanism from being used to transmit information.


Of course a stochastic theory without any underlying mechanism cannot be analysed for its underlying mechanism!

But you can still ask if such a theory violates Bell Locality.

Perhaps it's the word "underlying" that is causing (err, spontaneously and inexplicably correlating with?) trouble. In Bohm's theory, there is a pretty clear distinction of "levels" between the level of prediction and the "underlying" level of definite particle trajectories, etc. In QM, the level of prediction and the level of "exact and complete specification of the state of the world" are pretty much one and the same. But again, it's just cheap semantics to insist on a clean difference between two levels, in order to then dismiss Bell Locality as inapplicable to (say) QM on the grounds that it has no "underlying" levels. Bell Locality is stated/defined in terms of an "exact and complete specification of the state of the system" -- the thing we've been calling "L" that all the relevant probabilities are conditioned on. There is no requirement that that "L" be "underlying" or anything like that. So again, I would advocate just taking QM straight (e.g., letting the wf play the role of "L"), and taking Bell Locality straight. Don't twist words and make subtle distinctions that are not made in or required by these ideas.

Then you won't have to worry about distractions like deterministic vs. stochastic and "underlying".
 
  • #65
ttn said:
... Bell Locality is stated/defined in terms of an "exact and complete specification of the state of the system" -- the thing we've been calling "L" that all the relevant probabilities are conditioned on. There is no requirement that that "L" be "underlying" or anything like that. So again, I would advocate just taking QM straight (e.g., letting the wf play the role of "L"), and taking Bell Locality straight. Don't twist words and make subtle distinctions that are not made in or required by these ideas.
That won't quite work, though, since the wave function applies to an ensemble of particles and L applies to one particular one (or, in our case, two, both having arisen from the one source in state L).

Then you won't have to worry about distractions like deterministic vs. stochastic and "underlying".
To continue my analogy, this little hexagonal top can be made either fully deterministic (if the biasing weight is heavy) or stochastic, if it is lighter. Attaching the weight at the "2" position may, if it is heavy, cause the top always to come to rest with the 2 down, but if it is light then it will merely cause bias, the degree depending on the actual weight. There will be the highest probability of a 2, a lesser chance of a 1 or 3, and very little chance of the other scores.

And clearly if we have two such tops, coming from the same factory and with the same fault but spun independently, and define detector settings as I suggested, we have your "Bell locality" and can multiply the individual probabilities of success to get the joint probability.

If I persevere, I think I'll be able to demonstrate that the "coincidences" don't form a "fair sample" ... I'll have to define what I mean by '-' results, though, as well as '+':

If, as in the previous message, the values 1 or 2 count as '+' for A, then the opposite sectors (which will be 4 and 5, since we number them sequentially) I define as counting '-'. Under this scheme, if I've got it right, when B is set "parallel" to A (i.e. it also scores either + or - when the top lands with 1, 2, 4 or 5 down, but fails to score anything when it lands on 3 or 6), you get a lot more coincidences than when they are set one unit apart (effectively the only other option in this simple scheme). In the deterministic version, there would (I think) be twice as many coincidences in the parallel case as compared to other orientations.
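Here is the two-top version as a rough Python sketch (my own transcription of the scheme above; I let the shared weight position vary randomly from pair to pair, which I take to be the role of the hidden variable, and the windowing rule and the 0.7 bias for the "light weight" case are placeholders):

import random

# Two tops from the same factory share a weight position w (the hidden
# variable) but are spun independently.  A detector at "setting" s scores
# '+' if the top lands on sector s or s+1, '-' if it lands on s+3 or s+4
# (counting sectors mod 6), and registers nothing otherwise.
def land(w, heavy):
    if heavy or random.random() < 0.7:
        return w
    return random.randint(1, 6)

def score(sector, s):
    plus = {(s - 1) % 6 + 1, s % 6 + 1}
    minus = {(s + 2) % 6 + 1, (s + 3) % 6 + 1}
    if sector in plus:
        return +1
    if sector in minus:
        return -1
    return None                           # no detection

def coincidence_rate(s_A, s_B, heavy, N=100_000):
    hits = 0
    for _ in range(N):
        w = random.randint(1, 6)          # shared hidden variable
        if score(land(w, heavy), s_A) is not None and \
           score(land(w, heavy), s_B) is not None:
            hits += 1
    return hits / N

random.seed(2)
print("parallel settings:", coincidence_rate(1, 1, heavy=True))   # about 2/3
print("one unit apart   :", coincidence_rate(1, 2, heavy=True))   # about 1/3

In the deterministic (heavy) case this sketch does give roughly twice as many coincidences for parallel settings as for settings one unit apart.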

Perhaps I'm getting carried away, though!

Questions:

(a) Is the variation of coincidence probability in itself sufficient to show that we have not got a fair sample?

(b) Is this a convincing analogy for a Bell setup?

(c) Can we squeeze a "Bell inequality" out of it that can be compared with any QM prediction?

Ah well, probably not, so the exercise was a waste of time from that point of view. I think it might be helpful, though, for illustrating stochastic v deterministic models and for helping us to escape from the use of conditional probabilities.

Cat
 
  • #66
Cat said:
That won't quite work, though, since the wave function applies to an ensemble of particles and L applies to one particular one (or, in our case, two, both having arisen from the one source in state L).

Well, it's true that identifying Bell's "L" with the QM wf requires the assumption that the QM wf is a complete description of the relevant part of the world. So when I say things like "QM violates Bell Locality" what I mean is "QM, so long as one accepts Bohr's completeness doctrine and hence regards the wf as a complete description of a system, violates Bell Locality."

If, on the other hand, one wishes to reject the completeness assumption and regard psi as merely an average or collective description of an ensemble of similar but not identical systems, then you're right, this identification doesn't work. Two things follow. First, one needs a *different* (and in fact much less trivial) argument to show that a hidden variable theory (i.e., the kind of theory one is led to when one rejects completeness) must also violate Bell Locality; this argument is of course Bell's theorem. Second, EPR were exactly correct. They didn't prove that QM was incomplete, and they didn't prove that it violated locality; but they did prove it was *either* nonlocal or incomplete.



(b) Is this a convincing analogy for a Bell setup?

(c) Can we squeeze a "Bell inequality" out of it that can be compared with any QM prediction?

I don't think so. The results of two dice rolls will always be statistically independent unless there is some "mechanism" by which the result of one roll can affect the result of the other. Merely making one or the other "biased" in some way isn't at all the same as "linking" them. So, as long as they are independent, you will never find that the correlations violate a Bell inequality.
 
  • #67
ttn said:
I still don't understand why you think there's an important distinction here. I thought of a perhaps clarifying example to discuss, though maybe you beat me to the punch with your comment about Bohm's theory "considered as a stochastical theory". But I don't understand exactly what you're getting at there, so I'll throw my example out and see what happens.

A quick reaction (I don't have much time right now): I may have expressed myself badly, leading you to misunderstand what I was trying to say.

When I say "Bohm's theory considered as a stochastic theory" I mean Bohm's theory, as a black box, out of which come probabilities for observation P(A), P(A,B) etc... I didn't mean "turn Bohm into a stochastic theory". It IS, at the end of the day, exactly the same stochastic theory as quantum theory (also seen as a black box out of which come probabilities P(A), P(A,B)...) or so I understood, if it is 100% equivalent.

Any qualifier based upon the probabilities must then be of course exactly the same for both theories. For example I call them "relativistically local" and you call them Bell-non-local. We agree upon that point.

To me, a stochastic theory is a black box out of which come prescriptions for calculating probabilities of events. Nothing is said about how these probabilities come about.

So, given the "description of the experiment", we have the function:
P(A,B ; a, b)
(out of which all other probabilities can be derived).
Note that we cannot include an explicit "state description" in these probabilities, because it is inside the black box. The only thing we can specify is the "description of the experiment": a laser beam here, a PDC crystal there, etc... You already see a difficulty in specifying "Bell Locality" here without "opening the black box", but I have no problem defining my "relativity locality".

We can now open the black box and look at the formalism that gives us these probabilities. If somehow it is assumed that parts of the formalism correspond to a physical reality, then we have a MECHANISM.
It can also be that the formalism does not correspond to something describing a physical reality. In that case the black box remains black. Some people see QM as such. There's nothing to be said against it (except that it is a bit disappointing for a physical theory).

A deterministic theory gives us an underlying mechanism such that, if we were to know all the internal degrees of freedom, only probabilities 1 and 0 would come out.
There are different ways to take shortcuts here: we can use these internal degrees of freedom to specify non-trivial probability distributions, and we can "hide" internal degrees of freedom. If we hide internal degrees of freedom, then we can always ADD others to generate the non-trivial probability distributions. So I do not see the point of making non-deterministic hidden-variable theories, because they are always equivalent to another, deterministic one.
However, there are good reasons to have non-hidden-variable stochastic theories with a mechanism. In fact our big black box then becomes a structure containing "smaller black boxes" which are by themselves generators of probabilities without any underlying mechanism. Quantum theory, with a physical interpretation of the wave function, is such a case.

The locality or non-locality of a mechanism is harder to define in all generality because of the variety of mechanisms. But if something "happening here" does something to the physical description "over there" then it is non-local.
As you point out, the collapse of the wave function in Copenhagen QM is non-local if you attach a physical reality to the wavefunction. Also the HV in Bohm is non-local if you attach a physical reality to the HV (and honestly, what's the point of introducing HV if you do not attach a physical reality to them ?)

cheers,
patrick.
 
  • #68
vanesch said:
When I say "Bohm's theory considered as a stochastic theory" I mean Bohm's theory, as a black box, out of which come probabilities for observation P(A), P(A,B) etc... I didn't mean "turn Bohm into a stochastic theory". It IS, at the end of the day, exactly the same stochastic theory as quantum theory (also seen as a black box out of which come probabilities P(A), P(A,B)...) or so I understood, if it is 100% equivalent.

Any qualifier based upon the probabilities must then be of course exactly the same for both theories. For example I call them "relativistically local" and you call them Bell-non-local. We agree upon that point.

Yup.


To me, a stochastic theory is a black box out of which come prescriptions for calculating probabilities of events. Nothing is said about how these probabilities come about.

So, given the "description of the experiment", we have the function:
P(A,B ; a, b)
(out of which all other probabilities can be derived).
Note that we cannot include an explicit "state description" in these probabilities, because it is inside the black box. The only thing we can specify is the "description of the experiment": a laser beam here, a PDC crystal there, etc... You already see a difficulty in specifying "Bell Locality" here without "opening the black box", but I have no problem defining my "relativity locality".

I don't think this is right. You say you treat the theory as a black box and that there's really no way to include an explicit 'state description' in the probabilities. But surely you know that you must do that, in QM, in order for the probabilities to be defined. You don't/can't just calculate "P(A,B|a,b)" in QM -- rather, you calculate P(A,B|a,b,psi). If nobody tells you what state the system is prepared in, there is no way to predict using QM what the probabilities of various measurement outcomes are. QM may be a black box, but it isn't as much of one as you imply here. It *does* contain "state descriptions" and these play an absolutely essential role in its ability to predict (probabilities for) outcomes of experiments.
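Just to make that concrete with a quick sketch (using the standard textbook singlet-state probabilities, nothing specific to our discussion): once psi is specified, P(A,B|a,b,psi) is a definite function, and it is this function that ends up breaking the CHSH bound of 2.

from math import cos, pi, sqrt

# Textbook joint probabilities for the spin-1/2 singlet state:
# P(A, B | a, b, psi) = (1 - A*B*cos(a - b)) / 4, outcomes A, B = +/-1,
# analyzer angles a, b.  Without psi there would be nothing to compute.
def P(A, B, a, b):
    return (1 - A * B * cos(a - b)) / 4

def E(a, b):
    # correlation E(a,b) = sum over outcomes of A*B*P(A,B|a,b,psi)
    return sum(A * B * P(A, B, a, b) for A in (+1, -1) for B in (+1, -1))

a, a2, b, b2 = 0.0, pi / 2, pi / 4, -pi / 4
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(S, "vs", -2 * sqrt(2))    # both about -2.83, past the bound of 2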


We can now open the black box and look at the formalism that gives us these probabilities. If somehow it is assumed that parts of the formalism correspond to a physical reality, then we have a MECHANISM.
It can also be that the formalism does not correspond to something describing a physical reality. In that case the black box remains black. Some people see QM as such. There's nothing to be said against it (except that it is a bit disappointing for a physical theory).

I thought what I said against it before was pretty good. :smile:

EPR asked: Can the quantum-mechanical description of reality be considered complete? They said no, Bohr said yes. I don't think there was any debate about whether quantum state descriptions refer to something in reality (though nowadays one can find people arguing for any nonsense, even this). What does the completeness doctrine even *mean*, if it isn't that the wave function alone provides a complete description of reality?

So in the sense you are talking about "mechanisms" in the above paragraph, QM has just as much mechanism as Bohm's theory. They both claim to provide a complete picture of what is real at any given moment. And on that basis they have some rule for calculating probabilities of various things.

So again I see no fundamental difference. Both the mechanisms violate Bell Locality, yet this underlying nonlocal causality is washed out by uncertainty (in the case of Bohm) and irreducible indeterminism (in the case of QM) at the level of measurement results, thus preventing its being used for superluminal telephones.


There are different ways to take shortcuts here: we can use these internal degrees of freedom to specify non-trivial probability distributions, and we can "hide" internal degrees of freedom. If we hide internal degrees of freedom, then we can always ADD others to generate the non-trivial probability distributions. So I do not see the point of making non-deterministic hidden-variable theories, because they are always equivalent to another, deterministic one.

Maybe you're right; I'm not sure. But your point is only that it would be silly to construct a stochastic hv theory, not that it is really impossible in principle. But I wasn't seriously advocating that one ought to construct such a theory; I was just pointing out that it was possible to build one, and that the mere addition of randomness in the theory doesn't in any way preclude one from identifying the resulting theory as nonlocal.


However, there are good reasons to have non-hidden-variable stochastic theories with a mechanism. In fact our big black box then becomes a structure containing "smaller black boxes" which are by themselves generators of probabilities without any underlying mechanism. Quantum theory, with a physical interpretation of the wave function, is such a case.

Is there a quantum theory without a physical interpretation of the wf? I know people (e.g., the Fuchs and Peres "opinion" article that appeared in Physics Today a few years ago) talk about the wf as purely/merely epistemological, but this is blatantly in contradiction with the completeness doctrine (that such people also tend to advocate), isn't it?

The locality or non-locality of a mechanism is harder to define in all generality because of the variety of mechanisms. But if something "happening here" does something to the physical description "over there" then it is non-local.
As you point out, the collapse of the wave function in Copenhagen QM is non-local if you attach a physical reality to the wavefunction.

Precisely. And if you don't "attach a physical reality to the wf" -- i.e., if you think the wf represents mere knowledge of some state that is, in physical reality, perfectly definite -- then you have abandoned completeness. And that means you believe in a hidden variable theory instead of QM. And that means (because of Bell's theorem) that you haven't successfully gotten around quantum nonlocality! ...which is really the point I want to stress: the choice between orthodox QM and (say) Bohmian mechanics is a choice between two equally-nonlocal theories. The nonlocality cannot be escaped, and is hence no reason to support QM as against Bohm.


Also the HV in Bohm is non-local if you attach a physical reality to the HV (and honestly, what's the point of introducing HV if you do not attach a physical reality to them ?)

I certainly can't think of any!
 
  • #69
ttn said:
I don't think this is right. You say you treat the theory as a black box and that there's really no way to include an explicit 'state description' in the probabilities. But surely you know that you must do that, in QM, in order for the probabilities to be defined. You don't/can't just calculate "P(A,B|a,b)" in QM -- rather, you calculate P(A,B|a,b,psi). If nobody tells you what state the system is prepared in, there is no way to predict using QM what the probabilities of various measurement outcomes are. QM may be a black box, but it isn't as much of one as you imply here. It *does* contain "state descriptions" and these play an absolutely essential role in its ability to predict (probabilities for) outcomes of experiments.

Ah, I think the real issue here is the term "completeness", and not "locality". I have to say I don't know what it means, except "a potentially deterministic underlying mechanism".
Because what stops me from regarding the precise description of the experiment as "complete"? A laser here, a PDC there, etc... In "complete" I include everything I'm potentially allowed to know, but I don't include things that I cannot, in principle, know, such as hidden variables. You could write it down in 20 pages of text, but the quantum-mechanical wavefunction does exactly that: it is the unique state of which I'm supposed to know everything I can know (a complete set of commuting observables determines it).

If out of such a description there still comes a series of probabilities different from 0 or 1, I call such a theory fundamentally stochastic, because there is no way, in principle, to reduce the randomness here. But *this* seems to be what one objects to when one requires "completeness".

EPR asked: Can the quantum-mechanical description of reality be considered complete? They said no, Bohr said yes. I don't think there was any debate about whether quantum state descriptions refer to something in reality (though nowadays one can find people arguing for any nonsense, even this). What does the completeness doctrine even *mean*, if it isn't that the wave function alone provides a complete description of reality?

Yes, I agree that the wavefunction is supposed to give a complete description of reality in QM -- as would those 20 pages of text describing the experimental setup in detail. The wavefunction is the translation, into the mathematical formalism, of those 20 pages.

You are free to say that that 20 page text is "an element of reality". Personally, I also think that there must be something "real" to it (and hence want to tell a story = interpretation), but many people just see it as a "generator of statistics". In that view, I don't know how you apply Bell locality for example, because obviously:

P(A,B| a, b, 20 pages) is not equal to P(A|a, 20 pages) x P(B|b, 20 pages)

Indeed, that wouldn't even allow you to have classical correlations! Nevertheless those 20 pages are a full, complete description of what we are supposed to know about the experiment.
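A mundane classical example of what I mean (my own toy, nothing quantum about it): a factory paints the two cards of each pair the same color, chosen at random.

import random

# Given only the "20 pages" level of description ("a pair from this
# factory"), the joint probability does not factorize, although nothing
# non-local is going on: the correlation comes from the shared paint job.
random.seed(3)
N = 100_000
both_red = a_red = b_red = 0
for _ in range(N):
    color = random.choice(["red", "black"])    # the common cause
    A = B = color                              # one card to each observer
    both_red += (A == "red" and B == "red")
    a_red += (A == "red")
    b_red += (B == "red")
print("P(A=red, B=red)     ~", both_red / N)              # about 0.5
print("P(A=red) x P(B=red) ~", (a_red / N) * (b_red / N)) # about 0.25

Of course, here everyone agrees there IS a finer description (the paint color) conditional on which the probabilities do factorize; the whole question is what to say when one insists that no such finer description exists.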

So in the sense you are talking about "mechanisms" in the above paragraph, QM has just as much mechanism as Bohm's theory. They both claim to provide a complete picture of what is real at any given moment. And on that basis they have some rule for calculating probabilities of various things.

So again I see no fundamental difference. Both the mechanisms violate Bell Locality, yet this underlying nonlocal causality is washed out by uncertainty (in the case of Bohm) and irreducible indeterminism (in the case of QM) at the level of measurement results, thus preventing its being used for superluminal telephones.

Well, what I wanted to show, in an MWI story that goes with QM, is that there is no underlying nonlocal causal mechanism. There is maybe a kind of "holistic description" (such as the wavefunction of the universe), but it is the OBSERVER who, on each of his observations, has to make a choice between branches (and hence introduces the apparent randomness in his observations). As the observer is essentially "local" to himself, there is no way for him to influence anything remotely.

If he travels from A to B, then at first he only knows about A, so he determines a probability P(A) at that moment and "registers" the entanglement branch which he chose; but B is "still in the air", in that the measurement apparatus at B has just become entangled with B and is in both of the possible states it can be in. It is only when that event B gets into the past light cone of the observer that he has a chance of reading the apparatus, meaning looking at THAT branch of the apparatus which corresponds to his registered branch at A. Now either the apparatus is in a pointer state (which means that we had equal settings a and b), or it is still in a superposition within that branch, upon which he again makes a choice and registers a second branching.

It is important to notice that nothing "happened" to the apparatus, or to B, in all these cases. It is just the *observer* who made choices. And when you look at it this way, you're NOT tempted to make FTL phones. Maybe you also see why I insist that P(A,B) shouldn't be constrained to be factorisable: indeed, at the moment when P(A,B) makes sense, namely when the observer has to make his choice for the result at B, he already has everything about A in his pocket, and now about B as well.

Maybe you're right; I'm not sure. But your point is only that it would be silly to construct a stochastic hv theory, not that it is really impossible in principle. But I wasn't seriously advocating that one ought to construct such a theory; I was just pointing out that it was possible to build one, and that the mere addition of randomness in the theory doesn't in any way preclude one from identifying the resulting theory as nonlocal.

As I said, a stochastic theory CAN have structure, and then you can analyse that structure for locality. But you can, if you wish, just see it as a generator of statistics too.
I don't see the point, however, in going and postulating hidden variables (that by itself is ugly, no?) while keeping randomness. The original reason for introducing hidden variables was, I thought, to _explain_ randomness.
But of course you're free to do so.

Is there a quantum theory without a physical interpretation of the wf? I know people (e.g., the Fuchs and Peres "opinion" article that appeared in Physics Today a few years ago) talk about the wf as purely/merely epistemological, but this is blatantly in contradiction with the completeness doctrine (that such people also tend to advocate), isn't it?

Precisely. And if you don't "attach a physical reality to the wf" -- i.e., if you think the wf represents mere knowledge of some state that is, in physical reality, perfectly definite -- then you have abandoned completeness. And that means you believe in a hidden variable theory instead of QM. And that means (because of Bell's theorem) that you haven't successfully gotten around quantum nonlocality! ...which is really the point I want to stress: the choice between orthodox QM and (say) Bohmian mechanics is a choice between two equally-nonlocal theories. The nonlocality cannot be escaped, and is hence no reason to support QM as against Bohm.

Ah, this "completeness" looks more and more to be a "realist" condition.
And yes, QM in a MWI like setting is not very "realist" in that observations are not determining the external world, but the state of the observer in relationship to the external world (which is vastly more complex: we have ONE TERM in the wavefunction given by our observations, while they all "exist", whatever that may mean).

So it seems that the vague term (to me) is not locality but "completeness"...
I would naively think that a theory is "complete" if we can get out of it, as predictions (if it is stochastic: in the information-theoretic sense), the maximum that we are fundamentally allowed to get out, so that you cannot do any better.
In that sense, I don't know how "completeness" of QM has anything to do with whether we consider the wavefunction as real. And Bohm and QM are then of course equally complete, because they give us, as black boxes, the same probability functions of the parameters we're allowed to choose freely, namely P(A,B ; a,b).
In Einstein's view, of course, there couldn't be any stochastic theory, so a complete theory, to him, had to mean a deterministic theory (and yes, then all probabilities are 0 or 1 and hence you get more information out; but you then have the problem that the hidden variables cannot stay hidden forever).

But apparently completeness now means something totally different, so can you enlighten me?

cheers,
Patrick.
 
  • #70
vanesch said:
P(A,B| a, b, 20 pages) is not equal to P(A|a, 20 pages) x P(B|b, 20 pages)

Indeed, that wouldn't even allow you to have classical correlations! Nevertheless those 20 pages are a full, complete description of what we are supposed to know about the experiment.

I would like to elaborate a bit more on this. I'll try to give an example illustrating what is so different between a stochastic theory and a deterministic one. It hasn't got anything to do with EPR or QM, but I would like to "attack Bell locality".

Imagine that I have a system which sends out two little balls, one to each detector; upon emission they are blue, but due to an inherent reaction inside, they turn red or they turn black. Imagine now that it is IN PRINCIPLE impossible to know the details of this inherent reaction. You have to accept such a possibility in the framework of a stochastic theory; however, in a deterministic theory you can object: indeed, something inside must "know" if the ball turns black or red.

But assuming that this is purely stochastic, this "inside" is not part of a complete description, because you have no access to it. The complete description is simply that out of the experiment come two blue balls. There is nothing more I can say. Those balls have been analysed in all possible ways, they turn out to be identical. There is no measurement I can perform to show me which ball will turn out to become black, and which one will become red.

Imagine now that my theory predicts that of the two blue balls generated, one always turns red and the other black. We don't know why, and it is simply in principle impossible to know why, but it is so. A basic axiom of my theory is that a generator of 2 blue balls always has one that turns black and one that turns red.
Then P(A,B ; 2 blue balls) has the following values (A = black or red, B = black or red):

P(black,black ; 2 blue balls) = 0
P(red, red ; 2 blue balls) = 0
P(black, red ; 2 blue balls) = 0.5
P(red, black ; 2 blue balls) = 0.5

P(A = black ; 2 blue balls) = P(A = red ; 2 blue balls) = P(B = black ; 2 blue balls) = P(B = red ; 2 blue balls) = 0.5

Clearly "2 blue balls" is a complete description of the setup in that I cannot know more.
Clearly, P(A,B) is not equal to P(A) x P(B)

And I didn't introduce any non-local mechanism !

There is no issue about relativistic locality, because there wasn't even a free choice that could send information !

So where does Bell locality indicate non-locality?

Aren't you tempted to say "there must be an underlying (deterministic?) mechanism that should show me which ball will turn red"? But if no such mechanism is postulated, how do we conclude anything about non-locality?
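To make the bookkeeping explicit, here is the example as a few lines of Python (the "destiny" variable at the end is exactly the kind of underlying mechanism that my purely stochastic theory refuses to postulate):

# Postulated joint probabilities given only the "complete" description
# "2 blue balls":
P = {("black", "black"): 0.0, ("red", "red"): 0.0,
     ("black", "red"): 0.5, ("red", "black"): 0.5}
colors = ("black", "red")
P_A = {c: sum(P[(c, y)] for y in colors) for c in colors}
P_B = {c: sum(P[(x, c)] for x in colors) for c in colors}
for A in colors:
    for B in colors:
        print(A, B, ":", P[(A, B)], "vs product", P_A[A] * P_B[B])
# e.g. P(black, red) = 0.5 while P_A(black) * P_B(red) = 0.25, so the
# joint probability conditioned on "2 blue balls" does not factorize.
# If instead each pair carried a hidden variable L = "ball A is fated to
# turn black" or "ball A is fated to turn red" (probability 1/2 each),
# then conditional on L every probability is 0 or 1 and
# P(A,B|L) = P(A|L) x P(B|L) would hold trivially.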

cheers,
Patrick.
 
  • #71
ttn said:
EPR were exactly correct. They didn't prove that QM was incomplete, and they didn't prove that it violated locality; but they did prove it was *either* nonlocal or incomplete.
Agreed.

Re two dice (or, in my analogy, two spinning hexagons) being a reasonable analogy to illustrate entanglement, you say:
I don't think so. The results of two dice rolls will always be statistically independent unless there is some "mechanism" by which the result of one roll can affect the result of the other. Merely making one or the other "biased" in some way isn't at all the same as "linking" them. So, as long as they are independent, you will never find that the correlations violate a Bell inequality.
True, they will never violate a "genuine" Bell inequality, but I suspect that the fact that there are some "non-detections" means that they will violate the equivalent of the CHSH inequality, i.e. one in which the estimated test statistic is related to the detected pairs, not to the emitted ones.

When I have time, I'll work on this. Meantime I've been having fun trying to produce a local realist model that will predict the outcome of one of the latest proposed "loophole-free" experiments -- that by Grangier's team, using PDC sources with "event-ready detectors" and balanced homodyne detection. Here, because, even without the event-ready detectors, we shall have (I think) some kind of record for every single emitted pair (i.e. no non-detections), I predict that the CHSH inequality will not be violated.

Cat
 
  • #72
Let us go for the strange world of Balls.

Imagine the following situation: in the world of Balls, we have a theory describing a curious experiment: a generator of pairs of blue balls sends one ball to a left observer, Alice, and another ball to the right observer, Bob.

It has been empirically verified that blue balls turn into red or blue objects, piramids or cubes, smooth or hairy. However, it is only possible to observe one property: if you look at the color (red or blue), they become slimy balls; if you look at the shape, they become blue, slimy shapes; and if you look at the surface quality, they become blue balls.
It has also been empirically verified that if we measure the same property of both balls coming out of the pair producer, they are always opposite.

For decades, people have tried to analyse these pairs of balls, but nothing seems to distinguish them until they change (after about half an hour or so) and we can do a measurement on them. So we've come to the conclusion that "pair of blue balls" completely describes the physical situation.
Even in a zargon-ray analysis, they give exactly the same diffraction patterns.

We have for years measured empirically the following probabilities for measurements on pairs of blue balls, and this has led to the Stochastic Theory of Blue Ball Pairs, which (in Mathematica notation) takes as its fundamental postulate:

p[{hair, smooth}] = 1/2
p[{hair, hair}] = 0
p[{smooth, smooth}] = 0
p[{red, blue}] = 1/2
p[{red, red}] = 0
p[{blue, blue}] = 0
p[{piramid, cube}] = 1/2
p[{piramid, piramid}] = 0
p[{cube, cube}] = 0

p[{hair, blue}] = 1/2
p[{hair, red}] = 0
p[{smooth, blue}] = 0
p[{smooth, red}] = 1/2
p[{piramid, blue}] = 0
p[{piramid, red}] = 1/2
p[{cube, blue}] = 1/2
p[{cube, red}] = 0
p[{hair, cube}] = 1/2
p[{hair, piramid}] = 0
p[{smooth, cube}] = 0
p[{smooth, piramid}] = 1/2
p[{a_, b_}] := p[{b, a}]

the last equation indicating that the probabilities are symmetric.

It is interesting to note that from these 2-point correlations we can deduce Alice's local probabilities. On a color measurement:
blue has probability 1/2
red has probability 1/2

on a shape measurement:
cubes have probability 1/2
piramids have probability 1/2

on a surface aspect measurement:
hair has probability 1/2
smooth has probability 1/2

and all this independent of the choice of measurement Bob will make.

So Bob cannot use his choice of measurement to send a message to Alice.

Is my stochastic theory local or not?
In this theory, is P(B|A) equal to P(B)?

Now compare it to the following theory, the theory of Blue Bells:

p[{hair, smooth}] = 1/2
p[{hair, hair}] = 0
p[{smooth, smooth}] = 0
p[{red, blue}] = 1/2
p[{red, red}] = 0
p[{blue, blue}] = 0
p[{piramid, cube}] = 1/2
p[{piramid, piramid}] = 0
p[{cube, cube}] = 0

p[{hair, blue}] = 0
p[{hair, red}] = 1/2
p[{smooth, blue}] = 1/2
p[{smooth, red}] = 0
p[{piramid, blue}] = 0
p[{piramid, red}] = 1/2
p[{cube, blue}] = 1/2
p[{cube, red}] = 0
p[{hair, cube}] = 1/2
p[{hair, piramid}] = 0
p[{smooth, cube}] = 0
p[{smooth, piramid}] = 1/2
p[{a_, b_}] := p[{b, a}]

Same questions...
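For anyone who wants to check the bookkeeping of the two tables, here is my own transcription as a small Python script ("color", "shape" and "surface" are just my labels for the three possible measurement choices):

from itertools import product

# The unordered outcome pairs that get probability 1/2 in each theory;
# everything else is 0, and the symmetry p[{a,b}] = p[{b,a}] is built in.
half_balls = {("hair", "smooth"), ("red", "blue"), ("piramid", "cube"),
              ("hair", "blue"), ("smooth", "red"), ("piramid", "red"),
              ("cube", "blue"), ("hair", "cube"), ("smooth", "piramid")}
half_bells = {("hair", "smooth"), ("red", "blue"), ("piramid", "cube"),
              ("hair", "red"), ("smooth", "blue"), ("piramid", "red"),
              ("cube", "blue"), ("hair", "cube"), ("smooth", "piramid")}
outcomes = {"color": ("red", "blue"),
            "shape": ("piramid", "cube"),
            "surface": ("hair", "smooth")}

def p(half, x, y):
    return 0.5 if (x, y) in half or (y, x) in half else 0.0

for name, half in (("Blue Ball Pairs", half_balls), ("Blue Bells", half_bells)):
    for m_alice, m_bob in product(outcomes, repeat=2):
        for x in outcomes[m_alice]:
            # Alice's single-side probability for outcome x, obtained by
            # summing over Bob's possible outcomes:
            assert sum(p(half, x, y) for y in outcomes[m_bob]) == 0.5
    print(name, ": every single-side probability is 1/2,"
                " whatever Bob chooses to measure")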


cheers,
Patrick.
 
  • #73
Cat said:
When I have time, I'll work on this. Meantime I've been having fun trying to produce a local realist model that will predict the outcome of one of the latest proposed "loophole-free" experiments -- that by Grangier's team, using PDC sources with "event-ready detectors" and balanced homodyne detection. Here, because, even without the event-ready detectors, we shall have (I think) some kind of record for every single emitted pair (i.e. no non-detections), I predict that the CHSH inequality will not be violated.

I guess you are referring to this paper?
http://arxiv.org/abs/quant-ph/0403191

I also found a follow-up paper from a later date here:
http://arxiv.org/abs/quant-ph//0407181

Similar related papers from H. Nha and H. J. Carmichael:
http://arxiv.org/abs/quant-ph/0406101
http://arxiv.org/abs/quant-ph/0406102



If you are objecting to the Clauser, Horne, Shimony, Holt inequality, is it because the derivation of the ≤ 2 bound assumes that the value Eb is the same in both Eab and Ea'b (see below), while in fact these are generally selected subsets (~3%) after coincidence detection?


|Eab + Ea'b + Eab' - Ea'b'| ≤ 2

|(Ea + Ea')Eb + (Ea - Ea')Eb' | ≤ 2

(for individual measurements with outcome +1 or -1, either (Ea + Ea') = 0 or (Ea - Ea') = 0, resulting in a maximum value of 2)
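A brute-force check of that last step (just enumerating the 16 possible assignments of ±1 outcomes):

from itertools import product

# For individual outcomes Ea, Ea', Eb, Eb' all equal to +1 or -1, the
# CHSH combination can never exceed 2 in absolute value.
worst = max(abs(Ea * Eb + Ea2 * Eb + Ea * Eb2 - Ea2 * Eb2)
            for Ea, Ea2, Eb, Eb2 in product((+1, -1), repeat=4))
print(worst)    # prints 2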


Regards, Hans
 
  • #74
vanesch said:
It is interesting to note that from these 2-point correlations we can deduce Alice's local probabilities [...]
and all this independent of the choice of measurement Bob will make.

That's potentially misleading. The marginals (the probabilities for Alice, obtained by summing over the possible outcomes for Bob weighted by the appropriate probabilities) for Alice to measure red/blue are indeed 50/50. But the conditional probability for Alice to measure red is *not* independent of the color of Bob's ball. ...e.g., the probability that Alice will find a blue ball when Bob's has already turned red, is 100%.

So, since you went out of your way to claim that there is no behind-the-scenes, local mechanism which can account for the correlations, i.e., that the description "two blue balls" is *complete*, there is a violation of Bell Locality here.

So Bob cannot use his choice of measurement to send a message to Alice.

That's right. This example shows a violation of Bell Locality, but one that is washed out by randomness and so cannot be used to transmit information. Just like QM. Just like Bohm. :smile:


Is my stochastic theory local or not ?

Depends on what you mean. It's not Bell Local, but it is "information local".


In this theory, is P(B|A) equal to P(B)?

No, definitely not. 100% ≠ 50%.
 
  • #75
vanesch said:
Clearly "2 blue balls" is a complete description of the setup in that I cannot know more.

That is not clear at all. "Completeness" is not a statement merely about what can be known. Completeness is a shorthand for something like "complete description of reality." Einstein talked about it as requiring a one-to-one correspondence between physical states and state-descriptions in some theory. EPR of course urged that every "element of reality" must have a counterpart in the theoretical description. etc.

It is admittedly difficult if not impossible to know whether a given state description represents a complete description. Personally I think Bohr was off his rocker for making this kind of claim in the first place -- what in the world could have counted as evidence for it? The mere fact that the Heisenberg principle seems to prevent us from obtaining *knowledge* of certain things? That of course proves nothing. The little switch in the door prevents me from knowing whether the light in the refrigerator really goes off when I shut the door -- but that doesn't mean I stop believing that, in fact, the light is either on or off. In that case, there are obviously more facts out there in the external world than I can know about directly, so my description "I think there's about a 99% chance that the light does go out when I shut the door" is an admittedly incomplete one.

In the QM case, we can't take anything for granted. It is by no means "obvious" there that there are further facts of reality beyond what is contained in or described by the wave function. But that is why EPR-like arguments are so clever. They allow you to say something, not about the completeness alone, but about the relationship between completeness and locality. EPR showed that, if you hold fast to the locality principle, there must exist "elements of reality" for more quantities than are consistent with the uncertainty principle; hence QM, if local, is incomplete. I think Einstein's argument is even better: he argues that (a) you must be willing to inject the wave function collapse rule into the dynamics in order to get the right correlations and so (b) there is *not* a one-to-one correspondence between physical states and theoretical descriptions since when you collapse the wf for a distant system by making a measurement "here", the wf for that distant system changes in a situation where (by locality) its physical state can not have changed. That ruins any claim of one-to-one correspondence. I also like the Bell-Locality-based argument for this same conclusion: if you assume that the wf alone does provide a complete description of the system described, it is trivial to note that Bell Locality is violated.

Anyway, my point is just to reject in the strongest possible terms the idea that what "completeness" means is somehow purely epistemological, e.g., that it means we've learned all we can or have said all we can say. Completeness involves a comparison between knowledge and the facts, not just a comparison of knowledge to itself.

Of course, many people have tried to define completeness in a purely epistemological way, i.e., while dropping the assumption of realism. This (as with the attempt to define "locality" outside the context of realism) is literal nonsense. Tim Maudlin makes this point (about locality) very nicely in an article called "Space-time in the quantum world": "Physicists have been tremendously resistant to any claims of non-locality, mostly on the assumption (which is not a theorem) that non-locality is inconsistent with Relativity. The calculus seems to be that one ought to be willing to pay *any* price -- even the renunciation of pretensions to accurately describe the world -- to preserve the theory of Relativity. But the only possible view that would make sense of this obsessive attachment to Relativity is a thoroughly realistic one! These physicists seem to be so certain that Relativity is the last word in space-time structure that they are willing even to forego any coherent account of the entities that inhabit space-time." I believe parallel remarks apply as well to the concept of "completeness". Defenders of orthodox QM have been extremely resistant to any claims that QM might be incomplete... yet the only view that would make sense of this obsessive attachment to "completeness" is a thoroughly realistic one.
 
  • #76
ttn said:
...e.g., the probability that Alice will find a blue ball when Bob's has already turned red, is 100%.

So, since you went out of your way to claim that there is no behind-the-scenes, local mechanism which can account for the correlations, i.e., that the description "two blue balls" is *complete*, there is a violation of Bell Locality here.

And my second example? The Blue Bells theory? Is it also violating Bell Locality? The same example can be given with Alice's blue ball and Bob's red ball...


cheers,
Patrick.
 
  • #77
vanesch said:
And my second example? The Blue Bells theory? Is it also violating Bell Locality? The same example can be given with Alice's blue ball and Bob's red ball...

Maybe I'm just being dumb and/or not looking carefully enough, but I didn't see any difference between the two theories. Isn't the second just the same as the first with some of the (already just meaningless, made-up) terms swapped around?

Any time you tell me there are persistent, law-like correlations between separated events and that there is *nothing* in the shared past of those events which made them be so correlated, I am going to say this violates Bell Locality. That kind of "magical" correlation between separated events is precisely what Bell Locality forbids.

I gather you are going to object to this, and say that my view is premised on a demand for explanation (which black box theories aren't intended to provide) or relies too heavily on realist commitments, or something to that effect. I guess I'm guilty as charged. By the way, you might enjoy the article "Do Correlations Need to be Explained" by Arthur Fine (in the Cushing/McMullin volume called "Philosophical Consequences of Quantum Theory"). He takes a position there that seems like the one you are evolving toward here -- namely, that if we are going to accept irreducible randomness at the individual-outcome level, we should be equally willing to accept irreducible correlations between distant events. I don't agree with this position of course, but it's certainly out there.

Oh yeah, one other point I wanted to make that fits in nicely here. I was skimming through some of the other threads here, especially the ones on the "loopholes" in the Bell's Inequality experiments. Dr. Chinese made an excellent point there against the "local realism" people who refuse to admit that the experiments actually support the claim that Bell's Inequality is violated in nature. Paraphrasing, the point was: if you made these same sorts of objections on any other issue in science (e.g., claiming that different systematic errors in a bunch of different experiments all conspire magically to make those experiments give exactly the same results, claiming that the samples might be biased merely on the basis that the sample represents less than 100% of the population and without *any* statistical evidence to suggest a bias, etc.) you'd be branded a loony. Science would seriously grind to a complete and total halt if scientists were this willing, across the board, to consider conspiracy theories. It is relevant that the stakes are pretty high here -- one is talking about having to reject a premise (locality) that has been awfully important to physics for a long time. So there is *some* justification for a bit of extra skepticism, scrutiny, and thinking carefully about "loopholes", etc. But at some point you have to draw a line and say: enough. *All* of the evidence points to the QM predictions being correct, and *no* evidence suggests they are wrong. (And the lack of evidence against that proposition is not evidence for it!)

Anyway, I think similar comments apply to the question of whether we should try to explain correlations between distant events. The position of Arthur Fine in the article I mentioned (which I think Patrick would be sympathetic to?) amounts to shrugging and saying "well, some correlations can't be explained." But imagine that view being taken seriously by, say, the drug industry or biologists or chemists or anybody else in science. "Hmmm, people who live in these two widely separated towns all simultaneously came down with a rare disease that hasn't been observed anywhere else on Earth for 100 years... <shrug> oh well, coincidences happen all the time. When's lunch?" Or: "Well yes, your honor, there is a strong correlation between patients having undergone Medical Procedure X and, ahem, dying the next day -- but some correlations are just inexplicable." etc... you get the point.
 
  • #78
My claim is that this "completeness" requirement means: there is an underlying deterministic theory that can generate the probabilities in a classical statistical mechanical way. You are fighting like a devil to show me that I do not need that word "deterministic" but I will try to show you that THAT is what you want, and as long as you don't have it, you call a theory "incomplete". This is not surprising, because it was indeed Einstein's programme. But, although you won't admit it, it comes down to regard any fundamentally statistical theory as "incomplete".

I hope you do not mean by "complete" the "ultimate theory describing the true nature of reality", because that theory will change every century or so, and we will never have a 'true description of reality'. Newtonian theory wasn't, Maxwell's theory wasn't, we now know that general relativity isn't, quantum field theory isn't, so I think it is clear by now that nothing we will ever get to put our hands on will be "the true description of reality".
EVERY theory we will ever have is an approximate formalism, often with a totally different paradigm than the previous one, giving sufficiently accurate results when compared with the experimental results available with the technology of the moment.
Maybe some day we will have to stop, because it all fits logically together and we cannot technologically perform any experiment anymore that could possibly challenge the theory. But that doesn't mean we have "arrived".
So it is very simple: if that is what you mean by completeness, you can just as well stop right now and say that every theory is incomplete.

ttn said:
That is not clear at all. "Completeness" is not a statement merely about what can be known. Completeness is a shorthand for something like "complete description of reality." Einstein talked about it as requiring a one-to-one correspondence between physical states and state-descriptions in some theory. EPR of course urged that every "element of reality" must have a counterpart in the theoretical description. etc.

Ok, so "element of reality" must mean: determines precisely every outcome, potentially with certainty. I'll try to show you.

It is admittedly difficult if not impossible to know whether a given state description represents a complete description.

No, once you have a deterministic theory, you will be happy because there's nothing more to be added. What can be more "complete" than a deterministic theory which tells you individually, for each event, what will happen, with certainty?

Personally I think Bohr was off his rocker for making this kind of claim in the first place -- what in the world could have counted as evidence for it? The mere fact that the Heisenberg principle seems to prevent us from obtaining *knowledge* of certain things? That of course proves nothing. The little switch in the door prevents me from knowing whether or not the light in the refrigerator really goes off or not when I shut the door -- but that doesn't mean I stop believing that, in fact, the light is either on or off. In that case, there are obviously more facts out there in the external world than I can know about directly, so my description "I think there's about a 99% chance that the light does go out when I shut the door" is an admittedly incomplete one.

Indeed, you want to talk about the switch, and the fact that it determines with certainty that the light goes off.

In the QM case, we can't take anything for granted. It is by no means "obvious" there that there are further facts of reality beyond what is contained in or described by the wave function. But that is why EPR-like arguments are so clever. They allow you to say something, not about the completeness alone, but about the relationship between completeness and locality. EPR showed that, if you hold fast to the locality principle, there must exist "elements of reality" for more quantities than are consistent with the uncertainty principle; hence QM, if local, is incomplete.

Again, that is the deterministic case: when it is "in principle" possible to determine each individual outcome with certainty.

I think Einstein's argument is even better: he argues that (a) you must be willing to inject the wave function collapse rule into the dynamics in order to get the right correlations and so (b) there is *not* a one-to-one correspondence between physical states and theoretical descriptions since when you collapse the wf for a distant system by making a measurement "here", the wf for that distant system changes in a situation where (by locality) its physical state can not have changed. That ruins any claim of one-to-one correspondence. I also like the Bell-Locality-based argument for this same conclusion: if you assume that the wf alone does provide a complete description of the system described, it is trivial to note that Bell Locality is violated.

Bell locality is violated for EVERY stochastic theory which gives you correlations and which does not include a deterministic model for each individual outcome in its "state description". See my Blue Balls and my Blue Bells examples. It is only when you give a potentially deterministic state description that you can avoid Bell locality being violated while still having correlations in certain cases.

Anyway, my point is just to reject in the strongest possible terms the idea that what "completeness" means is somehow purely epistemological, e.g., that it means we've learned all we can or have said all we can say. Completeness involves a comparison between knowledge and the facts, not just a comparison of knowledge to itself.

Yes, and the facts "determine" every individual outcome. Again, there is no room for a purely stochastic theory which *postulates* probabilities as fundamental concepts.

Of course, many people have tried to define completeness in a purely epistemological way, i.e., while dropping the assumption of realism. This (as with the attempt to define "locality" outside the context of realism) is literal nonsense. Tim Maudlin makes this point (about locality) very nicely in an article called "Space-time in the quantum world": "Physicists have been tremendously resistant to any claims of non-locality, mostly on the assumption (which is not a theorem) that non-locality is inconsistent with Relativity. The calculus seems to be that one ought to be willing to pay *any* price -- even the renunciation of pretensions to accurately describe the world -- to preserve the theory of Relativity. But the only possible view that would make sense of this obsessive attachment to Relativity is a thoroughly realistic one! These physicists seem to be so certain that Relativity is the last word in space-time structure that they are willing even to forego any coherent account of the entities that inhabit space-time." I believe parallel remarks apply as well to the concept of "completeness". Defenders of orthodox QM have been extremely resistant to any claims that QM might be incomplete... yet the only view that would make sense of this obsessive attachment to "completeness" is a thoroughly realistic one.

That's why I think that the only reasonable definition of locality is the one that avoids the relativity paradox in which you receive your own information before sending it, so that you could decide to send something else.
If *that* requirement is satisfied, the stochastic predictions of a theory are local.

Bell locality is a requirement that depends not only upon the stochastic predictions a theory makes, but also upon what is considered a state description, and it can avoid calling a correlation non-local only if that state description is potentially deterministic. But it will call ANY stochastic description 'non-local'. Bell locality has no meaning for theories which are inherently stochastic, meaning: out of which come simply rules to calculate probabilities.

There is more room for such stochastic theories than for deterministic theories with local mechanisms to make up probabilities which do not violate "information transfer" locality, and QM happens to land in that extra room.
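To make that distinction concrete, here is a little numerical sketch (my own toy check, using the standard singlet predictions P(A,B|a,b) = (1/4)(1 - A B cos(a-b)) as input; nothing here is tied to any particular experiment): the marginal statistics on one side do not depend on the remote setting, so no information can be sent, while the joint probabilities, conditioned only on the quantum state, refuse to factorize.

import numpy as np

# Standard singlet predictions, A, B in {+1, -1}:
#   P(A, B | a, b) = (1/4) * (1 - A*B*cos(a - b))
def p_joint(A, B, a, b):
    return 0.25 * (1 - A * B * np.cos(a - b))

def p_A(A, a, b):
    # Marginal on one side, for a GIVEN remote setting b.
    return sum(p_joint(A, B, a, b) for B in (+1, -1))

def p_B(B, a, b):
    # Marginal on the other side, for a GIVEN remote setting a.
    return sum(p_joint(A, B, a, b) for A in (+1, -1))

a = 0.3
for b in (0.0, 0.7, 2.1):
    print("b =", b, " P(A=+1) =", p_A(+1, a, b))   # always 0.5: no signalling

# Bell Locality check, with the quantum state as the only "state description":
a = b = 0.3
print(p_joint(+1, -1, a, b), "vs", p_A(+1, a, b) * p_B(-1, a, b))
# 0.5 vs 0.25: the joint probability does not factorize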

So you can redefine qualifiers such as "complete" or "realist" or whatever you like; what you really mean is "deterministic", or "potentially deterministic".

By "potentially deterministic" I mean partly deterministic and partly stochastic theories, of which the stochastic parts can trivially be converted in deterministic ones by adding (hidden) variables.

cheers,
Patrick.
 
  • #79
ttn said:
Maybe I'm just being dumb and/or not looking carefully enough, but I didn't see any difference between the two theories. Isn't the second just the same as the first with some of the (already just meaningless, made-up) terms swapped around?

Hehe :devil: :devil:

The second theory (Blue Bells) HAS a hidden variable explanation:

you have a hairy red cube going to one side and a smooth blue pyramid going the other way. So ADDING this deterministic hidden-variable model will turn my Bell-locality-violating theory into a Bell-respecting one.

The first theory (Blue Balls) doesn't have such a potential underlying model.

Please admire it for at least 3 seconds, it took me some puzzling to find it :smile:.

cheers,
Patrick.
 
  • #80
ttn said:
Any time you tell me there are persistent, law-like correlations between separated events and that there is *nothing* in the shared past of those events which made them be so correlated, I am going to say this violates Bell Locality. That kind of "magical" correlation between separated events is precisely what Bell Locality forbids.

And if the correlations are only born when the two events are already in the past, like in an MWI approach ? When the "remote measurement" didn't take place *until you got news of it because it is YOU who determined the outcome* ?

I gather you are going to object to this, and say that my view is premised on a demand for explanation (which black box theories aren't intended to provide) or relies too heavily on realist commitments, or something to that effect.

I go even further: what you call "realist" means deterministic, even if you don't want to admit it. But I'll find a way to make you talk :devil: :-p

Dr. Chinese made an excellent point there against the "local realism" people who refuse to admit that the experiments actually support the claim that Bell's Inequality is violated in nature. Paraphrasing, the point was: if you made these same sorts of objections on any other issue in science (e.g., claiming that different systematic errors in a bunch of different experiments all conspire magically to make those experiments give exactly the same results, claiming that the samples might be biased merely on the basis that the sample represents less than 100% of the population and without *any* statistical evidence to suggest a bias, etc.) you'd be branded a loony. Science would seriously grind to a complete and total halt if scientists were this willing, across the board, to consider conspiracy theories. It is relevant that the stakes are pretty high here -- one is talking about having to reject a premise (locality) that has been awfully important to physics for a long time. So there is *some* justification for a bit of extra skepticism, scrutiny, and thinking carefully about "loopholes", etc. But at some point you have to draw a line and say: enough. *All* of the evidence points to the QM predictions being correct, and *no* evidence suggests they are wrong. (And the lack of evidence against that proposition is not evidence for it!)

I don't think locality (in the relativity sense) is the issue; I think it is a certain form of realism (which you somehow call complete, and which I'm sure means "deterministic"). I think that at the moment, we cannot give up on the first (and happily QM DOESN'T violate locality in the relativity sense by generating an information paradox). But I easily give up on the second condition.

Anyway, I think similar comments apply to the question of whether we should try to explain correlations between distant events. The position of Arthur Fine in the article I mentioned (which I think Patrick would be sympathetic to?) amounts to shrugging and saying "well, some correlations can't be explained." But imagine that view being taken seriously by, say, the drug industry or biologists or chemists or anybody else in science. "Hmmm, people who live in these two widely separated towns all simultaneously came down with a rare disease that hasn't been observed anywhere else on Earth for 100 years... <shrug> oh well, coincidences happen all the time. When's lunch?" Or: "Well yes, your honor, there is a strong correlation between patients having undergone Medical Procedure X and, ahem, dying the next day -- but some correlations are just inexplicable." etc... you get the point.

If I were the judge, I'd try to send information through the patients, by giving the drug to the people on certain days and not on others. The receiver would be the grand jury, which should then try to decode my message by looking at how the people die. My message would be: "Cut this guy's head off - stop - repeat - cut this guy's head off" coded in ASCII 7 bit. Bit one: I give them the drug, and they die. Bit 0: they get a placebo and they live.
Hmm, if my phrase contains 80 characters, that means 560 bits to send, with at least 10 people per bit; ok, but half of them will have bit 0 and live, so I'll need to kill 2800 people for this message to be sent... :bugeye:
If they can read my message, I'd say that there is a causal link :smile:
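(Checking my own gallows arithmetic with a tiny sketch -- the only assumption is that roughly half of the message bits are 1:)

chars = 80                    # characters in my message
bits = 7 * chars              # 7-bit ASCII -> 560 bits
people = 10 * bits            # at least 10 patients per bit -> 5600 people
deaths = people // 2          # assuming about half the bits are 1: those patients get the drug and die
print(bits, people, deaths)   # 560 5600 2800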

cheers,
Patrick.
 
  • #81
ttn said:
That is not clear at all. "Completeness" is not a statement merely about what can be known. Completeness is a shorthand for something like "complete description of reality." Einstein talked about it as requiring a one-to-one correspondence between physical states and state-descriptions in some theory. EPR of course urged that every "element of reality" must have a counterpart in the theoretical description. etc.

It is admittedly difficult if not impossible to know whether a given state description represents a complete description. Personally I think Bohr was off his rocker for making this kind of claim in the first place -- what in the world could have counted as evidence for it? The mere fact that the Heisenberg principle seems to prevent us from obtaining *knowledge* of certain things? That of course proves nothing.

Probably you are right that Bohr should not have asserted QM was complete. I think that statement carries too much baggage with it.

EPR thought they had a pretty clever argument by throwing the singlet state into the equation along with the HUP. They argued that at least a "more complete" specification of the system was possible, even if you accepted QM's predictions. They tried, in other words, to use the logic of the HUP against the idea that QM was complete.

Bell said that EPR's argument - which also tried to define what an element of reality was - did not actually work as they had pictured it. The problem being that their assumption - elements of reality exist independent of the measurement - was flawed. As we now know, Bell's Inequality shows that these elements of reality cannot have predetermined values and still yield experimental results consistent with QM. This is true - in my opinion - whether the theory is local or non-local: unmeasured quantum properties do not correspond to elements of reality. This conclusion is diametrically opposed to the closing words of EPR. However, I do not think this is semantically equivalent to the statement that QM is complete.
 
  • #82
vanesch said:
Hehe :devil: :devil:

The second theory (Blue Bells) HAS a hidden variable explanation:

you have a hairy red cube going to one side and a smooth blue pyramid going the other way. So ADDING this deterministic hidden-variable model will turn my Bell-locality-violating theory into a Bell-respecting one.

The first theory (Blue Balls) doesn't have such a potential underlying model.

Please admire it for at least 3 seconds, it took me some puzzling to find it :smile:.

cheers,
Patrick.

You have a lot of balls (sorry couldn't resist).

The only detail I would comment on is this: you can construct a local hidden variable theory as you have above which appears to provide a certain correspondence to the Bell model, but that correspondence is superficial. You can't do it AND give the same predictions as QM. That is the essence of Bell! There is no \theta in your formula. Functions exist which respect the Bell Inequality as \theta varies; but they will not match the cos^2\theta predictions of QM.
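To see this gap numerically, here is a small sketch (a toy local hidden variable model of my own choosing and a standard set of CHSH angles -- I use the spin-singlet correlation -cos(a-b) rather than the photon cos^2\theta form, but the moral is the same): the local model can at best saturate |S| = 2, while the QM prediction reaches 2*sqrt(2).

import numpy as np

rng = np.random.default_rng(0)

def E_qm(a, b):
    # Quantum prediction for the spin-singlet correlation.
    return -np.cos(a - b)

def E_lhv(a, b, n=200_000):
    # A toy local hidden-variable model: each pair carries an angle lam,
    # and each side computes its outcome from its OWN setting and lam only.
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return np.mean(A * B)

def chsh(E):
    # CHSH combination with a standard choice of angles (radians).
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("QM :", chsh(E_qm))    # about -2.83, i.e. |S| = 2*sqrt(2) > 2
print("LHV:", chsh(E_lhv))   # about -2.0 : such a local model obeys |S| <= 2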
 
  • #83
ttn said:
1. Any time you tell me there are persistent, law-like correlations between separated events and that there is *nothing* in the shared past of those events which made them be so correlated, I am going to say this violates Bell Locality. That kind of "magical" correlation between separated events is precisely what Bell Locality forbids.

...

2. But imagine that view being taken seriously by, say, the drug industry or biologists or chemists or anybody else in science. "Hmmm, people who live in these two widely separated towns all simultaneously came down with a rare disease that hasn't been observed anywhere else on Earth for 100 years... <shrug> oh well, coincidences happen all the time. When's lunch?" Or: "Well yes, your honor, there is a strong correlation between patients having undergone Medical Procedure X and, ahem, dying the next day -- but some correlations are just inexplicable." etc... you get the point.

1. I guess you could use this as an operating definition of Bell Locality. But there is yet one more item to consider: who is saying that there is no connection between these events? I say there is a connection between the events. But I deny that there are more "elements of reality" than actually measured.

2. Good point. Would you bet your life that there is no causality to the correlation? If you wouldn't - as a strategy - then you believe the correlation is not spurious.

I don't believe the connection between the correlations is spurious, but I don't know what is the cause and what is the effect. Presumably, causes must precede effects but maybe that does not apply. If you see time as symmetric then maybe causes only precede effects in some frames.
 
  • #84
vanesch said:
My claim is that this "completeness" requirement means: there is an underlying deterministic theory that can generate the probabilities in a classical statistical mechanical way.
Yes, in any case this is what I would mean, but this does not necessarily lead to the requirement that every "element of reality" has to determine outcomes in a Bell test completely. Any given element of reality may need (as explained by Bell and by Clauser and Horne) the company of other elements of reality, mostly local to the detectors, before it yields a definite outcome. Without this extra input, our element of reality set at the source may determine only the probability of each possible outcome.

Ok, so "element of reality" must mean: determines precisely every outcome, potentially with certainty. I'll try to show you.
This is the actual statement that I'm challenging. It is probably not essential to your point but it's as well to be clear what is meant.

Yes, and the facts "determine" every individual outcome. Again, there is no room for a purely stochastic theory which *postulates* probabilities as fundamental concepts.
Ah yes, that's a more correct way of saying it. The "facts" can include more than one element of reality.

Bell locality has no meaning for theories which are inherently stochastic, meaning: out of which come simply rules to calculate probabilities.

There is more room for such stochastic theories than for deterministic theories with local mechanisms to make up probabilities which do not violate "information transfer" locality, and QM happens to land in that extra room.
But doesn't that imply that QM operates by magic?

By "potentially deterministic" I mean partly deterministic and partly stochastic theories, of which the stochastic parts can trivially be converted in deterministic ones by adding (hidden) variables.
This sounds reasonable.

The interesting question now is whether or not experiments have in fact ruled out such "potentially deterministic" theories. Isn't the fact that they [e.g. Grangier's team, and Nha and Carmichael -- see Hans de Vries post earlier] are still looking for "loophole-free" tests an indication that the evidence against such theories is, to date, not conclusive? Of course, by Bell's theorem, if this kind of theory really does underlie everything, it means that QM is not quite correct, but it's probably nearly correct, or perhaps sufficiently near to correctness that the various applications of entanglement are effectively valid.

Cat
 
  • #85
Cat said:
Isn't the fact that they [e.g. Grangier's team, and Nha and Carmichael -- see Hans de Vries post earlier] are still looking for "loophole-free" tests an indication that the evidence against such theories is, to date, not conclusive?

That is a logical flaw. You want it both ways. You refuse to accept it as conclusive evidence when folks stop looking; and you see it as supporting your position when they are looking! By your logic, it makes no sense to repeat an experiment, either! (Presumably that would mean that the experimenter does not accept the initial results.) There are a lot of reasons to do experiments, even ones in which the essential results are not in question.
 
  • #86
vanesch said:
I hope you do not mean by "complete" the "ultimate theory describing the true nature of reality", because that theory will change every century or so, and we will never have a 'true description of reality'. Newtonian theory wasn't, Maxwell's theory wasn't, we now know that general relativity isn't, quantum field theory isn't, so I think it is clear by now that nothing we will ever get to put our hands on will be "the true description of reality".
EVERY theory we will ever have is an approximate formalism, often with a totally different paradigm than the previous one, giving sufficiently accurate results when compared with the experimental results available with the technology of the moment.
Maybe some day we will have to stop, because it all fits logically together and we cannot technologically perform any experiment anymore that could possibly challenge the theory. But that doesn't mean we have "arrived".
So it is very simple: if that is what you mean by completeness, you can just as well stop right now and say that every theory is incomplete.

Good point, I totally agree. This is exactly why I think Bohr should never have been taken seriously when he claimed QM was complete.

Perhaps, then, what really makes you uncomfortable with all of this is not that the EPR-type argument against Bohr's claim is unsound, but that it is totally unnecessary. Why work so hard to refute something that is preposterous on its face? There is no grounds whatsoever for thinking QM is complete, so just forget about the whole issue and get on with life. Einstein et al were *obviously* right to reject the completeness doctrine, and they shouldn't have opened unnecessary cans of worms arguing against it. Is this more or less what you think? :smile:



No, once you have a deterministic theory, you will be happy because there's nothing more to be added. What can be more "complete" than a deterministic theory which tells you individually, for each event, what will happen, with certainty?

It's true; if you have a deterministic theory that explains everything, you'd at least have some evidence that maybe the theory is complete. On the other hand, any time you have a stochastic theory, it's always possible to wonder if the randomness is merely due to incomplete information, i.e., if an underlying deterministic theory could give rise to the stochastic theory already in hand.

But that doesn't mean I simply equate "complete" with "deterministic". Perhaps nature really is not deterministic. Who knows. (Actually, as someone who believes in free will, I'm really pretty open to this possibility.) My only point is: if you have a stochastic theory that predicts correlations which cannot be locally explained (with the usual stochastic sense of "explained"), you should admit that your stochastic theory is nonlocal. And, say, if it is possible to remove that nonlocality (i.e., construct a local theory that makes the same predictions) by filling in the description a bit (maybe leaving you with a deterministic underlying theory, or maybe a still-stochastic but more detailed underlying theory) you should be open to that possibility.





Indeed, you want to talk about the switch, and the fact that it determines with certainty that the light goes off.

It's not so much that I *want* to talk about this stuff. But if talking about this stuff allows me to get around a shocking and troublesome problem (which would be that thinking my stochastic statement about the fridge light was a complete description led to my fridge theory being nonlocal -- which makes no sense in this example, but oh well), then I should be open to the possibility.

That's my only point. It's totally simple. People who claim QM is complete should admit that their theory is nonlocal. (...and they should therefore quit dismissing, out of hand, theories like Bohm's because of their nonlocality.)


Bell locality is violated for EVERY stochastic theory which gives you correlations and which does not include a deterministic model for each individual outcome in its "state description". See my Blue Balls and my Blue Bells examples.

That's not true. You yourself revealed that there exists a local, stochastic theory that can explain all the observational results you catalogued for the Blue Bells example. (It's stochastic because it's random which of the two goobers goes which way -- and for all I know, maybe it's *really*, irreducibly random.)

That's why I think that the only reasonable definition of locality is the one that avoids the relativity paradox in which you receive your own information before sending it, so that you could decide to send something else.
If *that* requirement is satisfied, the stochastic predictions of a theory are local.

We've been here before. I agree this is an important and interesting definition of locality, definitely worth considering. The problem is that "information" is a very high level idea, and it's possible for theories to be local in this "no info transfer" sense while being fundamentally, in their guts, quite blatantly nonlocal. Bohm's theory is the obvious non-controversial example. Orthodox QM is another obvious example that is, for reasons I frankly don't understand, controversial. (I guess, the reason it's controversial is that people are happy to emblazon Bohm's theory with the scarlet letter "NL" based on its violating Bell Locality, then they like to switch to the "no info transfer" definition so that QM gets the label "Local". But as I've said several times, that's just stupid naked inconsistency and shouldn't be tolerated by serious thinkers.)
 
  • #87
DrChinese said:
You have a lot of balls (sorry couldn't resist).

I hope you realize that the names have been carefully chosen :smile:
The Blue Balls theory is very preposterous, and violates Bell's inequalities even more strongly than QM does (hence Balls :smile:) - at least if I didn't make an error.

The Bells theory is compatible with a local hidden variable model and hence will satisfy Bell's inequalities.

On purpose, I did NOT take a disguised version of a QM prediction, because that would be seen as too cheap. In fact, I tried to make the two correlation functions as much alike as I could, with similar values of the correlations, but in different cases.

Note that what ttn defined as Bell Locality is not the Bell inequalities (though they can be derived from it). He defines Bell Locality as the fact that if you take into account a "complete state description", then the correlation P(A,B) factorizes into P(A) x P(B).
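For reference, the route from that factorization to the inequalities is just the standard textbook CHSH derivation (nothing original here). Writing the correlation as an average over the complete state \lambda,

E(a,b) = \int d\lambda \, \rho(\lambda) \, \bar{A}(a,\lambda) \, \bar{B}(b,\lambda),

where \bar{A}(a,\lambda) = \sum_A A \, P(A|a,\lambda) is the local expectation on one side (the factorized form allows it to depend only on a and \lambda) and |\bar{A}| \le 1, |\bar{B}| \le 1, one gets

|E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \le \int d\lambda \, \rho(\lambda) \left[ |\bar{B}(b,\lambda) - \bar{B}(b',\lambda)| + |\bar{B}(b,\lambda) + \bar{B}(b',\lambda)| \right] \le 2,

because |x - y| + |x + y| = 2\max(|x|,|y|) \le 2 whenever |x|, |y| \le 1.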

I wanted to show how non-obvious it is to apply this to a stochastic theory, by showing two very similar stochastic theories.

The only detail I would comment on is this: you can construct a local hidden variable theory as you have above which appears to provide a certain correspondence to the Bell model, but that correspondence is superficial. You can't do it AND give the same predictions as QM. That is the essence of Bell! There is no \theta in your formula. Functions exist which respect the Bell Inequality as \theta varies; but they will not match the cos^2\theta predictions of QM.

The only "theta" I have is discrete: colors, shapes and surface type. 3 values is sufficient.

cheers,
Patrick.
 
  • #88
vanesch said:
Hehe :devil: :devil:

The second theory (Blue Bells) HAS a hidden variable explanation:

you have a hairy red cube going to one side and a smooth blue pyramid going the other way. So ADDING this deterministic hidden-variable model will turn my Bell-locality-violating theory into a Bell-respecting one.

The first theory (Blue Balls) doesn't have such a potential underlying model.

Please admire it for at least 3 seconds, it took me some puzzling to find it :smile:.

It's a nice example, no doubt. :smile: But I still think you are missing my point. In fact, your example helps me make my point even stronger, so thank you.

My claim was that the Blue Bells theory violated Bell Locality -- ***if*** you asserted that the theory is complete. It's just like EPR: the conclusion is not a blanket claim for in-completeness or non-locality, but a dilemma: if you want to believe the theory is complete, you must admit that it violates locality. Or: if you insist on avoiding nonlocality, you must admit that the theory is incomplete.

So your example is helpful in that it illustrates this dilemma very clearly. Regarded as a complete specification of the system, the Blue Bells model is nonlocal. The probabilities violate Bell Locality. Of course, you can get around this conclusion easily, by admitting that maybe, after all, the theory was not complete, and considering the very local-hv account you provided.

Really, this is exactly like the coin-in-two-hands or Einstein's Boxes example I mentioned a while back. Put a particle in a box, split the box in two so half the wf goes each way, separate the halves, and then look in one to see if the particle is there. If you *insist* on regarding the wf as a complete description of the state of the particle prior to looking in the boxes, you can then identify the wf with Bell's "L" and infer that Bell Locality is violated. QM, if complete, is nonlocal. But in this example there is, just like in yours, a rather obvious local way to understand the probabilities involved (specifically, that the joint probability for finding the particle in *both* boxes is not simply the product of the individual probabilities for the two boxes = 50% * 50% = 25%) as arising from a deeper level of description -- namely, one in which the particle just is in one of the two boxes the whole time, prior to measurement. Then opening the boxes merely reveals the pre-existing location of the particle. *Obviously* local. But -- and this is the whole point -- the price of *doing* this is regarding the original wf-only description as *incomplete*. QM, if local, must be incomplete. Or equivalently: QM, if complete, is nonlocal. All of that follows from this trivial example.
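If it helps, the whole dilemma fits in a few lines of toy code (a sketch with my own made-up encoding of the boxes example, nothing canonical): treat the 50/50 wf-only description as the complete state and the joint probability refuses to factorize; add the "which box is the particle really in" variable and every conditional probability becomes 0 or 1 and factorizes, while averaging over that variable gives back exactly the observed statistics.

from itertools import product

# Did each observer find the particle in her half-box? (1 = found, 0 = not found)
# Wavefunction-only description: each side finds it with probability 1/2,
# but it is never found in both halves and never in neither.
P_joint = {(1, 0): 0.5, (0, 1): 0.5, (1, 1): 0.0, (0, 0): 0.0}

P1 = sum(p for (x, _), p in P_joint.items() if x == 1)   # 0.5
P2 = sum(p for (_, y), p in P_joint.items() if y == 1)   # 0.5
print("joint:", P_joint[(1, 1)], " product:", P1 * P2)   # 0.0 vs 0.25: no factorization

# Deeper description: lam in {1, 2} says which half actually carries the particle.
def p_given_lam(x, y, lam):
    # Conditioned on lam, every outcome is certain: opening a box just reveals lam.
    p_x = 1.0 if x == (1 if lam == 1 else 0) else 0.0
    p_y = 1.0 if y == (1 if lam == 2 else 0) else 0.0
    return p_x * p_y   # factorized by construction -> Bell Local

# Averaging over lam (each case has weight 1/2) returns the observed statistics:
recovered = {(x, y): 0.5 * p_given_lam(x, y, 1) + 0.5 * p_given_lam(x, y, 2)
             for x, y in product((0, 1), repeat=2)}
print(recovered == P_joint)   # True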

Of course, less trivial examples (involving spin correlations along several distinct axes, or the equivalent of the case of the Blue Balls -- which, by the way, is not a Sherlock Holmes story I'd particularly like to read) yield different results. Sometimes it is *not* possible to elude the apparent nonlocality of the quantum predictions merely by giving up the idea that the wf provides a complete description. That is Bell's theorem. But that doesn't undo what we already showed with the simpler example, namely, that QM, if complete, is nonlocal.

And that's really all I'm interested in claiming. The nonlocality that is *apparent* in the QM predictions is actually *real* -- it cannot be escaped by dropping the completeness assumption or anything else. Nature is nonlocal (in the Bell sense, though, yes, possibly local in some other senses). You're going to be stuck with a Bell-Nonlocal theory whether you regard QM as complete or not. This is not a proof that QM *isn't* complete. Duh. But it is a proof that the people who dismiss theories like Bohmian mechanics out of hand (on the grounds of their violating Bell Locality) should shut up. :smile:
 
  • #89
ttn said:
That's not true. You yourself revealed that there exists a local, stochastic theory that can explain all the observational results you catalogued for the Blue Bells example. (It's stochastic because it's random which of the two goobers goes which way -- and for all I know, maybe it's *really*, irreducibly random.)

That's where we differ.
IF you consider this theory as "Bell Local", it is obviously deterministic, in that for each pair of bells emitted, you are in case A (cube left, pyramid right) or you are in case B (cube right, pyramid left). If you are in case A, all the probabilities are 1 or 0, and if you are in case B, idem. So once the case is determined, everything is deterministic. Now, if you think you have the right to put the "case" into the "complete description of nature", then I have also the right to say that this complete description of nature determines all outcomes with certainty, and that's what I call a deterministic theory.
Whether this CASE information is accessible in principle to us, observers, or not (in which case it is a "hidden variable") doesn't change anything: if you consider it part of a complete description, it "is there".
It is our lack of information about the CASE variable, so that we have to consider an ensemble of these variables, that gives us the ONLY randomness in the outcomes. Now, either (as in the case of statistical mechanics) this is just a problem in practice, or somehow it is "fundamentally hidden", so that whatever we do, we'll never find out. In that last case you could maybe try to claim that your theory is fundamentally stochastic, but then I can claim that your variable is so well hidden that it shouldn't be part of a state description in the first place! But if you do that, your Bell locality condition falls on its face again...
As I said elsewhere, you could pro forma introduce some finite probabilities in such hidden variable theories to make them look like a stochastic theory, but by adding a few more variables, you easily turn them into fully deterministic theories out of which (when including them in the "complete state description") come only 1 and 0 as probabilities.
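The "pro forma" trick is completely generic, by the way -- here is a minimal sketch of it (my own made-up function names, nothing specific to Blue Bells): take any rule that outputs an outcome with probability p, enlarge the state with one extra uniform hidden variable u, and the rule becomes deterministic (all conditional probabilities 0 or 1), while discarding u returns the original statistics.

import numpy as np

rng = np.random.default_rng(1)

def stochastic_outcome(p):
    # The "irreducibly stochastic" rule: outcome 1 with probability p.
    return int(rng.random() < p)

def deterministic_outcome(p, u):
    # The same rule with the randomness promoted to an extra hidden variable u.
    # Given (p, u), the outcome is fixed: every conditional probability is 0 or 1.
    return int(u < p)

p = 0.37
freq_stoch = np.mean([stochastic_outcome(p) for _ in range(200_000)])
freq_det = np.mean([deterministic_outcome(p, u) for u in rng.random(200_000)])
print(round(freq_stoch, 3), round(freq_det, 3))   # both about 0.37: same statistics once u is ignored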

cheers,
Patrick.
 
  • #90
ttn said:
And that's really all I'm interested in claiming. The nonlocality that is *apparent* in the QM predictions is actually *real* -- it cannot be escaped by dropping the completeness assumption or anything else. Nature is nonlocal (in the Bell sense, though, yes, possibly local in some other senses). You're going to be stuck with a Bell-Nonlocal theory whether you regard QM as complete or not. This is not a proof that QM *isn't* complete. Duh. But it is a proof that the people who dismiss theories like Bohmian mechanics out of hand (on the grounds of their violating Bell Locality) should shut up. :smile:

I had the impression (but I could be wrong) that if you take the hidden variables in Bohm's theory to be real (and you have to, if you consider them part of the reality description), then LOCAL probability distributions of these hidden variables can have expectation values which change according to what happens elsewhere, so that these distributions are not local in the sense of relativity (in that we could send information that way, if only we had local access to these hidden variables).
It is in *that* sense that I thought that Bohm was non-local.

I honestly don't care about Bell locality itself which is, in my opinion, just a statement about probabilities generated by deterministic, local theories. So I agree with you that I wouldn't mind Bohm only to violate Bell Locality. The same rules have to count for everybody.

cheers,
Patrick.
 
