What Is an Element of Reality?

In summary, Laloë discusses the meaning of "element of reality" and how it applies to quantum mechanics. He considers simple experiments, from which no conclusion can yet be drawn, and then the correlations, which point to a common cause behind the matching colors. He concludes that the only possible explanation is a common property in both peas that determines the color.
  • #36
ttn said:
"a" was meant to include any relevant facts about the apparatus used to measure "A". And yes, "b" is to be thought of as outside the past lightcone of A. The idea is just that, as I said in another post, once you've conditionalized a probability on every event on which it might depend in a local way, adding more information isn't going to change the probabilities. Such information would be either irrelevant or redundant.
Yes, that's fine, and I like your coin analogy, but I don't see how you can say:
this cute little example with the "quantum coin" actually plays a significant role in establishing that any empirically viable theory (which is to say: Nature!) violates Bell Locality.
No, it takes something much worse than that to violate a "real" Bell inequality (i.e. one with no loopholes). Bell's inequality is not just a statistical trick dependent on a change in conditional probabilities when our information changes. I know some physicists think it is, but if you look at, say, Clauser and Horne's derivation in their 1974 paper (Physical Review D, 10, 526-35 (1974)) you can see that all it depends on are ratios of counts.

Cat
 
  • #37
JesseM said:
You still agree that Everett-style interpretations may be an exception to this, even if you personally don't find them plausible, right? I think an advocate of such an interpretation would say your argument is faulty because you assume that after the experimenter makes a measurement there is a single fact about which side the photon was found on, when really there might be different facts observed by different copies of the experimenter.


Sure.

At the risk of annoying Patrick, I'll just say that if the best objection to my claim involves "ah, but maybe different copies of the experimenters in parallel universes saw different results" ... well, I'll just shrug and take it as a compliment.
 
  • #38
Cat said:
No, it takes something much worse than that to violate a "real" Bell inequality (i.e. one with no loopholes).

I think you missed my point. No Bell inequality is violated by the coin example. You have to do difficult experiments to violate the Bell inequality. But that Bell inequalities are experimentally violated is only the second half of Bell's argument for non-locality. The first half of the argument is essentially the EPR argument (that QM, if complete, is nonlocal) and *that's* what the coins/boxes example establishes. Yes, it's meager, but it's a small part of an important argument that hasn't been widely grasped -- despite Bell's heroic efforts to make people understand.
 
  • #39
ttn said:
At the risk of annoying Patrick, I'll just say that if the best objection to my claim involves "ah, but maybe different copies of the experimenters in parallel universes saw different results" ... well, I'll just shrug and take it as a compliment.
I guess it depends on one's "aesthetic" intuitions about what the laws of nature should look like. To me, the idea that the laws of nature violate Lorentz-invariance, picking out a single preferred reference frame, yet somehow the laws of nature also conspire to make it impossible to ever detect any evidence of a preferred frame, seems like a much uglier solution to the problems raised by the EPR experiment than the idea of everything constantly splitting into multiple copies, or even the idea of backwards-in-time causation as in Cramer's "transactional interpretation". As an analogy, even though it's possible to come up with an ether theory that reproduces all the predictions of SR, there are strong aesthetic and conceptual arguments against such an interpretation of SR, summarized in this post by Tom Roberts:

http://groups-beta.google.com/group/sci.physics.relativity/msg/a6f110865893d962

Most of his arguments would apply equally well to nonlocal hidden-variables theories which involve a preferred reference frame for FTL effects, like Bohm's (and any new nonlocal hidden-variables theories must also have this feature if they want to forbid backwards-in-time causation).
 
  • #40
ttn said:
The Bell Locality condition is really simple. It merely says

P(A|a,b,L) = P(A|a,L)

where "A" is some particular event (say the result of a measurement), "a" is any relevant parameters pertaining to the event (like the orientation of your SG magnets if it's a spin measurement), "L" is a complete specification of the state of the measured object across some spacelike hypersurface in the past of the measurement event, and "b" is any other junk that is spacelike separated from the measurement event. Basically the idea is: once you conditionalize on everything that could possibly affect the outcome in a local manner, specifying in addition information pertaining to space-like separated events will be *redundant* and hence won't change the probabilities.

Ah, but HERE we agree ! And QM DOES satisfy this condition: the LOCAL PROBABILITIES at A are not influenced by the settings at b.
This is indeed, what I consider "sufficiently local" in a stochastic theory.

But you wrote something else for Bell locality:

You wrote that
P(A,B | a,b,L) = P(A|a,L) P(B|b,L)

That's a much more severe requirement (which is the true Bell locality).
My claim is that we are now expressing probabilities of A AND B, which pertain to a and b, so there's no a priori need for them to factorize: to have any hope of observing A and B together, a and b must both be in the observer's past light cone. (That's what you gain from an MWI viewpoint.)
Of course you want to factorize this if you somehow "want the randomness to be generated locally". But that only has meaning in a deterministic theory where "the particles carry something with them that determines the randomness". It follows naturally within the framework of an underlying mechanics that generates the probabilities for you. But I don't see why a stochastic theory has the "right" to tell you about "individual outcome probabilities" but not about higher-order correlations. If you want to give a name to that property, namely Bell locality, then so be it, but again, I repeat that this name doesn't mean anything within the framework of a stochastic theory. Your FIRST condition, however, DOES :-)

cheers,
Patrick.

EDIT: the latter requirement of course leads to the first, but a less severe condition also suffices:

∫ dB P(A,B|a,b,L) shouldn't depend on b.
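Patrick's point can be made concrete with a quick numerical check (a minimal sketch assuming the textbook singlet-state probabilities for spin measurements along angles a and b; the function and variable names are mine, not anything from the thread): the QM joint probabilities satisfy the integral (no-signalling) condition while failing the factorization.

```python
import numpy as np

def joint(A, B, a, b):
    """Singlet-state joint probability P(A,B|a,b) for spin measurements
    along directions at angles a and b, with outcomes A, B = +/-1."""
    if A == B:
        return 0.5 * np.sin((a - b) / 2) ** 2
    return 0.5 * np.cos((a - b) / 2) ** 2

a, b1, b2 = 0.0, np.pi / 3, 2.5  # one local setting, two distant settings

# Integral condition: the marginal at A is independent of the distant b
marg_b1 = sum(joint(+1, B, a, b1) for B in (+1, -1))
marg_b2 = sum(joint(+1, B, a, b2) for B in (+1, -1))
print(marg_b1, marg_b2)  # 0.5 and 0.5

# Factorization (Bell locality) fails: P(A,B|a,b) != P(A|a) * P(B|b)
print(joint(+1, +1, a, b1), marg_b1 * 0.5)  # 0.125 vs 0.25
```

The marginals come out 1/2 no matter what b is chosen, yet the joint probability is not the product of the marginals.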
 
  • #41
ttn said:
I'd go further. I'm a scientific realist. I believe there is an external, physical, objective world that exists independent of human knowledge of it. And I think the purpose of physics is to understand what the world is like. What you are here calling "telling stories" is really the process of building up an evidence-based model of reality -- just like Copernicus was "telling a story" when he said the Earth went around the sun, Maxwell and Boltzmann were "telling stories" when they predicted the distribution of molecular speeds in a gas, and just like, say, contemporary astrophysicists "tell stories" about how shockwaves propagating through infalling matter can result in supernovas.

I would like to point out that MWI-like settings ALSO give rise to an "external physical objective world" in a certain way: it is given by "the wavefunction of the universe" and its corresponding evolution (in a Schroedinger picture). Only, that "universe" is quite remote from what WE observe; nevertheless, it explains what WE observe. So, in that way, it isn't solipsist.

About stories: I think it is increasingly clear that the mental picture we have of a physical phenomenon is dependent on the formalism in which we work. For instance, if I work in Newtonian gravity, I think of "forces" pulling on some balls in a kind of Euclidean space. When you write computer code for such a calculation, that's what you keep in mind. However, when you switch to general relativity, suddenly the picture changes. Now we are thinking of geodesics on a frozen, wobbly 4-d spacetime. It is very hard to think that way, and even to take that very seriously, because "nothing moves" in that clay-shaped 4-d thing. There's a spot on that clay model where you are already dead, for instance. So here already, there is a BIG difference between what "nature is" (a 4-d motionless piece of clay) and what you "observe" (time flow). As I said before: the big shock of relativity was the shattering of our concept of time.

When you do optics experiments, often you imagine "beams" with a kind of little "phase counter" running over the lines, in order to find out the correct interference patterns. This is on one hand a semigeometrical approximation to EM, but it is also an application of the Feynman path integral. However, when things get harder, like in cavities, you switch to a complete classical field viewpoint, with E-arrows attached to each point in space.
When you do quantum optics, often you switch back and forth between the two mental pictures (the "wave - particle duality" :-). And sometimes you even have to put it all down, and go to an abstract Fock space representation.

You don't take all these stories simultaneously to be "a true description of reality", no? You will of course reply: no, there is ONE story (the most sophisticated one) that must be true, and all the rest are approximations. But hey, when we didn't know about that most sophisticated view, we *did* think that our approximations were the "true vision". There's nothing to make us think that our grandchildren will not have an even more sophisticated view which invalidates our current one, or at least reduces it to an approximation to reason in. So there's no point in trying to have "a true description of reality" if it fundamentally changes every century or so.

I don't accept the idea that, in these sorts of cases, the only point of these stories is to help people develop intuition for formalism, etc. If anything, it's just the reverse: the point of the formalism is to help us figure out which story is the correct one, i.e., what the world is like. Isn't that really what science is all about?

It was, a long time ago. But after a few paradigm shifts, I think it is an illusion to think about it that way. I repeat: you have no idea how deluded we all are :smile: I think that that is the ONLY true picture of "reality" that will remain with us for ever. We only know one thing for sure: reality is way way different from what we ever will think it is.
Our only hope (which seems to be satisfied up to now) is that we will be able to map approximate mathematical models onto it, and tell stories that go with them. And even that is not self-evident, but seems to be right. Since you like Einstein: "the most incomprehensible thing about the universe is that it is comprehensible", or something of the kind.


cheers,
Patrick.
 
  • #42
JesseM said:
I guess it depends on one's "aesthetic" intuitions about what the laws of nature should look like. To me, the idea that the laws of nature violate Lorentz-invariance, picking out a single preferred reference frame, yet somehow the laws of nature also conspire to make it impossible to ever detect any evidence of a preferred frame, seems like a much uglier solution to the problems raised by the EPR experiment than the idea of everything constantly splitting into multiple copies, or even the idea of backwards-in-time causation as in Cramer's "transactional interpretation".

Yes, I think we perfectly agree here. I prefer consistency over intuition. As you point out: it is not Lorentz invariance by itself that is "holy"; what I find very ugly is that it should hold in practice while being violated by the wheels and gears of the system.

cheers,
Patrick.
 
  • #43
AFAIK stochastic interpretations get around Bell's theorem as well (or rather reproduce it), so in a sense you get a philosophically pleasing determinism (though it can never be truly measured) as well as locality.

Incidentally, there are a lot of definitions of locality out there, depending on the nature of the information theory you are using. What we can't have under any circumstances is information propagating outside its own lightcone. So while locality and causality are logically linked, there are some subtleties between the two in the literature.
 
  • #44
Since no one has brought this up, I will.

I strongly suggest everyone interested read the latest paper from the Zeilinger group appearing in PRL this week.[1] In it, they created a 2-photon pair which at first had a very weak degree of entanglement. They showed that this pair could not violate the CHSH inequality "... or any other inequality ...". They referred to this pair as the "local" states. They then purified the pair into a strongly entangled one and showed how it can now violate the CHSH-Bell inequality, "... proving that it cannot be described by a local realistic model ..."

I think this is crucial because it compares what you'd get in two different cases, and the relevant inequalities are only violated upon the "turning on" of the entanglement. It shows that just because you can create a pair of something (say, a pair of classical particles connected to each other via conservation of angular momentum), that doesn't mean the pair will violate these inequalities if it isn't entangled quantum mechanically.

Zz.

[1] P. Walther et al., PRL v.94, p.040504 (2005).
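For reference, the CHSH violation the paper turns on can be sketched numerically (a minimal check assuming the textbook singlet correlation E(a,b) = -cos(a-b); the settings and names here are my own, not the paper's):

```python
import numpy as np

def E(a, b):
    """Singlet-state correlation <AB> for analyzer angles a and b."""
    return -np.cos(a - b)

# Standard CHSH analyzer settings
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local-realistic bound of 2
```

An unentangled (product) state, by contrast, has E(a,b) = E1(a)·E2(b), and no choice of the four settings then pushes |S| above 2.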
 
  • #45
Haelfix said:
AFAIK stochastic interpretations get around bells theorem as well (or rather reproduce it), so in a sense you get a philosophically pleasing determinism (though it can never be truly measured) as well as locality.
What do you mean by "stochastic interpretations"? And why are you talking about determinism when the word stochastic always implies randomness?
 
  • #46
ZapperZ said:
I strongly suggest everyone interested to read the latest paper from the Zeilinger group appearing in PRL this week ...
P. Walther et al., PRL v.94, p.040504 (2005).
I had a look at this and was shocked to find that they said they'd violated the CHSH inequality but did not even mention the main "loophole" that bugs this test -- the "fair sampling" one. Of course, if their detectors were perfect then it would have been irrelevant, but since this was an ordinary optical experiment that cannot have been so.

How can they justify quoting the results of this test without so much as a mention of the efficiencies involved? They say their results "cannot be described by a local realist model". If they have not closed the detection loophole they have not proved this!

Cat
 
  • #47
JesseM said:
What do you mean by "stochastic interpretations"? And why are you talking about determinism when the word stochastic always implies randomness?
See Clauser, J F and Horne, M A, “Experimental consequences of objective local theories”, Physical Review D, 10, 526-35 (1974).

They explain very carefully how in real Bell test experiments it's best not to try and define hidden variables that completely determine the outcomes even though they do exist. Some components of the "complete" hidden variables are tied up with the detailed behaviour of the detectors and can be treated as random. Clauser and Horne's version of the Bell inequality follows one of Bell's later ideas in defining HV's that merely determine the probabilities of detection, not the actual outcomes.

As I understand it, this kind of HV is constructed from those components of the full HV that are associated with the source and which play a logical role in relation to detection. It is what is meant by the term "stochastic hidden variable". In an attempt to avoid confusion, C and H call a stochastic HV theory an "objective local theory".

Cat
 
  • #48
Cat said:
They explain very carefully how in real Bell test experiments it's best not to try and define hidden variables that completely determine the outcomes even though they do exist. Some components of the "complete" hidden variables are tied up with the detailed behaviour of the detectors and can be treated as random.

There is no difference between a fully deterministic HV theory and a "HV theory whose parameters determine the local probability distributions", because this simply amounts to adding a few more hidden variables to the "full state description" (call it an extra list of random numbers) which turn the latter version into the former. For instance, these variables can determine the "microstate of the detector", which is unknown. So a HV theory is always deterministic in its approach and gets its stochastic character only from our incomplete knowledge of the HV. And then you always end up with classical statistical mechanics, from which, indeed, all Bell inequalities and so on follow upon the assumption of locality.

cheers,
Patrick.
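Patrick's "extra list of random numbers" move is easy to demonstrate (a sketch using a toy model of my own invention, not anything from the literature): enlarging the hidden variable with one auxiliary uniform number turns a stochastic HV model into a deterministic one with identical statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_plus(lam, a):
    """Stochastic HV model: lam fixes only the *probability* of A = +1."""
    return np.cos((lam - a) / 2) ** 2

N = 100_000
lam = rng.uniform(0, 2 * np.pi, N)  # the original hidden variable
a = 0.7                             # local analyzer setting

# Stochastic version: the outcome is drawn at detection time
A_stoch = rng.random(N) < p_plus(lam, a)

# Deterministic version: enlarge the HV to (lam, u); the outcome is now
# a fixed function of the full hidden state, with u playing the role of
# the unknown "microstate of the detector"
u = rng.random(N)
A_det = u < p_plus(lam, a)

print(A_stoch.mean(), A_det.mean())  # statistically identical (~0.5 here)
```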
 
  • #49
Just for information, the kind of view I have been defending here, and which was, I thought, a kind of personal mixture of existing views, is very clearly expressed by the article by Anthony Sudbery in:
quant-ph/0011084

What was weird in reading this, for me, was that it almost sounded as if I were talking to myself... but then I'm tempted by solipsism :smile:

cheers,
patrick.
 
  • #50
vanesch said:
There is no difference between a fully deterministic HV theory and a "HV theory whose parameters determine the local probability distributions", because this simply amounts to adding a few more hidden variables to the "full state description" (call it an extra list of random numbers) which turn the latter version into the former. For instance, these variables can determine the "microstate of the detector", which is unknown. So a HV theory is always deterministic in its approach and gets its stochastic character only from our incomplete knowledge of the HV. And then you always end up with classical statistical mechanics, from which, indeed, all Bell inequalities and so on follow upon the assumption of locality.

cheers,
Patrick.
Though true, if you want to derive a general Bell inequality, valid for imperfect detectors, it is necessary, I think, to do as Clauser and Horne (and Bell, in 1971) did and treat the important (type I) components of the HV in a logically different manner from unimportant (type II) ones. The "type I" ones are those such as polarisation direction and signal amplitude that are set at the source and are relevant when the particles reach the analysers. These really do play a logically different role in the experiments from the "type II" components concerned with, for instance, the microstate of the detector. The type I components are responsible for any correlation, while the type II ones are assumed to be independent on the two sides -- just random "noise".

Cat
 
  • #51
Cat said:
Though true, if you want to derive a general Bell inequality, valid for imperfect detectors, it is necessary, I think, to do as Clauser and Horne (and Bell, in 1971) did and treat the important (type I) components of the HV in a logically different manner from unimportant (type II) ones. The "type I" ones are those such as polarisation direction and signal amplitude that are set at the source and are relevant when the particles reach the analysers. These really do play a logically different role in the experiments from the "type II" components concerned with, for instance, the microstate of the detector. The type I components are responsible for any correlation, while the type II ones are assumed to be independent on the two sides -- just random "noise".
Cat

This is true, but amounts to postulating (again) a deterministic theory. My claim is that the relationship between what is called "Bell locality" (a factorisation condition that joint probabilities have to satisfy, and from which one can deduce the Bell inequalities) and any kind of "physical locality of interaction" only makes sense in the framework of an essentially deterministic HV theory. It is _that_ deterministic mechanism (hidden or not) which, if required to give rise to the probabilities and to be based on local interactions, gives rise to Bell locality.
But "Bell locality" doesn't make any sense for fundamentally stochastic theories, because there is no supposed hidden mechanism of interaction which is to be local or not. A fundamentally stochastic theory just tells you what are the probabilities for "single events" and for "joint events" (correlations) WITHOUT being generated by an underlying deterministic mechanism.
The only locality condition we can then require is that probabilities of observations can only depend on what is in the past lightcone of those observations, and this then gives:

P(A|a,b,L) can only be function of a because only the setting a is in the past lightcone of event A.
P(B|a,b,L) can only be function of b, because only the setting b is in the past lightcone of event B.

But:
P(A,B|a,b,L) can be a function of a and b, because this correlation can only be established when we get news from A AND from B, and at that moment, a and b are both in the observer's past lightcone. Or otherwise formulated: a and b are in the past lightcones of the events A and B.

The first two conditions impose an INTEGRAL condition on the third expression, but do not require that P(A,B) factorizes. That factorization only comes about when P(A,B) is _constructed_ from an underlying deterministic model.

The objection seemed to be: hey, but I can think of hidden variable theories which are _stochastic_. And I tried to point out that that's tricking the audience, because it can trivially be transformed into a deterministic hidden variable theory. BTW, I don't understand what the purpose could be of constructing a truly stochastic hidden variable theory to explain a stochastic "no hidden variable" theory (such as QM).

cheers,
Patrick.
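The two conditions Patrick distinguishes can be written side by side in the thread's notation (summing over the discrete outcomes B):

```latex
% No-signalling (the "integral condition"): the marginal at A
% must not depend on the distant setting b.
\sum_B P(A,B \mid a,b,L) = P(A \mid a,L)

% Bell locality (factorization): strictly stronger, since summing
% it over B immediately reproduces the condition above.
P(A,B \mid a,b,L) = P(A \mid a,L)\, P(B \mid b,L)
```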
 
  • #52
vanesch said:
This is true, but amounts to postulating (again) a deterministic theory. My claim is that the relationship between what is called "Bell locality" (a factorisation condition joint probabilities have to satisfy and from which one can deduce the Bell inequalities) and any kind of "physical locality of interaction" only makes sense in the framework of an essentially deterministic HV theory. It is _that_ deterministic mechanism (hidden or not) which, if required to give rise to probabilities and to be based on local interactions, that gives rise to Bell locality.
But "Bell locality" doesn't make any sense for fundamentally stochastic theories, because there is no supposed hidden mechanism of interaction which is to be local or not. A fundamentally stochastic theory just tells you what are the probabilities for "single events" and for "joint events" (correlations) WITHOUT being generated by an underlying deterministic mechanism.

I'm sorry, but this really is just playing semantic word games to make the answer appear to come out the way you want. First you define "nonlocality" in terms of "underlying deterministic mechanisms", then you shrug and say: since QM has no such mechanisms, it isn't nonlocal.

The beauty of Bell's locality condition is that it doesn't require any of this loose talk about "underlying mechanisms" and "communication of information" and all these other things that lead to endless debates. And despite what you say above, Bell Locality *does* apply perfectly well to stochastic theories. The condition is, after all, stated exclusively in terms of probabilities, so the applicability is really rather obvious.



The only locality condition we can then require is that probabilities of observations can only depend on what is in the past lightcone of those observations, and this then gives:

P(A|a,b,L) can only be function of a because only the setting a is in the past lightcone of event A.
P(B|a,b,L) can only be function of b, because only the setting b is in the past lightcone of event B.

But:
P(A,B|a,b,L) can be a function of a and b, because this correlation can only be established when we get news from A AND from B, and at that moment, a and b are both in the observer's past lightcone. Or otherwise formulated: a and b are in the past lightcones of the events A and B.

Here you are simply forgetting an important rule of probability calculus: the product rule for conditional probabilities (often invoked alongside "Bayes' theorem"). It says:

P(A,B) = P(A|B) * P(B)

that is, you can *always* write a joint probability as a product so long as you conditionalize one of the probabilities on the other event.

If we are interested in something of the form P(A,B|a,b,L), we may write this as

P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L)

But then Bell Locality enters and says:

P(A|B,a,b,L) = P(A|a,L)

and

P(B|a,b,L) = P(B|b,L)

on the grounds of locality: neither event (A or B) may depend stochastically on occurrences outside of their past light cones. Specifically, the probability distribution of events A cannot be affected by conditionalizing on space-like separated events B and b, since we have already conditionalized on a complete description of the world in the past light cone of A, namely L. And likewise for B. There is no determinism built in here, no requirement that the probabilities P(A|a,L), etc., be zero or unity.

Bottom line: Bell Locality *does* completely justify the factorization condition that is (a) required to demonstrate the Bell Theorem and (b) violated by orthodox QM when we identify L with the QM wave function (as surely Bohr invites us to do).



The first two conditions impose an INTEGRAL condition on the third expression, but do not require that P(A,B) factorizes. That factorization only comes about when P(A,B) is _constructed_ from an underlying deterministic model.

No, this is just wrong. I got from the joint probability to the factored, Bell Local expression by using the product rule (what I loosely called "Bayes Theorem") and Bell Locality, and that's it. No mention of determinism.
 
  • #53
ttn said:
P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L)

But then Bell Locality enters and says:

P(A|B,a,b,L) = P(A|a,L)

No ! Because B enters in the condition on the left hand side, this may depend upon b. There is no way to talk about "upon condition B" without having information about B. So the conditional probability on the left hand side talks about A and B, and so can depend on a and b. Now, you can *require* that the conditional probability P(A|B) = P(A), in which case you call A and B statistically independent events. But that's a property that you can call "zork" or "Bell beauty" or "Bell locality" or "Bell desire". It isn't required for a stochastic theory that only claims that probabilities of events only depend on conditions in their past lightcones ; THIS is what is required by locality as specified by relativity. From the moment you mention A AND B in a probability (whether joint or conditional), they may depend on everything about A and everything about B.

So, again: QM probabilities do not satisfy "zork"
QM probabilities do satisfy locality as specified by relativity.

However, what I'm trying to make clear as a point, is that IF YOU WANT THOSE PROBABILITIES TO BE GENERATED FROM A DETERMINISTIC THEORY which has hidden variables (that will give you the "stochastic appearance" because of their hidden character) and YOU REQUIRE THAT ALL INTERACTIONS ARE LOCAL including those concerning the change, transfer etc... of the hidden variables, THEN YOU OBTAIN A CONDITION WHICH IS ZORK (also called Bell locality).

And from the zork condition follows the Bell inequality.

You cannot PROVE to me the necessity of Bell Locality (which I call zork) without going to a deterministic model (or a pseudo-deterministic model, which can be transformed into a deterministic one by adding variables).
Try to prove to me somehow (not merely DEFINE) that factorization is necessary for locality, without using an underlying deterministic model!

However, I can PROVE you the requirement of locality specified by relativity on the basis of information theory. Now, since the concept of locality plays an eminent role only because of relativity, my point is that that is the only sensible requirement for locality given a stochastic theory. We only switch to a more severe one (zork) because we want "extra stuff" such as an underlying deterministic mechanics.

cheers,
Patrick.
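The step "from the zork condition follows the Bell inequality" can be illustrated with a toy local deterministic model (entirely my own construction: a shared hidden angle lambda and sign-function responses), whose CHSH combination never exceeds the classical bound of 2:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0, 2 * np.pi, 200_000)  # shared hidden variable

def outcome(setting, lam):
    """Deterministic local response: +/-1 fixed by lam and the local setting."""
    return np.where(np.cos(lam - setting) >= 0, 1, -1)

def E(a, b):
    # Anticorrelated pair: the B side uses the opposite sign convention
    return np.mean(outcome(a, lam) * -outcome(b, lam))

a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2: the factorized (zork) model cannot beat the bound
```

With the same analyzer settings that give 2√2 quantum-mechanically, this factorized model saturates but never crosses |S| = 2.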
 
  • #54
ttn said:
Specifically, the probability distribution of events A cannot be affected by conditionalizing on space-like separated events B and b, since we have already conditionalized on a complete description of the world in the past light cone of A, namely L.

It is exactly in this phrase that the deterministic character of an underlying mechanism is captured! (the "complete description" part)

Why? Because you seem to claim that whatever happens at B, and whatever choice I make for b, cannot be "signalled" to A (by the underlying mechanism). But careful: the choice of b will of course affect the result B. So you shouldn't be surprised that P(A|B) can a priori depend on b, as long as it does so in such a way that P(A) doesn't depend on b (that's the integral condition).

So my claim is: P(A|B) does not need to equal P(A). I wish you could prove its necessity to me. (It is, as you point out, equivalent to factorizing P(A,B) = P(A) P(B).)

But of course if you want to invent a machinery that generates these probabilities, you will have a hard time sending a hidden variable messenger from B to A, and THEN of course, you can claim that any machinery that will determine things at B, as a function of b, can never send a message to A in order to do anything there.

cheers,
Patrick.
 
  • #55
vanesch said:
No ! Because B enters in the condition on the left hand side, this may depend upon b. There is no way to talk about "upon condition B" without having information about B. So the conditional probability on the left hand side talks about A and B, and so can depend on a and b. Now, you can *require* that the conditional probability P(A|B) = P(A), in which case you call A and B statistically independent events. But that's a property that you can call "zork" or "Bell beauty" or "Bell locality" or "Bell desire". It isn't required for a stochastic theory that only claims that probabilities of events only depend on conditions in their past lightcones ; THIS is what is required by locality as specified by relativity. From the moment you mention A AND B in a probability (whether joint or conditional), they may depend on everything about A and everything about B.

So... let me see if I get your position. You are willing to allow that

P(B|a,b,L) = P(B|b,L)

as a perfectly reasonable requirement of locality. But you are unwilling to allow that

P(B|A,b,L) = P(B|b,L)

is a reasonable requirement.

Do I have that straight? You think: Locality forbids the outcome B from depending on the setting (a) of the distant apparatus, but does not forbid B from depending on the *outcome* of that distant measurement (A). Is that it?


Try to prove to me somehow (not merely DEFINE) that factorization is necessary for locality, without using an underlying deterministic model!

I'm not sure what kind of thing you would take as a proof. I think Bell Locality is an extremely natural way of expressing the requirement of local causality. Bell thought so too. But there is no way to "prove" this. One has to simply accept it as a way of defining what it means for a theory to be local; then people can choose to accept or reject that definition. What bothers me is when people accept it in regard to hv theories, but reject it in regard to QM. That's just inconsistent.

However, I can PROVE you the requirement of locality specified by relativity on the basis of information theory.

Not really, although surely the statement "humans should never be able to communicate, i.e., transmit information, faster than light" is another somewhat reasonable definition of locality. The problem is, if you are going to define locality that way in order to prove that QM is local, Bohm's theory turns out to be local, too -- despite the fact that, in some *other* senses of "locality", Bohm's theory is rather blatantly *nonlocal*.

Again, I only really care here about consistency. If you're going to define locality in terms of "information", then you shouldn't say that Bohm's theory is nonlocal. And if you're going to define locality as Bell did, then you shouldn't say that orthodox QM is local.
 
  • #56
vanesch said:
It is in this phrase that the deterministic character of an underlying mechanism is captured exactly! (the "complete description" part)

Why ? Because you seem to claim that "whatever happens to B and whatever choice I make for b, it can not be "signalled " to A. (by the underlying mechanism). But careful: the choice of b will of course affect the result B. So you shouldn't be surprised that P(A|B) can a priori depend on b ; as long as it is done in such a way, that P(A) doesn't depend on b. (that's the integral condition)

I agree with this much: P(B) depends on b. But that's precisely why I find it so silly to argue that locality requires

P(A|B,a,b,L) = P(A|B,a,L)

but not

P(A|B,a,b,L) = P(A|a,L)

If the point is that, in a local theory, P(A|a,L) should not change when you specify the distant setting b, then shouldn't it also not change if you specify the distant outcome B? If you allow the latter sort of dependence, you are in effect smuggling in the previously-eliminated dependence on "b" for just the sort of reason you elaborate above.



So my claim is: P(A|B) does not need to be equal to P(A). I wish you could prove me its necessity. (it is, as you point out, equivalent to factorizing P(A,B) = P(A) P(B) )

Well, I certainly can't prove that P(A|B) = P(A). That would be a preposterous requirement. It would basically just assert that there is no correlation between A and B. But locality doesn't forbid correlations. It merely forbids correlations which cannot be in some way accounted for by information in the past of the two events in question. That is, the condition only makes sense if you conditionalize all probabilities involved on some complete specification of the state of the system at some prior time(slice), and if you add in possible local dependencies on things like apparatus settings:

P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L) = P(A|a,L) * P(B|b,L)

where the first equality is pure unobjectionable math, and the second involves application of Bell Locality.
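A small Monte Carlo sketch may make this concrete (my own illustration, not from the posts; the response probability p(a, lam) is invented). Conditioned on the shared past state L, the outcomes factorize exactly as above, yet averaging over L still produces correlations between A and B:

```python
import math
import random

random.seed(0)

# Bell-Local toy model: conditioned on the shared past state L (here an
# angle lam), each outcome depends only on its own setting and lam.
# The response probability p_plus is invented purely for illustration.
def p_plus(setting, lam):
    return 0.5 * (1.0 + math.cos(setting - lam))

N = 200_000
a, b = 0.0, 0.0
n_A = n_B = n_AB = 0
for _ in range(N):
    lam = random.uniform(0.0, 2.0 * math.pi)   # the shared state L
    A = random.random() < p_plus(a, lam)       # P(A|a,L): no b, no B
    B = random.random() < p_plus(b, lam)       # P(B|b,L): no a, no A
    n_A += A
    n_B += B
    n_AB += A and B

# Averaged over L, the outcomes are correlated even though the model is
# Bell Local at the level of L: P(A,B) exceeds P(A) * P(B).
print(n_AB / N, (n_A / N) * (n_B / N))
```

This is exactly the sense in which locality permits correlations: they are "accounted for" by the shared state L, and disappear once you conditionalize on it.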

I'm sure you will now say "Aha!" and assert that by "accounted for" above I really mean "deterministically accounted for". But I just don't. I am perfectly happy to allow non-deterministic laws. In fact, that's one of the nice things about this probability-based notation for expressing Bell locality. Some of Bell's papers use a notation like A(a,L) where now A is the (evidently one and only) outcome consistent with setting "a" and prior-joint-state L. That notation does imply determinism, and hence a statement of Bell Locality couched in that language would *not* be able to be applied to orthodox QM (which is stochastic). But a fairly straightforward change of notation gives the statement of Bell Locality we've been discussing, the one that is couched explicitly in stochastic terms and which therefore is entirely applicable to stochastic theories like orthodox QM.
 
  • #57
We're getting closer :smile:

ttn said:
Not really, although surely the statement "humans should never be able to communicate, i.e., transmit information, faster than light" is another somewhat reasonable definition of locality.

Well, in order to satisfy relativity, replace "humans" by "anything that can send out information" (because that's the relativity paradox you want to avoid: that you receive your own information before sending it ; on which you could base a decision to send out OTHER information, hence the paradox)

The problem is, if you are going to define locality that way in order to prove that QM is local, Bohm's theory turns out to be local, too -- despite the fact that, in some *other* senses of "locality", Bohm's theory is rather blatantly *nonlocal*.

The problem with locality is that the definition is different according to whether you work with a stochastic theory or with a deterministic theory.
In a purely stochastic theory, the only definition we can have concerning locality is of course based upon information theory.
In that sense, QM, and of course Bohm's theory considered as a stochastic theory (which gives the same stochastic predictions), are local.

Next, we can talk about the locality of mechanisms, whether or not they lead to a deterministic or stochastic theory ; and in the latter case, independent of whether the stochastic theory is local in the information theory sense.

For instance, the "collapse of the wavefunction" in QM is blatantly non local, because it affects the internal description at B when doing something at A.
However, the MWI approach gives us a local mechanism, in a very subtle way: you can only talk about a correlation when the events at A and B are both in your past lightcone, and you deny the individual existence of events at A and B until the moment when you can observe the correlations. At most you can observe one of the two.

The *hidden* variables are also subject to a non-local mechanism in Bohm's theory.

Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the gutwork to NOT respect the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism

Of course a stochastic theory without any underlying mechanism cannot be analysed for its underlying mechanism!

What's now the room for Bell Locality ? It turns out that any stochastic theory generated by a deterministic theory which respects a local mechanism, satisfies Bell locality.

Amen.

cheers,
Patrick.
 
  • #58
vanesch said:
Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the gutwork to NOT respect the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism

Two questions on this. First, do you have any links to discussions where this technical term "conspire" is introduced? And second, does this resolution of nonlocality exist in the weaker relative interpretation of MWI or do you require the literal multiple worlds?
 
  • #59
selfAdjoint said:
Two questions on this. First, do you have any links to discussions where this technical term "conspire" is introduced? And second, does this resolution of nonlocality exist in the weaker relative interpretation of MWI or do you require the literal multiple worlds?

I have to say that I use the term "conspire" as I intuitively thought it was typically used, namely that a strict principle should be obeyed, but that the underlying mechanism (whatever it is) doesn't obey it, but in such a way that it doesn't show. I don't know if there is a rigorous definition for the term.
An example that comes to mind is the "naive" QFT mass-energy of the 1/2 hbar omega terms (which is HUGE) and a corresponding cosmological constant which happens to compensate it exactly (or almost exactly). So either there is a principle that says that the effective cosmological constant must be small, or there is a "conspiracy" by which these two unconstrained contributions cancel.

Concerning your second point, I guess one can discuss it, depending on exactly what one defines as a "local mechanism". If it is sufficient to say that the correlations only make sense to an observer when the corresponding events are already in the past lightcone, as is the case in _any_ MWI-like scheme, then I would think that that is sufficient to call the mechanism "local". If, however, you require a totally local state description, then there is a problem with the Schroedinger picture, where there is one, holistic wavefunction of the universe. However, Rubin has written a few articles showing that - if I understood it well - you can get rid of that problem in the Heisenberg picture. The price to pay is that you carry with you a lot of indices which indicate your whole "entanglement history". But you carry them with you at sub-light speed.

I may have used words and definitions in my arguments here which are not 100% correct. The whole thing is of course debatable, but the intuition - to me - is clear: "local" means there is no obvious way to use the mechanism to make an FTL phone. "Non-local" means it is highly suggestive of how to make an FTL phone.
The projection postulate collapses wave functions at a distance. You would think immediately that somehow you can exploit that! It is only after doing some calculations that you find out that you can't.
We know that the stochastic predictions of QM do not allow you to make an FTL phone. That's good enough for me to call it a "local" theory. But the underlying wheels and gears may or may not suggest that FTL phones are possible (even if we know, at the end of the day, that they aren't). In such cases, I call the mechanism "non-local".

cheers,
Patrick.
 
Last edited:
  • #60
Cat said:
I had a look at this and was shocked to find that they said they'd violated the CHSH inequality but did not even mention the main "loophole" that bugs this -- the "fair sampling" one. Of course, if their detectors were perfect then it would have been irrelevant, but since it was an ordinary optical experiment this cannot have been so.

How can they justify quoting the results of this test without so much as a mention of the efficiencies involved? They say their results "cannot be described by a local realist model". If they have not closed the detection loophole they have not proved this!

Cat

Per the Rowe et al citation previously given, a lot of folks think this "loophole" is closed. Once the rest of us close a loophole, it is not necessary to repeat something that no longer applies. They also don't mention what day of the week the test was performed on because that doesn't matter either.

On the other hand, there are "certain" local realists out there who deny nearly every aspect (pun intended) of Bell tests. Some of us have come to the conclusion that a debate on the matter is a waste of time and effort.

BTW, I don't have a subscription to PRL. I could not find a link anywhere to the Walther article. Anyone find such?
 
  • #61
ttn said:
If the point is that, in a local theory, P(A|a,L) should not change when you specify the distant setting b, then shouldn't it also not change if you specify the distant outcome B?

I would like to add something here. Note that these discussions help me clear up my ideas too; I hope that is the case for you as well! They bring up things I didn't think about before.

So the point you raise is an interesting one, and probably comes from the fact that I consider A and B as "coming out of the system" while a and b are inputs, somehow "arbitrarily determined by free will" at A and B. So "a" and "b" are sources of information, while A and B are information receivers.
A is a local receiver at a, so if any statistic of A depended on b, I would have an information channel. But in order for A|B to be an information channel, I have to know both A and B, so in any case I have to solve another communication problem between A and B. At that point I should have no difficulty using the sources at a and b.

There is a difference in meaning between P(A|B ; a, b, ...) and P(A ; B, a, b, ...).
The first is a probability defined as P(A,B)/P(B), so it is a quantity derived from the correlation function P(A,B). The second has no meaning, because B is an event, not a parameter describing the distribution, as a and b are.

P(A,B)/P(B) has the frequentist interpretation of "the relative frequency of the events A in the subsample where we had B". A priori, it is somehow clear to me that this can depend on all that has to do with A and with B, because in order to _measure_ this quantity I have to have a coincidence counter, wired up with A and with B.
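As a toy illustration of this frequentist reading (my own sketch; the common-cause and dark-count probabilities are invented), one can estimate P(A|B) directly as a ratio of coincidence counts:

```python
import random

random.seed(1)

# Toy coincidence counting: estimate P(A|B) as N(A and B) / N(B), i.e.
# the relative frequency of A within the subsample where B fired.
# The probabilities (0.5 common cause, 0.1 local noise) are invented.
trials = 100_000
n_B = n_AB = 0
for _ in range(trials):
    shared = random.random() < 0.5          # common cause in the past
    A = shared or (random.random() < 0.1)   # local "dark count" at A
    B = shared or (random.random() < 0.1)   # local "dark count" at B
    if B:
        n_B += 1
        if A:
            n_AB += 1

print(n_AB / n_B)  # ≈ 0.92, well above P(A) ≈ 0.55: A and B are correlated
```

Measuring this quantity really does require wiring A and B into a coincidence counter, just as the post says: the ratio only exists relative to the subsample where B fired.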

cheers,
Patrick.
 
  • #62
I don't quite agree; there are several known loopholes to Bell's theorem. Usually they are easily dismissed by contriving a counterexample that exploits local symmetry principles (isospin and things like that). However, it could be the case that those local symmetries are broken at fundamental levels (see Planckian regimes). 't Hooft and several string theorists (Vafa etc.) have exploited this in devising hidden variable theories that get by the usual objections. The former has to resort to information loss, the latter in general to quasi-local variables found in stringy physics.

The usual problem there is retrieving completely unbroken unitarity and managing to get a bounded Hamiltonian.

All those programs have amounted to more or less zero, as the dynamics of any such theory is atrociously complicated, but the idea or possibility is there.
 
  • #63
vanesch said:
Note that these discussions help me too clearing up my ideas, I hope it is also your case ! It brings up things I didn't think about before.
Me too!

One thing I've been thinking about is how much easier life would be if one simply accepted the two sides as being independent (once L is fixed) and treated the matter of their coincidence probability just as you would treat the problem of achieving two 6's, say, with a pair of dice. With dice, it surely would not occur to you to do the operation in two stages, using conditional probabilities? You'd simply multiply the two separate probabilities.
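The independent-dice picture can be checked in one line of arithmetic, or by a quick simulation (my own sketch, not from the posts): the joint probability of two sixes really is just the product of the separate probabilities.

```python
import random

random.seed(4)

# Two independent fair dice: the chance of double sixes is the product
# of the separate chances, (1/6) * (1/6) = 1/36, with no conditional
# probabilities needed.
N = 360_000
double_six = sum(
    random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    for _ in range(N)
)
print(double_six / N)  # ~ 1/36 ≈ 0.0278
```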

I've begun to work on an analogy based on this idea. I don't think the actual Bell test experiments can be modeled without making some allowance for the geometry -- the fact that we are dealing with angles, so that the addition of 2 pi to every "setting" makes no difference. Instead of a die we'd need one of those little hexagonal tops, with the sectors numbered consecutively. The "hidden variables" might be little weights attached to particular segments (the same for each -- it makes things easier and does not affect the final logic to assume the same rather than "opposite" or "orthogonal") and the "detector settings" could correspond to specified ranges of results. We could have, for example, (1 or 2) scoring + for A, while (2 or 3) scores + for B. If the little weight is fixed at 2, we can have a fully deterministic experiment if it is so heavy that the top always stops at 2 ... [to be continued]

Cat
 
  • #64
vanesch said:
Well, in order to satisfy relativity, replace "humans" by "anything that can send out information" (because that's the relativity paradox you want to avoid: that you receive your own information before sending it ; on which you could base a decision to send out OTHER information, hence the paradox)

As I said before, there is something to be gained from analyzing "locality" in terms of information transfer. But it is also a dangerous game, mostly because "information" is a dangerously fuzzy, human-centered concept. Here is Bell's comment:

"Do we then have to fall back on 'no signalling faster than light' as the expression of the fundamental causal structure of contemporary theoretical physics? That is hard for me to accept. For one thing we have lost the idea that correlations can be explained, or at least this idea awaits reformulation. More importantly, the 'no signalling' notion rests on concepts which are desperately vague, or vaguely applicable. The assertion that 'we cannot signal faster than light' immediately provokes the question:
Who do we think *we* are?
*We* who can make 'measurements,' *we* who can manipulate 'external fields', *we* who can 'signal' at all, even if not faster than light? Do *we* include chemists, or only physicists, plants, or only animals, pocket calculators, or only mainframe computers?"

I'm sure you get the point. Bohmian mechanics is, yet again, a clarifying example here. Part of Bell's point, surely, is that what relativity really requires, if you are going to take it seriously, is *more* than a mere no-signalling condition. *That's* why people are unwilling to accept Bohm's theory as consistent with relativity, even though it too doesn't permit signalling -- the behind-the-scenes nonlocality is just too obvious. But then, exactly the same thing is true in orthodox QM. If you take the wf seriously as a complete description of reality, the collapse of the wf is just as nonlocal as anything in Bohm's theory. And both violate the cleanly-formulated "Bell Locality" test.



The problem with locality is that the definition is different according to whether you work with a stochastic theory or with a deterministic theory.
In a purely stochastic theory, the only definition we can have concerning locality is of course based upon information theory.
In that sense, QM, and of course Bohm's theory considered as a stochastic theory (which gives the same stochastic predictions), are local.

I still don't understand why you think there's an important distinction here. I thought of a perhaps clarifying example to discuss, though maybe you beat me to the punch with your comment about Bohm's theory "considered as a stochastic theory". But I don't understand exactly what you're getting at there, so I'll throw my example out and see what happens.

Consider Bohm's theory: Sch's equation plus a "guidance formula" specifying particle velocities in terms of the wf. Now add a small random noise term to the guidance formula -- on average, particles will still go where Bohm's theory says they should, only now they'll occasionally deviate by just a little bit. This noise is meant to be completely random (but Gaussian about zero and pretty narrow so it keeps deviations from well-tested QM predictions below the level at which they could be detected). Make sense?

The question is: for this modified Bohm theory, does anything really change in regard to its locality? The theory is now fundamentally stochastic instead of deterministic. Yet it seems to still blatantly violate our notions of local causality -- in particular, the particle velocities still depend on the simultaneous positions of other (entangled) particles. So the theory will still violate "Bell Locality" and I think anyone who looked at it would have no trouble seeing that it was (in pretty much any sense other than "signalling") quite blatantly nonlocal.

Do you agree that this would be an example of a stochastic theory to which the notion of Bell Locality is perfectly applicable?



Next, we can talk about the locality of mechanisms, whether or not they lead to a deterministic or stochastic theory ; and in the latter case, independent of whether the stochastic theory is local in the information theory sense.

Sure, you can talk about that. But when you come to QM, you'll end up playing the same semantic games as before, I suspect. QM has no underlying mechanism (I suspect you'll want to say), hence there is no nonlocality in its underlying mechanism.

But this is just trading on fuzziness over what is meant by "mechanism". Sure, QM lacks a clear detailed ontology that allows you to understand what's going on behind the scenes, i.e., you might say, it lacks a mechanism. But in another sense, QM is perfectly clear. It says: there is nothing going on behind the scenes; the wf is the whole story, a complete description of the state of a system at any moment. And when you make a measurement, the wf -- i.e., the state of the system -- suddenly and randomly jumps into an eigenstate of the operator measured.

My question is: why not just take QM at its word and accept *this* as its mechanism?? It is, after all, what QM says the mechanism is! I mean, it's a pretty strange and fuzzy and non-mechanical mechanism, but if that bothers you you should reject the story on that grounds, not turn it into a point in QM's favor, a get-out-of-jail-free card.


For instance, the "collapse of the wavefunction" in QM is blatantly non local, because it affects the internal description at B when doing something at A.
However, the MWI approach gives us a local mechanism, in a very subtle way: you can only talk about a correlation when the events at A and B are in the past lightcone and you deny the individual existence of events at A and B until at the moment where you can observe the correlations. At most you can observe one of both.

Yes, according to your MWI, the only things that really exist are in your mind -- so in fact there aren't any spatially separated physical objects to interact nonlocally (or locally for that matter) in the first place. So, um, sure, I guess that counts as local.



The *hidden* variables are also subject to a non-local mechanism in Bohm's theory.

No doubt. As shown most cleanly by the fact that Bohm's theory violates Bell Locality. (See? Bell Locality really is a nice litmus test for whether a theory is "locally causal." Bohm's theory isn't.) But then, as you're probably all tired of hearing me say, orthodox QM violates Bell Locality too.




Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the gutwork to NOT respect the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism

OK, I think we're in agreement here. Bohm's theory and orthodox QM both "conspire" in some sense -- there is a non-local mechanism which is somehow washed out by uncertainty or randomness to prevent that nonlocal mechanism from being used to transmit information.


Of course a stochastic theory without any underlying mechanism cannot be analysed for its underlying mechanism!

But you can still ask if such a theory violates Bell Locality.

Perhaps it's the word "underlying" that is causing (err, spontaneously and inexplicably correlating with?) trouble. In Bohm's theory, there is a pretty clear distinction of "levels" between the level of prediction and the "underlying" level of definite particle trajectories, etc. In QM, the level of prediction and the level of "exact and complete specification of the state of the world" are pretty much one and the same. But again, it's just cheap semantics to insist on a clean difference between two levels, in order to then dismiss Bell Locality as inapplicable to (say) QM on the grounds that it has no "underlying" levels. Bell Locality is stated/defined in terms of an "exact and complete specification of the state of the system" -- the thing we've been calling "L" that all the relevant probabilities are conditioned on. There is no requirement that that "L" be "underlying" or anything like that. So again, I would advocate just taking QM straight (e.g., letting the wf play the role of "L"), and taking Bell Locality straight. Don't twist words and make subtle distinctions that are not made in or required by these ideas.

Then you won't have to worry about distractions like deterministic vs. stochastic and "underlying".
 
  • #65
ttn said:
... Bell Locality is stated/defined in terms of an "exact and complete specification of the state of the system" -- the thing we've been calling "L" that all the relevant probabilities are conditioned on. There is no requirement that that "L" be "underlying" or anything like that. So again, I would advocate just taking QM straight (e.g., letting the wf play the role of "L"), and taking Bell Locality straight. Don't twist words and make subtle distinctions that are not made in or required by these ideas.
That won't quite work, though, since the wave function applies to an ensemble of particles and L applies to one particular one (or, in our case, two, both having arisen from the one source in state L).

Then you won't have to worry about distractions like deterministic vs. stochastic and "underlying".
To continue my analogy, this little hexagonal top can be made either fully deterministic (if the biasing weight is heavy) or stochastic, if it is lighter. Attaching the weight at the "2" position may, if it is heavy, cause the top always to come to rest with the 2 down, but if it is light then it will merely cause bias, the degree depending on the actual weight. There will be the highest probability of a 2, a lesser chance of a 1 or 3, and very little chance of the other scores.

And clearly if we have two such tops, coming from the same factory and with the same fault but spun independently, and define detector settings as I suggested, we have your "Bell locality" and can multiply the individual probabilities of success to get the joint probability.

If I persevere, I think I'll be able to demonstrate that the "coincidences" don't form a "fair sample" ... I'll have to define what I mean by '-' results, though, as well as '+':

If, as in the previous message, the values 1 or 2 count as '+' for A, then the opposite sectors (which will be 4 and 5, since we number them sequentially) I define as counting '-'. Under this scheme, if I've got it right, when B is set "parallel" to A (i.e. it also scores either + or - when the top lands with 1, 2, 4 or 5 down, but fails to score anything when it lands on 3 or 6), you get a lot more coincidences than when they are set one unit apart (effectively the only other option in this simple scheme). In the deterministic version, there would (I think) be twice as many coincidences in the parallel case as compared to other orientations.

Perhaps I'm getting carried away, though!

Questions:

(a) Is the variation of coincidence probability in itself sufficient to show that we have not got a fair sample?

(b) Is this a convincing analogy for a Bell setup?

(c) Can we squeeze a "Bell inequality" out of it that can be compared with any QM prediction?

Ah well, probably not, so the exercise was a waste of time from that point of view. I think it might be helpful, though, for illustrating stochastic vs. deterministic models and for helping us to escape from the use of conditional probabilities.

Cat
 
  • #66
Cat said:
That won't quite work, though, since the wave function applies to an ensemble of particles and L applies to one particular one (or, in our case, two, both having arisen from the one source in state L).

Well, it's true that identifying Bell's "L" with the QM wf requires the assumption that the QM wf is a complete description of the relevant part of the world. So when I say things like "QM violates Bell Locality" what I mean is "QM, so long as one accepts Bohr's completeness doctrine and hence regards the wf as a complete description of a system, violates Bell Locality."

If, on the other hand, one wishes to reject the completeness assumption and regard psi as merely an average or collective description of an ensemble of similar but not identical systems, then you're right, this identification doesn't work. Two things follow. 1, one needs a *different* (and in fact much less trivial) argument to show that a hidden variable theory (i.e., the kind of theory one is led to when one rejects completeness) must also violate Bell Locality; this argument is of course Bell's theorem. And 2, EPR were exactly correct. They didn't prove that QM was incomplete, and they didn't prove that it violated locality; but they did prove it was *either* nonlocal or incomplete.



(b) Is this a convincing analogy for a Bell setup?

(c) Can we squeeze a "Bell inequality" out of it that can be compared with any QM prediction?

I don't think so. The results of two dice rolls will always be statistically independent unless there is some "mechanism" by which the result of one roll can affect the result of the other. Merely making one or the other "biased" in some way isn't at all the same as "linking" them. So, as long as they are independent, you will never find that the correlations violate a Bell inequality.
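This can be checked numerically. For a shared-hidden-variable model of the kind Cat describes, with each outcome a local function of its own setting and the common "factory" state (the sign-of-cosine response rule below is my own illustrative choice, not Cat's top), the CHSH combination can saturate, but never exceed, the local bound of 2:

```python
import math
import random

random.seed(2)

# Deterministic local model: each side's outcome (+1 or -1) depends only
# on its own setting and the shared hidden state lam. The response rule
# is an invented illustration.
def outcome(setting, lam):
    return 1 if math.cos(setting - lam) >= 0 else -1

def E(a, b, n=100_000):
    """Monte Carlo estimate of the correlation E(a, b)."""
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)  # shared "factory" state
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ~2.0: saturates, but never exceeds, the CHSH bound |S| <= 2
```

Quantum mechanics predicts |S| up to 2*sqrt(2) for suitably chosen angles, which is precisely what no model of this independent, locally-responding kind can reproduce.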
 
  • #67
ttn said:
I still don't understand why you think there's an important distinction here. I thought of a perhaps clarifying example to discuss, though maybe you beat me to the punch with your comment about Bohm's theory "considered as a stochastic theory". But I don't understand exactly what you're getting at there, so I'll throw my example out and see what happens.

A quick reaction (I don't have much time right now): I may have expressed myself badly, leading you to misunderstand what I tried to say.

When I say "Bohm's theory considered as a stochastic theory" I mean Bohm's theory, as a black box, out of which come probabilities for observation P(A), P(A,B) etc... I didn't mean "turn Bohm into a stochastic theory". It IS, at the end of the day, exactly the same stochastic theory as quantum theory (also seen as a black box out of which come probabilities P(A), P(A,B)...) or so I understood, if it is 100% equivalent.

Any qualifier based upon the probabilities must then be of course exactly the same for both theories. For example I call them "relativistically local" and you call them Bell-non-local. We agree upon that point.

To me, a stochastic theory is a black box out of which come prescriptions for calculating probabilities of events. Nothing is said about how these probabilities come about.

So, given the "description of the experiment", we have the function:
P(A,B ; a, b)
(out of which all other probabilities can be derived).
Note that we cannot include an explicit "state description" in these probabilities, because it is inside the black box. The only thing we can specify is the "description of the experiment": a laser beam here, a PDC xtal there, etc. You already see a difficulty in specifying "Bell Locality" here without "opening the black box", but I have no problem defining my "relativity locality".

We can now open the black box and look at the formalism that gives us these probabilities. If somehow it is assumed that parts of the formalism correspond to a physical reality, then we have a MECHANISM.
It can also be that the formalism does not correspond to something describing a physical reality. In that case the black box remains black. Some people see QM as such. There's nothing to be said against that (except that it is a bit disappointing for a physical theory).

A deterministic theory gives us an underlying mechanism such that, if we were to know all the internal degrees of freedom, only probabilities 1 and 0 would come out.
There are different ways to make shortcuts here: we can use these internal degrees of freedom to specify non-trivial probability distributions, and we can "hide" internal degrees of freedom. If we hide internal degrees of freedom, then we can always ADD others to generate the non-trivial probability distributions. So I do not see the point of making non-deterministic hidden-variable theories, because they are always equivalent to another, deterministic one.
However, there are good reasons to have non-hidden-variable stochastic theories with a mechanism. In fact our big black box then becomes a structure containing "smaller black boxes" which are by themselves generators of probabilities without any underlying mechanism. Quantum theory, with a physical interpretation of the wave function, is such a case.
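The equivalence claimed here can be sketched concretely (my own illustration; the response probability p_plus is invented): any stochastic hidden-variable model is reproduced exactly by a deterministic one that simply absorbs the randomness into one extra hidden variable u, uniform on [0, 1).

```python
import random

random.seed(3)

# Invented response probability for outcome +1, given setting a and
# hidden variable lam in [0, 1).
def p_plus(a, lam):
    return 0.25 + 0.5 * lam if a == 0 else 0.75 - 0.5 * lam

def stochastic_outcome(a, lam):
    # Stochastic model: irreducibly random given (a, lam).
    return random.random() < p_plus(a, lam)

def deterministic_outcome(a, lam, u):
    # Deterministic model: the extra hidden variable u fixes the outcome.
    return u < p_plus(a, lam)

# Both models yield the same statistics for any setting.
N = 100_000
freq_stoch = sum(stochastic_outcome(0, random.random()) for _ in range(N)) / N
freq_det = sum(
    deterministic_outcome(0, random.random(), random.random())
    for _ in range(N)
) / N
print(freq_stoch, freq_det)  # both estimate the same P(+|a=0)
```

Since no observation can distinguish the two, the stochastic hidden-variable model buys nothing over its deterministic counterpart, which is the point being made.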

The locality or non-locality of a mechanism is harder to define in all generality because of the variety of mechanisms. But if something "happening here" does something to the physical description "over there" then it is non-local.
As you point out, the collapse of the wave function in Copenhagen QM is non-local if you attach a physical reality to the wavefunction. Also the HV in Bohm is non-local if you attach a physical reality to the HV (and honestly, what's the point of introducing HV if you do not attach a physical reality to them ?)

cheers,
patrick.
 
  • #68
vanesch said:
When I say "Bohm's theory considered as a stochastic theory" I mean Bohm's theory, as a black box, out of which come probabilities for observation P(A), P(A,B) etc... I didn't mean "turn Bohm into a stochastic theory". It IS, at the end of the day, exactly the same stochastic theory as quantum theory (also seen as a black box out of which come probabilities P(A), P(A,B)...) or so I understood, if it is 100% equivalent.

Any qualifier based upon the probabilities must then be of course exactly the same for both theories. For example I call them "relativistically local" and you call them Bell-non-local. We agree upon that point.

Yup.


To me, a stochastic theory is a black box out of which come prescriptions for calculating probabilities of events. Nothing is said about how these probabilities come about.

So, given the "description of the experiment", we have the function:
P(A,B ; a, b)
(out of which all other probabilities can be derived).
Note that we cannot include an explicit "state description" in these probabilities, because it is inside the black box. The only thing we can specify is the "description of the experiment": a laser beam here, a PDC xtal there, etc... You already see a difficulty in specifying "Bell Locality" here without "opening the black box", but I have no problem defining my "relativity locality".

I don't think this is right. You say you treat the theory as a black box and that there's really no way to include an explicit 'state description' in the probabilities. But surely you know that you must do that, in QM, in order for the probabilities to be defined. You don't/can't just calculate "P(A,B|a,b)" in QM -- rather, you calculate P(A,B|a,b,psi). If nobody tells you what state the system is prepared in, there is no way to predict using QM what the probabilities of various measurement outcomes are. QM may be a black box, but it isn't as much of one as you imply here. It *does* contain "state descriptions" and these play an absolutely essential role in its ability to predict (probabilities for) outcomes of experiments.
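For concreteness, here is the standard textbook calculation this alludes to, as a small NumPy sketch (the function names are mine): the Born-rule probabilities P(A,B|a,b,psi) for a spin singlet, which simply cannot be evaluated without psi as an input.

```python
import numpy as np

def up(theta):
    # spin-up eigenvector for a measurement along angle theta (x-z plane)
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def down(theta):
    # the orthogonal spin-down eigenvector
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)])

# the state preparation psi: singlet (|+-> - |-+>)/sqrt(2)
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def P(out_a, out_b, a, b):
    """Born rule: P(A,B | a, b, psi) -- note that psi is indispensable."""
    va = up(a) if out_a == +1 else down(a)
    vb = up(b) if out_b == +1 else down(b)
    amp = np.kron(va, vb) @ psi
    return float(amp) ** 2

# perfect anticorrelation at equal settings; (1/2) sin^2((a-b)/2) in general
print(P(+1, +1, 0.0, 0.0))        # 0.0
print(P(+1, +1, 0.0, np.pi / 3))  # 0.125
```

Strike psi from the argument list and the function has nothing left to compute with, which is the point: the "state description" is doing essential work.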


We can now open the black box and look at the formalism that gives us these probabilities. If somehow it is assumed that parts of the formalism correspond to a physical reality, then we have a MECHANISM.
It can also be that the formalism does not correspond to something describing a physical reality. In that case the black box remains black. Some people see QM as such. There is nothing one can say against that (except that it is a bit disappointing for a physical theory).

I thought what I said against it before was pretty good. :smile:

EPR asked: Can the quantum-mechanical description of reality be considered complete? They said no, Bohr said yes. I don't think there was any debate about whether quantum state descriptions refer to something in reality (though nowadays one can find people arguing for any nonsense, even this). What does the completeness doctrine even *mean*, if it isn't that the wave function alone provides a complete description of reality?

So in the sense you are talking about "mechanisms" in the above paragraph, QM has just as much mechanism as Bohm's theory. They both claim to provide a complete picture of what is real at any given moment. And on that basis they have some rule for calculating probabilities of various things.

So again I see no fundamental difference. Both the mechanisms violate Bell Locality, yet this underlying nonlocal causality is washed out by uncertainty (in the case of Bohm) and irreducible indeterminism (in the case of QM) at the level of measurement results, thus preventing its being used for superluminal telephones.


There are different ways to take shortcuts here: we can use these internal degrees of freedom to specify non-trivial probability distributions, and we can "hide" internal degrees of freedom. If we hide internal degrees of freedom, we can always ADD others to generate the non-trivial probability distributions. So I do not see the point of making non-deterministic hidden-variable theories: they are always equivalent to a deterministic one.

Maybe you're right; I'm not sure. But your point is only that it would be silly to construct a stochastic hv theory, not that it is really impossible in principle. But I wasn't seriously advocating that one ought to construct such a theory; I was just pointing out that it was possible to build one, and that the mere addition of randomness in the theory doesn't in any way preclude one from identifying the resulting theory as nonlocal.


However, there are good reasons to have non-hidden-variable stochastic theories with a mechanism. In that case our big black box becomes a structure containing "smaller black boxes" which are by themselves generators of probabilities without any underlying mechanism. Quantum theory, with a physical interpretation of the wave function, is a case in point.

Is there a quantum theory without a physical interpretation of the wf? I know people (e.g., the Fuchs and Peres "opinion" article that appeared in Physics Today a few years ago) talk about the wf as purely/merely epistemological, but this is blatantly in contradiction with the completeness doctrine (that such people also tend to advocate), isn't it?

The locality or non-locality of a mechanism is harder to define in all generality because of the variety of mechanisms. But if something "happening here" does something to the physical description "over there" then it is non-local.
As you point out, the collapse of the wave function in Copenhagen QM is non-local if you attach a physical reality to the wavefunction.

Precisely. And if you don't "attach a physical reality to the wf" -- i.e., if you think the wf represents mere knowledge of some state that is, in physical reality, perfectly definite -- then you have abandoned completeness. And that means you believe in a hidden variable theory instead of QM. And that means (because of Bell's theorem) that you haven't successfully gotten around quantum nonlocality! ...which is really the point I want to stress: the choice between orthodox QM and (say) Bohmian mechanics is a choice between two equally-nonlocal theories. The nonlocality cannot be escaped, and is hence no reason to support QM as against Bohm.
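A quick numerical check of the "cannot be escaped" claim (standard CHSH setup; the angles are the usual optimal ones, nothing here is specific to either theory): the singlet correlation E(a,b) = -cos(a-b), which orthodox QM and Bohmian mechanics both predict, exceeds the bound of 2 that any Bell-local model must satisfy.

```python
import math

def E(a, b):
    # singlet correlation <A*B> for analyzer angles a and b
    return -math.cos(a - b)

# the standard CHSH angle choices
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828..., i.e. 2*sqrt(2), above the Bell-local bound of 2
```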


Also the HV in Bohm is non-local if you attach a physical reality to the HV (and honestly, what's the point of introducing HV if you do not attach a physical reality to them ?)

I certainly can't think of any!
 
  • #69
ttn said:
I don't think this is right. You say you treat the theory as a black box and that there's really no way to include an explicit 'state description' in the probabilities. But surely you know that you must do that, in QM, in order for the probabilities to be defined. You don't/can't just calculate "P(A,B|a,b)" in QM -- rather, you calculate P(A,B|a,b,psi). If nobody tells you what state the system is prepared in, there is no way to predict using QM what the probabilities of various measurement outcomes are. QM may be a black box, but it isn't as much of one as you imply here. It *does* contain "state descriptions" and these play an absolutely essential role in its ability to predict (probabilities for) outcomes of experiments.

Ah, I think the real issue here is the term "completeness", and not "locality". I have to say I don't know what it means, except "a potentially deterministic underlying mechanics".
Because what stops me from regarding the precise description of the experiment as "complete"? A laser here, a PDC there, etc... In "complete" I include everything I am potentially allowed to know, but not things that I cannot, in principle, know, such as hidden variables. You can write it down in 20 pages of text, but the quantum-mechanical wavefunction does exactly that: it is the unique state of which I'm supposed to know everything I can know (a complete set of commuting observables determines it).

If out of such a description there still comes a series of probabilities different from 0 or 1, I call such a theory fundamentally stochastic, because there is no way, in principle, to reduce the randomness. But *this* seems to be what one objects to when one requires "completeness".

EPR asked: Can the quantum-mechanical description of reality be considered complete? They said no, Bohr said yes. I don't think there was any debate about whether quantum state descriptions refer to something in reality (though nowadays one can find people arguing for any nonsense, even this). What does the completeness doctrine even *mean*, if it isn't that the wave function alone provides a complete description of reality?

Yes, I agree that the wavefunction is supposed to give a complete description of reality in QM. Such as would be those 20 pages of text describing in detail the experimental setup. The wavefunction is the translation, in the mathematical formalism, of those 20 pages.

You are free to say that that 20 page text is "an element of reality". Personally, I also think that there must be something "real" to it (and hence want to tell a story = interpretation), but many people just see it as a "generator of statistics". In that view, I don't know how you apply Bell locality for example, because obviously:

P(A,B| a, b, 20 pages) is not equal to P(A|a, 20 pages) x P(B|b, 20 pages)

Indeed, that wouldn't even allow you to have classical correlations! Nevertheless those 20 pages are a full, complete description of what we are supposed to know about the experiment.
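A trivial classical illustration of this point (toy code of my own construction): a common cause lam correlates the two sides, so P(A,B | setup) does not factorize even though nothing non-local is going on; it only factorizes once you condition on lam itself, which is exactly the extra conditioning that Bell's lambda supplies.

```python
import random

rng = random.Random(1)
n = 200_000

both = a_only = b_only = 0
for _ in range(n):
    lam = rng.randint(0, 1)   # common cause, fixed at the source
    A = lam                   # each side just reads off lam locally
    B = lam
    both += (A == 1 and B == 1)
    a_only += (A == 1)
    b_only += (B == 1)

P_AB = both / n   # ~0.5
P_A = a_only / n  # ~0.5
P_B = b_only / n  # ~0.5
# P(A,B) ~ 0.5 but P(A)*P(B) ~ 0.25: no factorization given only the setup.
# Given lam, though, P(A,B|lam) = P(A|lam)*P(B|lam) trivially (each is 0 or 1).
```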

So in the sense you are talking about "mechanisms" in the above paragraph, QM has just as much mechanism as Bohm's theory. They both claim to provide a complete picture of what is real at any given moment. And on that basis they have some rule for calculating probabilities of various things.

So again I see no fundamental difference. Both the mechanisms violate Bell Locality, yet this underlying nonlocal causality is washed out by uncertainty (in the case of Bohm) and irreducible indeterminism (in the case of QM) at the level of measurement results, thus preventing its being used for superluminal telephones.

Well, what I wanted to show, in an MWI story that goes with QM, is that there is no underlying nonlocal causal mechanism. There is maybe a kind of "holistic description" (such as the wavefunction of the universe), but it is the OBSERVER who, on each of his observations, has to make a choice between branches (and hence introduces the apparent randomness in his observations). As the observer is essentially "local" to himself, there is no way for him to influence anything remotely. If he travels from A to B, then at first he only knows about A, and so determines a probability P(A) at that moment and "registers" the entanglement branch which he chose, but B is "still in the air", in that the measurement apparatus at B has just gotten entangled with B and is in the two possible states it can be in. It is only when that event B gets into the past lightcone of the observer that he has a chance of reading the apparatus, meaning looking at THAT branch of the apparatus which corresponds to his registering of his branch at A. Now either the apparatus is in a pointer state (which means that we had equal settings a and b), or the apparatus is still in a superposition within that branch, upon which he makes again a choice, and registers a second branching.

It is important to notice that nothing "happened" to the apparatus, or to B, in all these cases. It is just the *observer* who made choices. And when you look at it this way, you're NOT tempted to make FTL phones. Maybe you also see my insistence on the fact that P(A,B) shouldn't be constrained to be factorisable: indeed, at the moment when P(A,B) makes sense, namely when the observer has to make his choice for the result of B, he already has everything in his pocket about A and now about B.

Maybe you're right; I'm not sure. But your point is only that it would be silly to construct a stochastic hv theory, not that it is really impossible in principle. But I wasn't seriously advocating that one ought to construct such a theory; I was just pointing out that it was possible to build one, and that the mere addition of randomness in the theory doesn't in any way preclude one from identifying the resulting theory as nonlocal.

As I said, a stochastic theory CAN have structure, and then you can analyse that structure for locality. But you can, if you wish, just see it as a generator of statistics too.
I don't see the point, however, of going and postulating hidden variables (which is by itself ugly, no?) while keeping randomness. The original reason for introducing hidden variables was, I thought, to _explain_ randomness.
But of course you're free to do so.

Is there a quantum theory without a physical interpretation of the wf? I know people (e.g., the Fuchs and Peres "opinion" article that appeared in Physics Today a few years ago) talk about the wf as purely/merely epistemological, but this is blatantly in contradiction with the completeness doctrine (that such people also tend to advocate), isn't it?

Precisely. And if you don't "attach a physical reality to the wf" -- i.e., if you think the wf represents mere knowledge of some state that is, in physical reality, perfectly definite -- then you have abandoned completeness. And that means you believe in a hidden variable theory instead of QM. And that means (because of Bell's theorem) that you haven't successfully gotten around quantum nonlocality! ...which is really the point I want to stress: the choice between orthodox QM and (say) Bohmian mechanics is a choice between two equally-nonlocal theories. The nonlocality cannot be escaped, and is hence no reason to support QM as against Bohm.

Ah, this "completeness" looks more and more to be a "realist" condition.
And yes, QM in an MWI-like setting is not very "realist" in that observations do not determine the external world, but rather the state of the observer in relation to the external world (which is vastly more complex: we have ONE TERM in the wavefunction given by our observations, while they all "exist", whatever that may mean).

So it seems that the vague term (to me) is not locality but "completeness"...
I would naively think that a theory is "complete" if we can get out of it, as predictions (if it is stochastic: in the information-theoretic sense), the maximum that we are fundamentally allowed to get out, so that you cannot do any better.
In that sense, I don't know how "completeness" of QM has anything to do with whether we consider the wavefunction as real. And Bohm and QM are of course equally complete, because they give us, as black boxes, the same probability functions of the parameters we're allowed to choose freely, namely P(A,B ; a,b).
In Einstein's view, of course, there couldn't be any stochastic theory, so a complete theory, to him, had to mean a deterministic theory (and yes, then all probabilities are 0 or 1 and hence you get more information out; but you then have the problem that the hidden variables cannot stay hidden forever).

But apparently, completeness means now something totally different, so can you enlighten me ?

cheers,
Patrick.
 
  • #70
vanesch said:
P(A,B| a, b, 20 pages) is not equal to P(A|a, 20 pages) x P(B|b, 20 pages)

Indeed, that wouldn't even allow you to have classical correlations! Nevertheless those 20 pages are a full, complete description of what we are supposed to know about the experiment.

I would like to elaborate a bit more on this. I'll try to give an example illustrating what is so different between a stochastic theory and a deterministic one. It hasn't got anything to do with EPR or QM, but I would like to "attack Bell locality".

Imagine that I have a system which sends out two little balls, one to each detector; upon emission they are blue, but due to an inherent reaction inside, they turn red or black. Imagine now that it is IN PRINCIPLE impossible to know the details of this inherent reaction. You have to accept such a possibility in the framework of a stochastic theory; in a deterministic theory, however, you can object: indeed, something inside must "know" whether the ball turns black or red.

But assuming that this is purely stochastic, this "inside" is not part of a complete description, because you have no access to it. The complete description is simply that out of the experiment come two blue balls. There is nothing more I can say. Those balls have been analysed in all possible ways, they turn out to be identical. There is no measurement I can perform to show me which ball will turn out to become black, and which one will become red.

Imagine now that my theory predicts that of two blue balls generated, one always turns red, and the other black. We don't know why, and it is simply in principle impossible to know why, but it is so. A basic axiom of my theory is that a generator of 2 blue balls always has one that turns black and the other that turns red.
Then P(A,B ; 2 blue balls) has the following values (A = black or red, B = black or red).

P(black,black ; 2 blue balls) = 0
P(red, red ; 2 blue balls) = 0
P(black, red ; 2 blue balls) = 0.5
P(red, black ; 2 blue balls) = 0.5

P(A = black ; 2 blue balls) = P(A = red ; 2 blue balls) = P(B = black ; 2 blue balls) = P(B = red ; 2 blue balls) = 0.5

Clearly "2 blue balls" is a complete description of the setup in that I cannot know more.
Clearly, P(A,B) is not equal to P(A) x P(B)

And I didn't introduce any non-local mechanism!

There is no issue of relativistic locality, because there wasn't even a free choice that could send information!

So where does Bell locality indicate non-locality?

Aren't you tempted to say "there must be an underlying (deterministic?) mechanism that shows me which ball will turn red"? But if no such mechanism is postulated, how do we conclude anything about non-locality?
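The blue-ball model is easy to put into code (a sketch; the sampling rule just encodes the stated axiom), and the simulated frequencies reproduce the probability table exactly:

```python
import random

rng = random.Random(0)
n = 100_000
counts = {("black", "black"): 0, ("black", "red"): 0,
          ("red", "black"): 0, ("red", "red"): 0}

for _ in range(n):
    # the axiom: of each pair of blue balls, one turns black, the other red
    pair = ("black", "red") if rng.random() < 0.5 else ("red", "black")
    counts[pair] += 1

P_A_black = (counts[("black", "red")] + counts[("black", "black")]) / n  # ~0.5
P_B_black = (counts[("red", "black")] + counts[("black", "black")]) / n  # ~0.5
P_black_black = counts[("black", "black")] / n                           # exactly 0
# P(A,B) != P(A) * P(B), yet no non-local mechanism appears anywhere in the code
```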

cheers,
Patrick.
 
