What Is an Element of Reality?

  • Thread starter: JohnBarchak
  • Tags: Element Reality

Summary
Laloe's exploration of "elements of reality" emphasizes the challenge of inferring microscopic properties from macroscopic observations, using a botanical analogy involving peas and flower colors. He argues that perfect correlations observed in experiments suggest intrinsic properties shared by particles, which cannot be influenced by external factors. The discussion highlights that these elements of reality must exist prior to measurement, since they determine outcomes regardless of experimental conditions. Critics challenge both the analogy and the concept of hidden variables, questioning their validity and relevance to quantum mechanics. Ultimately, the debate centers on whether the existence of such elements can be scientifically substantiated.
  • #31
vanesch said:
Why a story? Because stories are nice :-) Seriously, having a story helps you develop an intuition for the formalism, and also helps you out when you're confused about how to apply the formalism.

I'd go further. I'm a scientific realist. I believe there is an external, physical, objective world that exists independent of human knowledge of it. And I think the purpose of physics is to understand what the world is like. What you are here calling "telling stories" is really the process of building up an evidence-based model of reality -- just like Copernicus was "telling a story" when he said the Earth went around the sun, Maxwell and Boltzmann were "telling stories" when they predicted the distribution of molecular speeds in a gas, and just like, say, contemporary astrophysicists "tell stories" about how shockwaves propagating through infalling matter can result in supernovas.

I don't accept the idea that, in these sorts of cases, the only point of these stories is to help people develop intuition for formalism, etc. If anything, it's just the reverse: the point of the formalism is to help us figure out which story is the correct one, i.e., what the world is like. Isn't that really what science is all about?


If that "complete description" is a stochastic description, then evidently QM, and ALL its equivalent views, are, according to this definition, "Bell nonlocal".

I agree with "evidently QM ... [is] Bell nonlocal." But I don't see how this has anything to do with whether a complete description is stochastic. According to QM, the complete description is stochastic; the theory isn't deterministic. So what? It violates the Bell Locality condition regardless, and that's all that matters here.

My point was that this only has a meaning related to a causality relationship if we intend to work with an underlying deterministic statistical mechanics. If not, the fact that we do not satisfy the Bell locality condition doesn't say anything about a causal non-locality. Correlations then just "are" and do not necessarily imply any causal link. The only way to have a causal link in a purely stochastic model is by a change in local expectation values. This is of course a weaker requirement than Bell locality.

So... you're saying any non-deterministic theory is automatically consistent with Bell's local causality requirement, because such theories have no causality in them at all, and hence not even the remotest possibility of verboten non-local causality?

Talk about semantics! :-p

I think it is perfectly reasonable to talk about causality in the context of a stochastic theory. Of course, in such a theory, a complete specification of the causes of some event won't be sufficient to predict with certainty that the event occurs. That's what it means to be stochastic. But you can still talk about the probability distribution of possible events. A complete specification of the causes of a given event would then be sufficient to predict, not the exact outcome, but the exact probability distribution of outcomes. And if you're with me still, it would in addition make perfect sense to ask whether all the elements of this "complete specification of causes" are present in the past light-cone of a given event or whether, instead, some space-like separated event *changes* the probability distribution for the possible outcomes of the event in question. This, I think, is a perfectly reasonable and perfectly appropriate way of deciding whether a non-deterministic theory is locally causal. In fact, this is precisely Bell's Locality criterion.


Ok, but you should agree that the "Bell locality condition" has been designed on purpose for the Bell theorem, no ?

That's a historical question I don't know the answer to. Bell was inspired when he read about Bohm's counterexample to the no-hidden-variables "proofs" but wondered if a local hv theory was possible. Perhaps what we now know as the Bell Locality condition was the first thing he wrote down as an obvious mathematical statement of local causality. Or perhaps he tried some other things first, and only settled on "Bell Locality" when it became clear that the theorem could be based on it. Who knows. And I'm inclined to say: who cares? Bell Locality *is* a natural and reasonable way of deciding between local and nonlocal theories. So even if he did cook it up so as to be able to prove the theorem, I say: he's a great chef!


No, you're again thinking in deterministic statistical mechanics terms, this time with "added local noise". This will indeed lessen any correlations. But in a truly stochastic system, you cannot require anything about the probabilities. Everything can happen.

Stochastic doesn't mean Heraclitean. :smile: There are still laws governing what happens, only they are stochastic instead of deterministic. When you roll a fair die, "anything can happen" in the sense that you might get any of the 6 possible outcomes. But it's false that "anything can happen" in the sense that you might see a billion 3's in a row.

So... I think you can put requirements on the probabilities in a stochastic theory. Indeed, writing down specific laws the probabilities obey is precisely what a stochastic theory *does*!


Ah, but that is then if you forget again the hidden variables. Because they DO NOT obey the locality conditions (if I'm not mistaken). If I understood correctly, the Bell locality condition comes down to the observable effect of the local-expectation-values condition of an underlying hidden deterministic model, no?

The Bell Locality condition is really simple. It merely says

P(A|a,b,L) = P(A|a,L)

where "A" is some particular event (say the result of a measurement), "a" is any relevant parameters pertaining to the event (like the orientation of your SG magnets if it's a spin measurement), "L" is a complete specification of the state of the measured object across some spacelike hypersurface in the past of the measurement event, and "b" is any other junk that is spacelike separated from the measurement event. Basically the idea is: once you conditionalize on everything that could possibly affect the outcome in a local manner, specifying in addition information pertaining to space-like separated events will be *redundant* and hence won't change the probabilities.
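As an illustrative sketch (my own, not from the post): in a toy stochastic model whose outcome rule at one wing is a function of the local setting and the shared state L only, the condition above holds automatically, and conditioning on the distant setting b changes nothing. The rule below is an arbitrary made-up example, chosen only to show the structure.

```python
import random

def sample_A(a, lam, rng):
    """Toy stochastic local rule: outcome A = +/-1 with a probability that
    depends only on the local setting a and the shared state lam (never on b)."""
    p_plus = 0.5 * (1 + 0.8 * a * lam)
    return +1 if rng.random() < p_plus else -1

def estimate_p_A_plus(a, b, lam, n=200_000, seed=42):
    """Monte Carlo estimate of P(A=+1 | a, b, L=lam).  The distant setting b
    is accepted as an argument but, in this Bell-local model, never used."""
    rng = random.Random(seed)
    return sum(sample_A(a, lam, rng) == +1 for _ in range(n)) / n

lam, a = 0.5, +1
for b in (-1, +1):   # the distant setting should be (and is) irrelevant
    print(f"b = {b:+d}:  P(A=+1 | a, b, L) ~ {estimate_p_A_plus(a, b, lam):.3f}")
# Both estimates coincide: P(A|a,b,L) = P(A|a,L), i.e. Bell Locality holds.
```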



Ok, they are not observable, you will say. But then there's no point in the first place to introduce them :-)
(unless you absolutely want to get rid of solipsism ...)

Yes, that's exactly what I'll say. =) Since I tend to just disregard MWI as non-serious, I would have said the point of introducing the hidden variables was to solve the measurement problem. The usual argument against this is that, while maybe a hidden variable theory can clear up the measurement problem, the price of doing so is to introduce violations of Bell Locality into the theory, and the price is too high. Spoken by advocates of the completeness doctrine (i.e., orthodox QM), that is a preposterous and self-refuting argument, since QM itself violates Bell Locality. That is, in terms of locality, QM vs. Bohmian mechanics (say) is a wash. But since the former suffers from a measurement problem and the latter doesn't, Bohmian Mechanics is clearly the superior theory.

Of course, you'll want to bring in MWI as a third candidate. But we've been over that already...
 
  • #32
ttn said:
The Bell Locality condition is really simple. It merely says

P(A|a,b,L) = P(A|a,L)

where "A" is some particular event (say the result of a measurement), "a" is any relevant parameters pertaining to the event (like the orientation of your SG magnets if it's a spin measurement), "L" is a complete specification of the state of the measured object across some spacelike hypersurface in the past of the measurement event, and "b" is any other junk that is spacelike separated from the measurement event. Basically the idea is: once you conditionalize on everything that could possibly affect the outcome in a local manner, specifying in addition information pertaining to space-like separated events will be *redundant* and hence won't change the probabilities.

The way I read this is that the measuring apparatus is outside the scope of the relevant parameters. I do not fully understand how this associates with Bell's formalism though. When you say local, is b required to be outside the light cone? Or simply outside of L?
 
  • #33
DrChinese said:
The way I read this is that the measuring apparatus is outside the scope of the relevant parameters. I do not fully understand how this associates with Bell's formalism though. When you say local, is b required to be outside the light cone? Or simply outside of L?

"a" was meant to include any relevant facts about the apparatus used to measure "A". And yes, "b" is to be thought of as outside the past lightcone of A. The idea is just that, as I said in another post, once you've conditionalized a probability on every event on which it might depend in a local way, adding more information isn't going to change the probabilities. Such information would be either irrelevant or redundant.

Here's a really simple example. Say I put a coin in one of my hands and separate them out to my left and right, but without letting you know which hand the coin is in. If that's all you know, then you'll probably attribute a 50% probability to the proposition that the coin is in my left hand. If I then reveal that the coin is in fact in my right hand, the probability you attribute to that earlier proposition (that it was in my left hand) will jump to zero. This may seem like a violation of Bell's local causation condition, since the probability attributed to a certain event changed due to something that happened at a distant location. But it isn't. This merely brings out that we didn't start with a complete description of the state of the coin -- we forgot to conditionalize our probabilities on an appropriate "L" which, here, would obviously consist of some statement about where the coin actually was. Then the conditional probability P(left|L) would be either 1 or 0 from the very beginning, and *this* would not change when you learned whether or not the coin was in my right hand -- i.e., P(left|L) = P(left | L, right?) where by "right?" I mean the outcome of the experiment of looking for the coin in my right hand.
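The probabilities in this example are simple enough to tabulate; here is a small illustrative sketch (my own, not from the post) that computes them by enumerating the two possible "worlds":

```python
from fractions import Fraction

# The "complete state" L is where the coin actually is; opening the right
# hand then reveals either the coin or an empty hand, deterministically given L.
worlds = [
    {"L": "left",  "right_hand": "empty", "p": Fraction(1, 2)},
    {"L": "right", "right_hand": "coin",  "p": Fraction(1, 2)},
]

def P(event, given=lambda w: True):
    """Conditional probability of `event` given the predicate `given`."""
    num = sum(w["p"] for w in worlds if given(w) and event(w))
    den = sum(w["p"] for w in worlds if given(w))
    return num / den

def coin_left(w):
    return w["L"] == "left"

# Incomplete description: the probability "jumps" when the distant hand is opened.
print(P(coin_left))                                               # 1/2
print(P(coin_left, given=lambda w: w["right_hand"] == "empty"))   # 1

# Complete description (conditioning on L): the distant reveal is redundant.
print(P(coin_left, given=lambda w: w["L"] == "left"))             # 1
print(P(coin_left, given=lambda w: w["L"] == "left" and w["right_hand"] == "empty"))  # 1
```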

As Patrick and I have been discussing, I think this generalizes in a perfectly straightforward way to non-deterministic theories, i.e., theories in which the various probabilities (conditioned on L) are not restricted to be just zero and one. (Bell thought so too by the way.)

There is a nice article in the current issue of AmJPhys (Feb.) on a thought experiment that is basically the quantum equivalent of the example I just gave with the coin. (The article and the thought experiment are called "Einstein's Boxes".) The idea is, imagine doing this same sort of experiment with a quantum particle (instead of a macroscopic coin), say by splitting a photon's wf in half with a half-silvered mirror, and letting the two halves separate for a while. Then, according to QM, there is a 50% chance of finding it on the left and 50% for the right, just like the coin. But unlike the coin case, and assuming you believe the quantum completeness doctrine, there is no actual position of the photon prior to measurement. It's in a superposition of left and right, neither here nor there. In particular, the wave function represents (by hypothesis) a complete description of the state of the photon. It is, in other words and again assuming the completeness hypothesis for the sake of argument, "L". So we have

P(left|L) = 50%

and

P(right|L) = 50%

But surely it is an experimental fact that the joint probability of finding the photon both on the left and on the right vanishes: P(left&right|L) = 0.

This illustrates that QM, if you believe Bohr that the wave function is a complete description, violates Bell Locality. For according to that condition, the joint probability should factor:

P(left&right|L) = P(left|L) * P(right|L)

which (given the values specified for the three quantities above) it doesn't. So this simple little example is (kind of amazingly, if you think about it) sufficient to show that, if complete, QM violates Bell locality. Of course, this is exactly what motivated people like Einstein to reject the completeness doctrine and try to find a local theory which, of course, means a local hidden variable theory. And since Bell (later) proved that was impossible too, this cute little example with the "quantum coin" actually plays a significant role in establishing that any empirically viable theory (which is to say: Nature!) violates Bell Locality. Not bad for such a trivial little example.
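The arithmetic here is trivial but worth writing out; the numbers below are taken straight from the paragraph above:

```python
# Taking L to be the quantum state itself (the completeness assumption):
p_left  = 0.5   # P(left  | L)
p_right = 0.5   # P(right | L)
p_both  = 0.0   # P(left & right | L): the photon is never found in both boxes

print(p_left * p_right)   # 0.25 -- what Bell Locality (factorization) would require
print(p_both)             # 0.0  -- what QM predicts and experiment confirms
# 0.25 != 0.0: the wave function, if complete, is not a Bell-local description.
```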

Hopefully that clarifies things a bit. Do check out the article on "Einstein's Boxes." It's quite interesting, if I do say so myself. :wink:
 
  • #34
ttn said:
this cute little example with the "quantum coin" actually plays a significant role in establishing that any empirically viable theory (which is to say: Nature!) violates Bell Locality.
You still agree that Everett-style interpretations may be an exception to this, even if you personally don't find them plausible, right? I think an advocate of such an interpretation would say your argument is faulty because you assume that after the experimenter makes a measurement there is a single fact about which side the photon was found on, when really there might be different facts observed by different copies of the experimenter.
 
  • #35
ttn said:
...

There is a nice article in the current issue of AmJPhys (Feb.) on a thought experiment that is basically the quantum equivalent of the example I just gave with the coin. (The article and the thought experiment are called "Einstein's Boxes".) The idea is, imagine doing this same sort of experiment with a quantum particle (instead of a macroscopic coin), say by splitting a photon's wf in half with a half-silvered mirror, and letting the two halves separate for a while. Then, according to QM, there is a 50% chance of finding it on the left and 50% for the right, just like the coin. But unlike the coin case, and assuming you believe the quantum completeness doctrine, there is no actual position of the photon prior to measurement. It's in a superposition of left and right, neither here nor there. In particular, the wave function represents (by hypothesis) a complete description of the state of the photon. It is, in other words and again assuming the completeness hypothesis for the sake of argument, "L". So we have

P(left|L) = 50%

and

P(right|L) = 50%

But surely it is an experimental fact that the joint probability of finding the photon both on the left and on the right vanishes: P(left&right|L) = 0.

This illustrates that QM, if you believe Bohr that the wave function is a complete description, violates Bell Locality. For according to that condition, the joint probability should factor:

P(left&right|L) = P(left|L) * P(right|L)

which (given the values specified for the three quantities above) it doesn't. So this simple little example is (kind of amazingly, if you think about it) sufficient to show that, if complete, QM violates Bell locality. Of course, this is exactly what motivated people like Einstein to reject the completeness doctrine and try to find a local theory which, of course, means a local hidden variable theory. And since Bell (later) proved that was impossible too, this cute little example with the "quantum coin" actually plays a significant role in establishing that any empirically viable theory (which is to say: Nature!) violates Bell Locality. Not bad for such a trivial little example.

Hopefully that clarifies things a bit. Do check out the article on "Einstein's Boxes." It's quite interesting, if I do say so myself. :wink:

Thanks for taking the time to discuss.

I have always thought that there are many elements of QM that come back to being equivalent to the EPR paradox; it's just easier to see it laid out when looking at entangled particles. Pretty much every variation of the HUP that you can see - the beamsplitting example you give, for example - has to be considered as being in opposition to local reality when you really dig down into it. An unentangled photon's polarization is not definitely real either unless it is measured.
 
  • #36
ttn said:
"a" was meant to include any relevant facts about the apparatus used to measure "A". And yes, "b" is to be thought of as outside the past lightcone of A. The idea is just that, as I said in another post, once you've conditionalized a probability on every event on which it might depend in a local way, adding more information isn't going to change the probabilities. Such information would be either irrelevant or redundant.
Yes, that's fine, and I like your coin analogy, but I don't see how you can say:
this cute little example with the "quantum coin" actually plays a significant role in establishing that any empirically viable theory (which is to say: Nature!) violates Bell Locality.
No, it takes something much worse than that to violate a "real" Bell inequality (i.e. one with no loopholes). Bell's inequality is not just a statistical trick dependent on a change in conditional probabilities when our information changes. I know some physicists think it is, but if you look at, say, Clauser and Horne's derivation in their 1974 paper (Physical Review D, 10, 526-35 (1974)) you can see that all it depends on are ratios of counts.

Cat
 
  • #37
JesseM said:
You still agree that Everett-style interpretations may be an exception to this, even if you personally don't find them plausible, right? I think an advocate of such an interpretation would say your argument is faulty because you assume that after the experimenter makes a measurement there is a single fact about which side the photon was found on, when really there might be different facts observed by different copies of the experimenter.


Sure.

At the risk of annoying Patrick, I'll just say that if the best objection to my claim involves "ah, but maybe different copies of the experimenters in parallel universes saw different results" ... well, I'll just shrug and take it as a compliment.
 
  • #38
Cat said:
No, it takes something much worse than that to violate a "real" Bell inequality (i.e. one with no loopholes).

I think you missed my point. No Bell inequality is violated by the coin example. You have to do difficult experiments to violate the Bell inequality. But that Bell inequalities are experimentally violated is only the second half of Bell's argument for non-locality. The first half of the argument is essentially the EPR argument (that QM, if complete, is nonlocal) and *that's* what the coins/boxes example establishes. Yes, it's meager, but it's a small part of an important argument that hasn't been widely grasped -- despite Bell's heroic efforts to make people understand.
 
  • #39
ttn said:
At the risk of annoying Patrick, I'll just say that if the best objection to my claim involves "ah, but maybe different copies of the experimenters in parallel universes saw different results" ... well, I'll just shrug and take it as a compliment.
I guess it depends on one's "aesthetic" intuitions about what the laws of nature should look like. To me, the idea that the laws of nature violate Lorentz-invariance, picking out a single preferred reference frame, yet somehow the laws of nature also conspire to make it impossible to ever detect any evidence of a preferred frame, seems like a much uglier solution to the problems raised by the EPR experiment than the idea of everything constantly splitting into multiple copies, or even the idea of backwards-in-time causation as in Cramer's "transactional interpretation". As an analogy, even though it's possible to come up with an ether theory that reproduces all the predictions of SR, there are strong aesthetic and conceptual arguments against such an interpretation of SR, summarized in this post by Tom Roberts:

http://groups-beta.google.com/group/sci.physics.relativity/msg/a6f110865893d962

Most of his arguments would apply equally well to nonlocal hidden-variables theories which involve a preferred reference frame for FTL effects, like Bohm's (and any new nonlocal hidden-variables theories must also have this feature if they want to forbid backwards-in-time causation).
 
  • #40
ttn said:
The Bell Locality condition is really simple. It merely says

P(A|a,b,L) = P(A|a,L)

where "A" is some particular event (say the result of a measurement), "a" is any relevant parameters pertaining to the event (like the orientation of your SG magnets if it's a spin measurement), "L" is a complete specification of the state of the measured object across some spacelike hypersurface in the past of the measurement event, and "b" is any other junk that is spacelike separated from the measurement event. Basically the idea is: once you conditionalize on everything that could possibly affect the outcome in a local manner, specifying in addition information pertaining to space-like separated events will be *redundant* and hence won't change the probabilities.

Ah, but HERE we agree ! And QM DOES satisfy this condition: the LOCAL PROBABILITIES at A are not influenced by the settings at b.
This is indeed, what I consider "sufficiently local" in a stochastic theory.

But you wrote something else for Bell locality:

You wrote that
P(A,B | a,b,L) = P(A|a,L) P(B|b,L)

That's a much more severe requirement (which is the true Bell locality).
My claim is that we are now expressing probabilities of A AND B, which pertain to a and b, so there's no a priori need to have them factorized, because to have any hope of observing A and B together, a and b have to be in the observer's past light cone. (that's the thing you win from an MWI viewpoint)
Of course you want to factorize this if you somehow "want the randomness to be generated locally". But that only has a meaning with a deterministic theory where "the particles carry something with them to determine that randomness". It follows naturally within the framework of an underlying mechanics that GENERATES you the probabilities. But I don't see why a stochastic theory has the "right" to tell you about "individual outcome probabilities" but not about higher order correlations. If you want to give a name to that property, namely Bell locality, then so be it, but again, I repeat that this name doesn't mean anything within the framework of a stochastic theory. Your FIRST condition, however, DOES :-)

cheers,
Patrick.

EDIT: the latter requirement of course leads to the first, but a less severe one also suffices:

The sum (integral) over B of P(A,B|a,b,L) shouldn't depend on b.
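To make the contrast between the two conditions concrete, here is a numerical sketch using the standard textbook singlet probabilities (my choice of example; the formula is not quoted in the thread): the marginal/"integral" condition holds for every setting, while the factorization does not.

```python
import math

def p_joint(A, B, theta_ab):
    """Textbook QM singlet prediction P(A, B | a, b, psi), with psi playing the
    role of L; A, B in {+1, -1}; theta_ab = angle between the two settings."""
    return 0.25 * (1 - A * B * math.cos(theta_ab))

theta = math.radians(30)

# The weaker ("integral") condition: summing the joint over the distant outcome
# B, the local probability at A does not depend on the distant setting b.
p_A_plus = sum(p_joint(+1, B, theta) for B in (+1, -1))
print(p_A_plus)                  # 0.5, whatever theta is: no signalling

# The stronger factorization (Bell Locality), with L = psi, fails:
print(p_joint(+1, +1, theta))    # ~0.033, the actual joint probability
print(p_A_plus * 0.5)            # 0.25 = P(A|a,L) * P(B|b,L): does not match
```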
 
  • #41
ttn said:
I'd go further. I'm a scientific realist. I believe there is an external, physical, objective world that exists independent of human knowledge of it. And I think the purpose of physics is to understand what the world is like. What you are here calling "telling stories" is really the process of building up an evidence-based model of reality -- just like Copernicus was "telling a story" when he said the Earth went around the sun, Maxwell and Boltzmann were "telling stories" when they predicted the distribution of molecular speeds in a gas, and just like, say, contemporary astrophysicists "tell stories" about how shockwaves propagating through infalling matter can result in supernovas.

I would like to point out that MWI-like settings ALSO give rise to an "external physical objective world" in a certain way: it is given by "the wavefunction of the universe" and its corresponding evolution (in a Schroedinger picture). Only, that "universe" is quite remote from what WE observe; nevertheless, it explains what WE observe. So, in that way, it isn't solipsist.

About stories: I think it is increasingly clear that the mental picture we have of a physical phenomenon is dependent on the formalism in which we work. For instance, if I work in Newtonian gravity, I think of "forces" pulling on some balls in a kind of Euclidean space. When you write computer code for such a calculation, that's what you keep in mind. However, when you switch to general relativity, suddenly the picture changes. Now we are thinking of geodesics on a frozen, wobbly 4-d spacetime. It is very hard to think that way, and even to take that very seriously, because "nothing moves" in that clay-shaped 4-d thing. There's a spot on that clay model where you are already dead, for instance. So here already, there is a BIG difference between what "nature is" (a 4-d motionless piece of clay) and what you "observe" (time flow). As I said before: the big shock of relativity was the shattering of our concept of time.

When you do optics experiments, often you imagine "beams" with a kind of little "phase counter" running over the lines, in order to find out the correct interference patterns. This is on one hand a semigeometrical approximation to EM, but it is also an application of the Feynman path integral. However, when things get harder, like in cavities, you switch to a complete classical field viewpoint, with E-arrows attached to each point in space.
When you do quantum optics, often you switch back and forth between the two mental pictures (the "wave - particle duality" :-). And sometimes you even have to put it all down, and go to an abstract Fock space representation.

You don't take all these stories simultaneously to be "a true description of reality", no? You will of course reply: no, there is ONE story (the most sophisticated one) that must be true, and all the rest are approximations. But hey, when we didn't know about that most sophisticated view, we *did* think that our approximations were the "true vision". There's nothing that makes us think that our grandchildren will not have an even more sophisticated view which invalidates our current one, or at least makes it just an approximation to reason in. So there's no point in trying to have "a true description of reality" if it fundamentally changes every century or so.

I don't accept the idea that, in these sorts of cases, the only point of these stories is to help people develop intuition for formalism, etc. If anything, it's just the reverse: the point of the formalism is to help us figure out which story is the correct one, i.e., what the world is like. Isn't that really what science is all about?

It was, a long time ago. But after a few paradigm shifts, I think it is an illusion to think about it that way. I repeat: you have no idea how deluded we all are :smile: I think that that is the ONLY true picture of "reality" that will remain with us forever. We only know one thing for sure: reality is way, way different from what we will ever think it is.
Our only hope (which seems to have been satisfied up to now) is that we will be able to map approximate mathematical models onto it, and tell stories that go with them. And even that is not evident, but seems to be right. Since you like Einstein: "the most incomprehensible thing about the universe is that it seems to be comprehensible", or something of the kind.


cheers,
Patrick.
 
  • #42
JesseM said:
I guess it depends on one's "aesthetic" intuitions about what the laws of nature should look like. To me, the idea that the laws of nature violate Lorentz-invariance, picking out a single preferred reference frame, yet somehow the laws of nature also conspire to make it impossible to ever detect any evidence of a preferred frame, seems like a much uglier solution to the problems raised by the EPR experiment than the idea of everything constantly splitting into multiple copies, or even the idea of backwards-in-time causation as in Cramer's "transactional interpretation".

Yes, I think we perfectly agree here. I prefer consistency over intuition. As you point out: it is not Lorentz invariance by itself that is "holy"; what I find very ugly is that it should hold in practice while being violated by the wheels and gears of the system.

cheers,
Patrick.
 
  • #43
AFAIK stochastic interpretations get around Bell's theorem as well (or rather reproduce it), so in a sense you get a philosophically pleasing determinism (though it can never be truly measured) as well as locality.

Incidentally, there are a lot of definitions of locality out there, depending on the nature of the information theory you are using. What we can't have under any circumstance is information propagating outside its own lightcone. So while locality and causality are logically linked, there are some subtleties between the two in the literature.
 
  • #44
Since no one has brought this up, I will.

I strongly suggest that everyone interested read the latest paper from the Zeilinger group appearing in PRL this week.[1] In it, they created a 2-photon pair which at first had a very weak degree of entanglement. They showed that this pair could not violate the CHSH inequality "... or any other inequality..". They referred to this pair as the "local" states. They then purified the pair to entangle it strongly and showed that it can now violate the CHSH-Bell inequality, "... proving that it cannot be described by a local realistic model..."

I think this is crucial because it compares what you'd get in two different cases, and shows that the relevant inequalities are only violated upon the "turning on" of the entanglement. It shows that just because you can create a pair of something (say, a pair of classical particles connected to each other via conservation of angular momentum), doesn't mean the pair will violate these inequalities if it isn't entangled quantum mechanically.

Zz.

[1] P. Walther et al., PRL v.94, p.040504 (2005).
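For the quantitative contrast the paper trades on, here is the generic CHSH arithmetic (a sketch using the textbook singlet correlation and canonical settings, not the specific states of Walther et al.): a maximally entangled pair reaches 2*sqrt(2), any Bell-local model is bounded by 2, and pairs with too little (or too noisy) entanglement, like the unpurified ones described above, can stay below that bound.

```python
import math

def E(theta_ab):
    """Textbook QM correlation for a maximally entangled (singlet) pair:
    E(a, b) = -cos(a - b)."""
    return -math.cos(theta_ab)

# Canonical CHSH settings (in degrees): a = 0, a' = 90, b = 45, b' = 135.
a, a2, b, b2 = map(math.radians, (0, 90, 45, 135))
S = abs(E(a - b) - E(a - b2) + E(a2 - b) + E(a2 - b2))
print(S)   # 2*sqrt(2) ~ 2.83, above the CHSH bound of 2 for any Bell-local model
```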
 
  • #45
Haelfix said:
AFAIK stochastic interpretations get around Bell's theorem as well (or rather reproduce it), so in a sense you get a philosophically pleasing determinism (though it can never be truly measured) as well as locality.
What do you mean by "stochastic interpretations"? And why are you talking about determinism when the word stochastic always implies randomness?
 
  • #46
ZapperZ said:
I strongly suggest that everyone interested read the latest paper from the Zeilinger group appearing in PRL this week ...
P. Walther et al., PRL v.94, p.040504 (2005).​
I had a look at this and was shocked to find that they said they'd violated the CHSH inequality but did not even mention the main "loophole" that bugs this -- the "fair sampling" one. Of course, if their detectors were perfect then it would have been irrelevant, but since it was an ordinary optical experiment this cannot have been so.

How can they justify quoting the results of this test without so much as a mention of the efficiencies involved? They say their results "cannot be described by a local realist model". If they have not closed the detection loophole they have not proved this!

Cat
 
  • #47
JesseM said:
What do you mean by "stochastic interpretations"? And why are you talking about determinism when the word stochastic always implies randomness?
See Clauser, J F and Horne, M A, “Experimental consequences of objective local theories”, Physical Review D, 10, 526-35 (1974).

They explain very carefully how in real Bell test experiments it's best not to try and define hidden variables that completely determine the outcomes even though they do exist. Some components of the "complete" hidden variables are tied up with the detailed behaviour of the detectors and can be treated as random. Clauser and Horne's version of the Bell inequality follows one of Bell's later ideas in defining HV's that merely determine the probabilities of detection, not the actual outcomes.

As I understand it, this kind of HV is constructed from those components of the full HV that are associated with the source and which play a logical role in relation to detection. It is what is meant by the term "stochastic hidden variable". In an attempt to avoid confusion, C and H call a stochastic HV theory an "objective local theory".

Cat
 
  • #48
Cat said:
They explain very carefully how in real Bell test experiments it's best not to try and define hidden variables that completely determine the outcomes even though they do exist. Some components of the "complete" hidden variables are tied up with the detailed behaviour of the detectors and can be treated as random.

There is no difference between a fully deterministic HV theory and a "HV theory whose parameters determine the local probability distributions", because this simply amounts to adding a few more hidden variables to the "full state description" (call it an extra list of random numbers) which turn the latter version into the former. For instance, these variables can determine the "microstate of the detector", which is unknown. So a HV theory is always deterministic in its approach and gets its stochastic character only from the incomplete knowledge we have about the HV. And then you always end up with classical statistical mechanics, from which, indeed, all Bell inequalities and so on follow upon assumption of locality.

cheers,
Patrick.
 
  • #49
Just for information, the kind of view I have been defending here, and which was, I thought, a kind of personal mixture of existing views, is very clearly expressed by the article by Anthony Sudbery in:
quant-ph/0011084

What was weird in reading this, for me, is that it almost sounded as if I were talking to myself... but then I'm tempted by solipsism :smile:

cheers,
patrick.
 
  • #50
vanesch said:
There is no difference between a fully deterministic HV theory and a "HV theory whose parameters determine the local probability distributions", because this simply amounts to adding a few more hidden variables to the "full state description" (call it an extra list of random numbers) which turn the latter version into the former. For instance, these variables can determine the "microstate of the detector", which is unknown. So a HV theory is always deterministic in its approach and gets its stochastic character only from the incomplete knowledge we have about the HV. And then you always end up with classical statistical mechanics, from which, indeed, all Bell inequalities and so on follow upon assumption of locality.

cheers,
Patrick.
Though true, if you want to derive a general Bell inequality, valid for imperfect detectors, it is necessary, I think, to do as Clauser and Horne (and Bell, in 1971) did and treat the important (type I) components of the HV in a logically different manner from unimportant (type II) ones. The "type I" ones are those such as polarisation direction and signal amplitude that are set at the source and are relevant when the particles reach the analysers. These really do play a logically different role in the experiments from the "type II" components concerned with, for instance, the microstate of the detector. The type I components are responsible for any correlation, while the type II ones are assumed to be independent on the two sides -- just random "noise".

Cat
 
  • #51
Cat said:
Though true, if you want to derive a general Bell inequality, valid for imperfect detectors, it is necessary, I think, to do as Clauser and Horne (and Bell, in 1971) did and treat the important (type I) components of the HV in a logically different manner from unimportant (type II) ones. The "type I" ones are those such as polarisation direction and signal amplitude that are set at the source and are relevant when the particles reach the analysers. These really do play a logically different role in the experiments from the "type II" components concerned with, for instance, the microstate of the detector. The type I components are responsible for any correlation, while the type II ones are assumed to be independent on the two sides -- just random "noise".
Cat

This is true, but amounts to postulating (again) a deterministic theory. My claim is that the relationship between what is called "Bell locality" (a factorisation condition joint probabilities have to satisfy, and from which one can deduce the Bell inequalities) and any kind of "physical locality of interaction" only makes sense in the framework of an essentially deterministic HV theory. It is _that_ deterministic mechanism (hidden or not) which, if required to give rise to the probabilities and to be based on local interactions, gives rise to Bell locality.
But "Bell locality" doesn't make any sense for fundamentally stochastic theories, because there is no supposed hidden mechanism of interaction which is to be local or not. A fundamentally stochastic theory just tells you what are the probabilities for "single events" and for "joint events" (correlations) WITHOUT being generated by an underlying deterministic mechanism.
The only locality condition we can then require is that probabilities of observations can only depend on what is in the past lightcone of those observations, and this then gives:

P(A|a,b,L) can only be a function of a, because only the setting a is in the past lightcone of event A.
P(B|a,b,L) can only be a function of b, because only the setting b is in the past lightcone of event B.

But:
P(A,B|a,b,L) can be a function of both a and b, because this correlation can only be established when we get news from A AND from B, and at that moment a and b are both in the past light cone of whoever compares the two results. Or otherwise formulated: a and b are in the past lightcones of the events A and B.

The first two conditions impose an INTEGRAL condition on the third expression, but do not require that P(A,B) factorizes. That factorization only comes about when P(A,B) is _constructed_ from an underlying deterministic model.

The objection seemed to be: hey, but I can think of hidden variable theories which are _stochastic_. And I tried to point out that that's tricking the audience, because it can trivially be transformed into a deterministic hidden variable theory. BTW, I don't understand what the purpose could be of constructing a truly stochastic hidden variable theory to explain a stochastic "no hidden variable" theory (such as QM).

cheers,
Patrick.
 
  • #52
vanesch said:
This is true, but amounts to postulating (again) a deterministic theory. My claim is that the relationship between what is called "Bell locality" (a factorisation condition joint probabilities have to satisfy, and from which one can deduce the Bell inequalities) and any kind of "physical locality of interaction" only makes sense in the framework of an essentially deterministic HV theory. It is _that_ deterministic mechanism (hidden or not) which, if required to give rise to the probabilities and to be based on local interactions, gives rise to Bell locality.
But "Bell locality" doesn't make any sense for fundamentally stochastic theories, because there is no supposed hidden mechanism of interaction which is to be local or not. A fundamentally stochastic theory just tells you what are the probabilities for "single events" and for "joint events" (correlations) WITHOUT being generated by an underlying deterministic mechanism.

I'm sorry, but this really is just playing semantic word games to make the answer appear to come out the way you want. First you define "nonlocality" in terms of "underlying deterministic mechanisms", then you shrug and say: since QM has no such mechanisms, it isn't nonlocal.

The beauty of Bell's locality condition is that it doesn't require any of this loose talk about "underlying mechanisms" and "communication of information" and all these other things that lead to endless debates. And despite what you say above, Bell Locality *does* apply perfectly well to stochastic theories. The condition is, after all, stated exclusively in terms of probabilities, so the applicability is really rather obvious.



The only locality condition we can then require is that probabilities of observations can only depend on what is in the past lightcone of those observations, and this then gives:

P(A|a,b,L) can only be a function of a, because only the setting a is in the past lightcone of event A.
P(B|a,b,L) can only be a function of b, because only the setting b is in the past lightcone of event B.

But:
P(A,B|a,b,L) can be a function of both a and b, because this correlation can only be established when we get news from A AND from B, and at that moment a and b are both in the past light cone of whoever compares the two results. Or otherwise formulated: a and b are in the past lightcones of the events A and B.

Here you are simply forgetting an important rule of probability calculus: the product rule for conditional probabilities (sometimes loosely referred to as "Bayes' theorem"). It says:

P(A,B) = P(A|B) * P(B)

that is, you can *always* write a joint probability as a product so long as you conditionalize one of the probabilities on the other event.

If we are interested in something of the form P(A,B|a,b,L), we may write this as

P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L)

But then Bell Locality enters and says:

P(A|B,a,b,L) = P(A|a,L)

and

P(B|a,b,L) = P(B|b,L)

on the grounds of locality: neither event (A or B) may depend stochastically on occurrences outside of their past light cones. Specifically, the probability distribution of events A cannot be affected by conditionalizing on space-like separated events B and b, since we have already conditionalized on a complete description of the world in the past light cone of A, namely L. And likewise for B. There is no determinism built in here, no requirement that the probabilities P(A|a,L), etc., be zero or unity.

Bottom line: Bell Locality *does* completely justify the factorization condition that is (a) required to demonstrate the Bell Theorem and (b) violated by orthodox QM when we identify L with the QM wave function (as surely Bohr invites us to do).



The first two conditions impose an INTEGRAL condition on the third expression, but do not require that P(A,B) factorizes. That factorization only comes about when P(A,B) is _constructed_ from an underlying deterministic model.

No, this is just wrong. I got from the joint probability to the factored, Bell Local expression by using the product rule for conditional probabilities and Bell Locality, and that's it. No mention of determinism.
 
  • #53
ttn said:
P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L)

But then Bell Locality enters and says:

P(A|B,a,b,L) = P(A|a,L)

No ! Because B enters in the condition on the left hand side, this may depend upon b. There is no way to talk about "upon condition B" without having information about B. So the conditional probability on the left hand side talks about A and B, and so can depend on a and b. Now, you can *require* that the conditional probability P(A|B) = P(A), in which case you call A and B statistically independent events. But that's a property that you can call "zork" or "Bell beauty" or "Bell locality" or "Bell desire". It isn't required for a stochastic theory that only claims that probabilities of events only depend on conditions in their past lightcones ; THIS is what is required by locality as specified by relativity. From the moment you mention A AND B in a probability (whether joint or conditional), they may depend on everything about A and everything about B.

So, again: QM probabilities do not satisfy "zork"
QM probabilities do satisfy locality as specified by relativity.

However, what I'm trying to make clear as a point, is that IF YOU WANT THOSE PROBABILITIES TO BE GENERATED FROM A DETERMINISTIC THEORY which has hidden variables (that will give you the "stochastic appearance" because of their hidden character) and YOU REQUIRE THAT ALL INTERACTIONS ARE LOCAL including those concerning the change, transfer etc... of the hidden variables, THEN YOU OBTAIN A CONDITION WHICH IS ZORK (also called Bell locality).

And from the zork condition follows the Bell inequality.

You cannot PROVE to me the necessity of Bell Locality (which I call zork) without going to a deterministic model (or a pseudo-deterministic model, which can be transformed into a deterministic one by adding variables).
Try to prove to me somehow (not DEFINE) that factorization is necessary for locality without using an underlying deterministic model!

However, I can PROVE to you the requirement of locality specified by relativity on the basis of information theory. Now, since the concept of locality plays an eminent role only because of relativity, my point is that that is the only sensible requirement for locality given a stochastic theory. We only switch to a more severe one (zork) because we want "extra stuff", such as an underlying deterministic mechanics.

cheers,
Patrick.
 
  • #54
ttn said:
Specifically, the probability distribution of events A cannot be affected by conditionalizing on space-like separated events B and b, since we have already conditionalized on a complete description of the world in the past light cone of A, namely L.

It is exactly in this phrase that the deterministic character of an underlying mechanism is caught! (the "complete description" part)

Why? Because you seem to claim that whatever happens to B, and whatever choice I make for b, it cannot be "signalled" to A (by the underlying mechanism). But careful: the choice of b will of course affect the result B. So you shouldn't be surprised that P(A|B) can a priori depend on b, as long as it is done in such a way that P(A) doesn't depend on b. (that's the integral condition)

So my claim is: P(A|B) does not need to be equal to P(A). I wish you could prove its necessity to me. (it is, as you point out, equivalent to factorizing P(A,B) = P(A) P(B))

But of course if you want to invent a machinery that generates these probabilities, you will have a hard time sending a hidden variable messenger from B to A, and THEN of course, you can claim that any machinery that will determine things at B, as a function of b, can never send a message to A in order to do anything there.

cheers,
Patrick.
 
  • #55
vanesch said:
No ! Because B enters in the condition on the left hand side, this may depend upon b. There is no way to talk about "upon condition B" without having information about B. So the conditional probability on the left hand side talks about A and B, and so can depend on a and b. Now, you can *require* that the conditional probability P(A|B) = P(A), in which case you call A and B statistically independent events. But that's a property that you can call "zork" or "Bell beauty" or "Bell locality" or "Bell desire". It isn't required for a stochastic theory that only claims that probabilities of events only depend on conditions in their past lightcones ; THIS is what is required by locality as specified by relativity. From the moment you mention A AND B in a probability (whether joint or conditional), they may depend on everything about A and everything about B.

So... let me see if I get your position. You are willing to allow that

P(B|a,b,L) = P(B|b,L)

as a perfectly reasonable requirement of locality. But you are unwilling to allow that

P(B|A,b,L) = P(B|b,L)

is a reasonable requirement.

Do I have that straight? You think: Locality forbids the outcome B from depending on the setting (a) of the distant apparatus, but does not forbid B from depending on the *outcome* of that distant measurement (A). Is that it?


Try to prove to me somehow (not DEFINE) that factorization is necessary for locality without using an underlying deterministic model!

I'm not sure what kind of thing you would take as a proof. I think Bell Locality is an extremely natural way of expressing the requirement of local causality. Bell thought so too. But there is no way to "prove" this. One has to simply accept it as a way of defining what it means for a theory to be local; then people can choose to accept or reject that definition. What bothers me is when people accept it in regard to hv theories, but reject it in regard to QM. That's just inconsistent.

However, I can PROVE to you the requirement of locality specified by relativity on the basis of information theory.

Not really, although surely the statement "humans should never be able to communicate, i.e., transmit information, faster than light" is another somewhat reasonable definition of locality. The problem is, if you are going to define locality that way in order to prove that QM is local, Bohm's theory turns out to be local, too -- despite the fact that, in some *other* senses of "locality", Bohm's theory is rather blatantly *nonlocal*.

Again, I only really care here about consistency. If you're going to define locality in terms of "information", then you shouldn't say that Bohm's theory is nonlocal. And if you're going to define locality as Bell did, then you shouldn't say that orthodox QM is local.
 
  • #56
vanesch said:
It is exactly in this phrase that the deterministic character of an underlying mechanism is caught! (the "complete description" part)

Why? Because you seem to claim that whatever happens to B, and whatever choice I make for b, it cannot be "signalled" to A (by the underlying mechanism). But careful: the choice of b will of course affect the result B. So you shouldn't be surprised that P(A|B) can a priori depend on b, as long as it is done in such a way that P(A) doesn't depend on b. (that's the integral condition)

I agree with this much: P(B) depends on b. But that's precisely why I find it so silly to argue that locality requires

P(A|B,a,b,L) = P(A|B,a,L)

but not

P(A|B,a,b,L) = P(A|a,L)

If the point is that, in a local theory, P(A|a,L) should not change when you specify the distant setting b, then shouldn't it also not change if you specify the distant outcome B? If you allow the latter sort of dependence, you are in effect smuggling in the previously-eliminated dependence on "b" for just the sort of reason you elaborate above.



So my claim is: P(A|B) does not need to be equal to P(A). I wish you could prove its necessity to me. (it is, as you point out, equivalent to factorizing P(A,B) = P(A) P(B))

Well, I certainly can't prove that P(A|B) = P(A). That would be a preposterous requirement. It would basically just assert that there is no correlation between A and B. But locality doesn't forbid correlations. It merely forbids correlations which cannot be in some way accounted for by information in the past of the two events in question. That is, the condition only makes sense if you conditionalize all probabilities involved on some complete specification of the state of the system at some prior time(slice), and if you add in possible local dependencies on things like apparatus settings:

P(A,B|a,b,L) = P(A|B,a,b,L) * P(B|a,b,L) = P(A|a,L) * P(B|b,L)

where the first equality is pure unobjectionable math, and the second involves application of Bell Locality.

I'm sure you will now say "Aha!" and assert that by "accounted for" above I really mean "deterministically accounted for". But I just don't. I am perfectly happy to allow non-deterministic laws. In fact, that's one of the nice things about this probability-based notation for expressing Bell locality. Some of Bell's papers use a notation like A(a,L) where now A is the (evidently one and only) outcome consistent with setting "a" and prior-joint-state L. That notation does imply determinism, and hence a statement of Bell Locality couched in that language would *not* be able to be applied to orthodox QM (which is stochastic). But a fairly straightforward change of notation gives the statement of Bell Locality we've been discussing, the one that is couched explicitly in stochastic terms and which therefore is entirely applicable to stochastic theories like orthodox QM.
 
  • #57
We're getting closer :smile:

ttn said:
Not really, although surely the statement "humans should never be able to communicate, i.e., transmit information, faster than light" is another somewhat reasonable definition of locality.

Well, in order to satisfy relativity, replace "humans" by "anything that can send out information" (because that's the relativity paradox you want to avoid: that you receive your own information before sending it ; on which you could base a decision to send out OTHER information, hence the paradox)

The problem is, if you are going to define locality that way in order to prove that QM is local, Bohm's theory turns out to be local, too -- despite the fact that, in some *other* senses of "locality", Bohm's theory is rather blatantly *nonlocal*.

The problem with locality is that the definition is different according to whether you work with a stochastic theory or with a deterministic theory.
In a purely stochastic theory, the only definition we can have concerning locality is of course based upon information theory.
In that sense, QM and of course Bohm's theory, considered as a stochastic theory (which gives the same stochastic predictions), are local.

Next, we can talk about the locality of mechanisms, whether or not they lead to a deterministic or stochastic theory ; and in the latter case, independent of whether the stochastic theory is local in the information theory sense.

For instance, the "collapse of the wavefunction" in QM is blatantly non-local, because it affects the internal description at B when doing something at A.
However, the MWI approach gives us a local mechanism, in a very subtle way: you can only talk about a correlation when the events at A and B are in your past light cone, and you deny the individual existence of events at A and B until the moment when you can observe the correlations. At most you can observe one of the two.

The *hidden* variables are also subject to a non-local mechanism in Bohm's theory.

Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the internal machinery needed to violate the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view, are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism.

Of course a stochastic theory without any underlying mechanism cannot be analysed for the locality of its mechanism!

Where, then, is the room for Bell Locality? It turns out that any stochastic theory generated by an underlying deterministic theory whose mechanism is local satisfies Bell locality.

Amen.

cheers,
Patrick.
 
  • #58
vanesch said:
Theories which have a non-local mechanism but give rise to a stochastic theory which IS local (in the relativistic sense) are said to "conspire": they have all the internal machinery needed to violate the locality requirement of relativity, but they simply don't take advantage of it. Bohm's theory, and QM in the Copenhagen view, are in that case (that's why I don't like them).
MWI QM doesn't have such a non-local mechanism.

Two questions on this. First, do you have any links to discussions where this technical term "conspire" is introduced? And second, does this resolution of nonlocality exist in the weaker relative-state interpretation of MWI, or do you require the literal multiple worlds?
 
  • #59
selfAdjoint said:
Two questions on this. First, do you have any links to discussions where this technical term "conspire" is introduced? And second, does this resolution of nonlocality exist in the weaker relative-state interpretation of MWI, or do you require the literal multiple worlds?

I have to say that I use the term "conspire" as I intuitively thought it was typically used, namely that a strict principle should be obeyed, but that the underlying mechanism (whatever it is) doesn't obey it, but in such a way that it doesn't show. I don't know if there is a rigorous definition for the term.
An example that comes to mind is the "naive" QFT mass-energy of the 1/2 hbar omega terms (which is HUGE) and a corresponding cosmological constant which happens to compensate it exactly (or almost so). So either there is a principle that says that the effective cosmological constant must be small, or there is a "conspiracy" by which these two unconstrained contributions cancel.

Concerning your second point, I guess one can debate it, depending on exactly what one defines as a "local mechanism". If it is sufficient to say that the correlations only make sense to an observer when the corresponding events are already in that observer's past light cone, as is the case in _any_ MWI-like scheme, then I would think that that is sufficient to call the mechanism "local". If, however, you require a totally local state description, then there is a problem with the Schroedinger picture, where there is one holistic wavefunction of the universe. However, Rubin has written a few articles showing that - if I understood it correctly - you can get rid of that problem in the Heisenberg picture. The price to pay is that you carry with you a lot of indices which indicate your whole "entanglement history". But you carry them with you at sub-light speed.

I may have used words and definitions in my arguments here which are not 100% correct. The whole thing is of course debatable, but the intuition - to me - is clear: local means there's no obvious way to use the mechanism to make an FTL phone. Non-local means: highly suggestive of how to make an FTL phone.
The projection postulate collapses wave functions at a distance. You would think immediately that somehow you can exploit that! It is only after doing some calculations that you find out that you can't.
We know that the stochastic predictions of QM do not allow you to make an FTL phone. That's good enough for me to call it a "local" theory. But the underlying wheels and gears may or may not suggest that FTL phones are possible (even if we know, at the end of the day, that they aren't). When they do, I call the mechanism "non-local".

cheers,
Patrick.
 
  • #60
Cat said:
I had a look at this and was shocked to find that they said they'd violated the CHSH inequality but did not even mention the main "loophole" that bugs this -- the "fair sampling" one. Of course, if their detectors were perfect then it would have been irrelevant, but since it was an ordinary optical experiment this cannot have been so.

How can they justify quoting the results of this test without so much as a mention of the efficiencies involved? They say their results "cannot be described by a local realist model". If they have not closed the detection loophole they have not proved this!

Cat

Per the Rowe et al citation previously given, a lot of folks think this "loophole" is closed. Once the rest of us close a loophole, it is not necessary to repeat something that no longer applies. They also don't mention what day of the week the test was performed on because that doesn't matter either.

On the other hand, there are "certain" local realists out there who deny nearly every aspect (pun intended) of Bell tests. Some of us have come to the conclusion that a debate on the matter is a waste of time and effort.

BTW, I don't have a subscription to PRL. I could not find a link anywhere to the Walther article. Anyone find such?
 
