How measurement in the double slit experiment affects result

  • #51
I would also point out that having a "classical analog" is not making the claim that everything can be explained without quantum mechanics. It just means that it is untrue to claim some phenomenon had no classical analog, i.e., that there is no classical version of a similar phenomenon. A good example is quantum chaos, where the wavefunction takes on a different character in systems that exhibit classical chaos. We don't say the existence of classical chaos means you don't need a wavefunction to describe that system. Still, having said that, I have a slightly different classical analog in mind, and I'm not sure about the Bell ramifications. Questions about wave vs. particle behavior are a bit different from Bell violations.
 
  • #52
atyy said:
I think it should be made clear (if I understand both correctly) that vanhees71 and billschneider are not saying the same thing.

billschneider, citing Jaynes, claims the fundamental logic of the Bell inequalities is wrong, and that local realism or Einstein Causality in the usual sense is consistent with quantum mechanics.

vanhees71 is only claiming a weakened form of Einstein Causality, in which RCC is preserved but FP is given up so that correlations do not have a common cause explanation.
I think you understood me right, but I don't see where Einstein Causality is weakened. In my understanding of the statistical interpretation, no faster-than-light signal is necessary to explain the outcome of the delayed-choice experiments in accordance with quantum mechanics, and I still hold to my opinion that the common-cause explanation for this phenomenon is the entanglement of the photons for the eraser. Also in the delayed-choice experiment by Peng et al. there is no superluminal (or even retrocausal) action at a distance. The two-photon correlations there are also due to the preparation procedure encoded in the description given in the paper.

I'd still be glad about comments on whether my understanding of the Peng paper is correct (at least on the physics level; of course there seem to be subtle or even big differences among us concerning the metaphysical implications, which, however, should be left out of the discussion until the physics is fully clarified).
 
  • #53
vanhees71 said:
I think you understood me right, but I don't see where Einstein Causality is weakened. In my understanding of the statistical interpretation, no faster-than-light signal is necessary to explain the outcome of the delayed-choice experiments in accordance with quantum mechanics, and I still hold to my opinion that the common-cause explanation for this phenomenon is the entanglement of the photons for the eraser. Also in the delayed-choice experiment by Peng et al. there is no superluminal (or even retrocausal) action at a distance. The two-photon correlations there are also due to the preparation procedure encoded in the description given in the paper.

Because of the EPR paper, Einstein Causality usually means RCS + FP, so if one rejects FP, Einstein Causality is weakened (by definition).

Also, throughout classical physics, FP is the definition of common cause. For example, in biology the framework of Bayesian networks is quite common, and FP is the definition of common cause in Bayesian networks. So if FP is rejected, there is no notion of common cause, unless one uses a new definition, which would be ok. But the question is what new definition of common cause are you using? Actually, the Cavalcanti and Lal paper http://arxiv.org/abs/1311.6852 is interesting because it discusses a proposal by Leifer and Spekkens http://arxiv.org/abs/1107.5849 to define a new notion of common cause, so that there is a modified Einstein Causality in which one keeps RCS and a more general notion of common cause.

Edit: In my previous posts I used "RCC", which is a transcription error of mine from Cavalcanti and Lal's terminology, so here I use "RCS" instead for "relativistic causal structure".
 
  • #54
I see, but I don't think that EPR is consistent with the modern findings concerning the predictions of quantum theory in connection with entanglement, which is precisely the topic of the EPR paper. So you cannot keep both assumptions, which you call RCS and FP. I don't see why one should have to keep FP for any reason. For me it contradicts the fundamental principles of quantum theory, which were tested very thoroughly with all these experiments. As bhobba likes to quote, quantum theory just introduces a way to consistently describe the probabilistic behavior of nature, which is to our best knowledge today an observed fact. This doesn't exclude the possibility that one finds another, more comprehensive deterministic theory, which then most probably will be even weirder than quantum theory itself from a metaphysical point of view. But physics (particularly quantum theory) teaches us that nature behaves as she behaves and doesn't care too much about our metaphysical convenience ;-)). I'll have a look at the papers later.
 
  • #55
vanhees71 said:
I see, but I don't think that EPR is consistent with the modern findings concerning the predictions of quantum theory in connection with entanglement, which is precisely the topic of the EPR paper. So you cannot keep both assumptions, which you call RCS and FP.

Yes, I mentioned EPR only because "Einstein Causality" will usually be understood to mean what EPR advocated, which was RCS + FP, which cannot both be kept.

vanhees71 said:
I don't see why one should have to keep FP for any reason. For me it contradicts the fundamental principles of quantum theory, which were tested very thoroughly with all these experiments.

Yes, it's fine to remove FP. But since FP is the old definition of common cause, one needs a new definition if one wants to say that the entangled state is a common cause.
 
  • #56
Hm, I don't know how to formalize my statement. Just in plain words: I think the quantum mechanical states, represented by the statistical operator in the formalism (which is by definition a trace-class positive semidefinite self-adjoint operator with trace 1; a state is pure if and only if it's a projection operator), describe the known properties of the system under consideration. The state is objectively related to a single system through an equivalence class of preparation procedures, but even when we have full knowledge about the quantum system, i.e., if a complete set of compatible observables is determined through the preparation procedure and thus the state is a pure state, we have only probabilistic knowledge about the outcomes of measurements of the observables possible for this system. To test the corresponding predictions about such measurements one thus always needs a sufficiently large ensemble of independently prepared systems.
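The defining properties of the statistical operator listed above can be checked numerically. Here is a minimal sketch of my own (using NumPy; the specific states are illustrative, not from the post):

```python
import numpy as np

# A pure qubit state |psi> = (|0> + |1>)/sqrt(2), written as a statistical operator.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())

# Defining properties of a statistical operator:
assert np.isclose(np.trace(rho).real, 1.0)        # trace 1
assert np.allclose(rho, rho.conj().T)             # self-adjoint
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semidefinite

# Purity criterion: the state is pure if and only if rho is a projection, rho^2 = rho.
assert np.allclose(rho @ rho, rho)

# A proper mixture (equal weights of |0> and |1>) fails the projection test.
rho_mixed = 0.5 * np.eye(2)
assert not np.allclose(rho_mixed @ rho_mixed, rho_mixed)
```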

Among the many predictions of quantum theory are also the non-local correlations known as "entanglement", which Bell's and related theorems show to contradict local deterministic (hidden-variable) theories. What is now the "cause" of these "non-classically strong" correlations? In the above picture of states, it's the preparation procedure in such an "entangled state", and in all cases known to me (including the gedanken experiment described in the original EPR paper) these preparations are due to local manipulations on particles or photons, which then evolve for some "long" time such that the subsystems can be measured in far-distant (again local) experiments. The correlations due to entanglement are there the whole time, from the very first preparation procedure.

The measurements on the photons by Alice and Bob in the eraser experiment, or the coincident photon-pair countings in the experiment by Peng et al. discussed above, are local and can thus, due to the locality of the interaction of each photon with its detector, be done in space-like separated space-time regions; hence (due to the microcausality of QED) they do not causally affect the far-distant photons measured in coincidence. I'm not aware of any test of these issues where this explanation with local microcausal relativistic QFT fails, and thus I think there's no problem with causality. As demonstrated in my summaries of the two experiments above, there's no need for a collapse interpretation, and thus no need to assume a faster-than-light manipulation of a far-distant object or even a "retrocausal action at a distance". One must clearly distinguish between interactions and correlations. The former are always local by construction of the standard relativistic QFTs; the latter can refer to far-distant objects and are thus non-local.
The locality of the interactions, i.e., microcausality, guarantees the consistency of the S-matrix with relativistic covariance, unitarity, and causality.
 
  • #57
vanhees71 said:
What's now the "cause" for these "non-classically strong" correlations? In the above picture about states, it's the preparation procedure in such an "entangled state", and in all cases, known to me, (including the gedanken experiment described in the original EPR paper) these preparations are due to local manipulations on particles or photons, which then evolve for some "long" time such that subsystems can be measured at far distant (again local) experiments.

vanhees71 said:
The locality of the interactions, i.e., microcausality, guarantees the consistency of the S-matrix with relativistic covariance, unitarity, and causality.

I read the rest of the post too, but I think the key issues are:
(A) Do cluster decomposition and microcausality have anything to say about the entangled state as the "cause" of the nonlocal correlations?
(B) If the entangled state is a "cause" is it a local cause?

Let me try to address (B) first. I need to think more about (A).

Let's work in a frame in which Alice and Bob measure simultaneously, so the observable measured is the tensor product of two local spacelike-separated observables. Let's use the Heisenberg picture so the initial entangled state does not evolve, but the operators do. In the Heisenberg picture, the field operator has an equation of motion that has the same form as the classical relativistic equation, except that it is an operator, so there is a good argument that the field dynamics obey local causality or Einstein causality. That leaves the initial state. I'm not sure it's the only cause, but even if it is, is the initial state a local cause?
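The setup described here, a tensor product of two local observables acting on a fixed initial entangled state, can be made concrete numerically. This is a sketch of my own, assuming the standard spin-singlet state as the entangled preparation (not code from the thread):

```python
import numpy as np

# Pauli matrices and the spin-singlet state (|01> - |10>)/sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_obs(angle):
    """Spin observable along a direction tilted by `angle` from z in the x-z plane."""
    return np.cos(angle) * sz + np.sin(angle) * sx

def correlation(a, b):
    """E(a,b) = <singlet| A(a) (x) B(b) |singlet> for simultaneous local measurements."""
    AB = np.kron(spin_obs(a), spin_obs(b))
    return float(np.real(singlet.conj() @ AB @ singlet))

# The tensor-product observable reproduces the singlet prediction E(a,b) = -cos(a-b),
# with the unevolved initial state and all dynamics pushed into the operators.
assert np.isclose(correlation(0.0, 0.0), -1.0)
assert np.isclose(correlation(0.0, np.pi / 2), 0.0)
assert np.isclose(correlation(0.0, np.pi / 3), -np.cos(np.pi / 3))
```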

(1) First the initial state is in Hilbert space, so it is not obviously local or associated with any point in spacetime. To avoid this we can try to
(2) Associate the initial state with the location of the preparation procedure. But if we do this, the state does not evolve, so when the measurement is made, if the preparation and measurement are spacelike separated, then the measurement outcome will depend nonlocally on the state at a spacelike location. To avoid this we can try to
(3) Associate the initial state with the entire initial spacelike slice, or put a copy of the state at every location on the initial spacelike slice. But if we do this, the preparation procedure itself is nonlocal, since the local preparation procedure is affecting the entire spacelike slice.

The basic reason I don't think the state can be a local cause is that for the state to be a cause, we have to treat it as real (FAPP). But if we treat it as real, then in a frame in which the measurements are not simultaneous, the state for the later measurement will be caused by the earlier measurement and collapse which is manifestly nonlocal. Going to a frame in which the measurements are simultaneous hides the nonlocality, but cannot make it go away.
 
  • #58
Regarding (A), here is a discussion that says cluster decomposition does not apply for entangled initial states: https://www.physicsforums.com/threads/cluster-decomposition-and-epr-correlations.409861/#post-2773606 (especially posts #4 by humanino and #7 by Demystifier).

As for microcausality, what it says is that the measurement of one local observable cannot affect the measurement of a different spacelike-separated local observable. This is a requirement for no superluminal signalling of classical information. This is about the causes of local events. It does not say anything about the cause of nonlocal correlations.
 
  • #59
I have the impression we just have a different language again. So I think we have to clarify first what we mean by "cause". Two events are causally connected if one event necessarily leads to the other event. Now the fundamental time arrow is defined via causality, i.e., it parametrizes the order of causal events, and by convention an event A can only be the cause of an event B when A happens before B.

In relativistic physics this implies that two causally connected events must be time-like or light-like relative to each other, because otherwise the statement that A causes B would become observer dependent, i.e. the causality relation, defined by time order, would not be invariant under proper orthochronous Lorentz transformations (the proper orthochronous Lorentz group is the symmetry group underlying the structure of special relativistic spacetime).

This implies that the natural laws describing the dynamics of systems through equations of motion should also be causal, in the sense that changes of one subsystem due to the interaction with a distant subsystem respect this time ordering. So far, on a fundamental level, this is realized by the field picture. In classical physics this means that the interactions in the equations of motion of particles are described as mediated through fields, which obey equations of motion formulated in terms of causal laws; this excludes (at least according to the experience from model building so far) tachyonic field equations (free tachyonic fields can be interpreted in a way fulfilling the causality constraint for their propagation, but this has not been achieved for interacting tachyonic fields or tachyonic fields interacting with particles). The forces in the equations of motion of the particles are local functions of the particle's space-time coordinates and their time derivatives, and the field equations of motion have source terms due to the presence of particles, which admit causal (retarded) solutions. On the classical level, however, there is a tremendous self-consistency problem: even the simplest case, the electrodynamics of point particles, is intrinsically inconsistent, or at least only solvable in an approximate sense (see F. Rohrlich, Classical Charged Particles).

In quantum theory these ideas are transferred to local microcausal quantum field theories, from which the causality of observable quantities can be deduced (microcausality is a sufficient condition for this; whether it's also necessary I'm not sure, since so far I haven't seen a formal proof, but FAPP we can restrict our discussion to such local microcausal QFTs because all practically relevant theories, including the Standard Model of elementary particle physics, are of this kind).

This also implies the linked-cluster principle, which states that local observations (local here means local in both space and time, of course; and so far all our observations are quite local, made with measurement apparatuses in the lab, which always have finite spatial extent) are uncorrelated if the observations are space-like separated. There is no restriction on the states in which the observed (sub)systems are prepared. Even if there is entanglement, as in our quantum-eraser example, there is no way to find out about this entanglement with local observations alone.

Alice and Bob, making local polarization measurements on photons from a parametric-down-conversion "preparation", simply find unpolarized photons. There's no way for Alice (Bob) to tell that her (his) photon is part of an entangled pair. To figure this out they have to compare their measurement protocols, which they have made such that they can know which photons belong to one pair. That's done by properly keeping track of the times of their measurements, i.e., the ensemble of entangled photon pairs must be separated in time well enough that, within their time-measurement accuracy (limited by the dead time of their photon detectors), they can clearly resolve which photon pair comes from the common preparation as an entangled pair by parametric down-conversion. Then they find the correlations encoded in the formalism by the description as a two-photon state for which the polarization states of the single photons are entangled. Thus the "cause" of the correlation lies in the preparation of the two-photon state in this entangled state, i.e., in the preparation procedure at the very beginning, i.e., causally (!) before either Bob or Alice measures the polarization state of their photon.
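The claim that Alice alone sees only unpolarized photons can be illustrated with a reduced-density-matrix computation. This is a sketch of my own, assuming a singlet-type polarization state for the pair (not from the post):

```python
import numpy as np

# Polarization-entangled pair, e.g. (|HV> - |VH>)/sqrt(2), as a statistical operator.
pair = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_pair = np.outer(pair, pair.conj())

def reduced_state_alice(rho):
    """Partial trace over Bob's photon: everything Alice can access locally."""
    r = rho.reshape(2, 2, 2, 2)           # indices (A, B, A', B')
    return np.trace(r, axis1=1, axis2=3)  # contract Bob's indices

rho_A = reduced_state_alice(rho_pair)

# Alice's photon alone is completely unpolarized (the maximally mixed state),
# so no local measurement of hers can reveal the entanglement.
assert np.allclose(rho_A, 0.5 * np.eye(2))
```

The correlations only appear when the two measurement protocols are compared in coincidence, as described above.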

Now to the Heisenberg picture. Of course, it's clear that the choice of the picture of time evolution is totally unimportant; the outcome is always the same in each picture. In the Heisenberg picture the state (the statistical operator, for pure or mixed states) is time-independent, but then the operators describing observables, and thus their corresponding (generalized) eigenstates, carry the full time evolution with the full Hamiltonian (including the interaction part). For the Schrödinger picture it's the opposite: the statistical operator carries the full time evolution, and the observables are described by time-independent operators. For the general Dirac picture you split the time dependence between the statistical operator and the operators describing observables. One changes between all these pictures via unitary transformations. So the probabilities for Alice and Bob to find one of the photons in the entangled pair in a certain polarization state at their place at a certain time are of course the same in all pictures. So the answer to the question whether "the state is the cause of the correlations" cannot depend on the choice of the picture of time evolution. This is so by construction: QT is invariant under changes of the picture of time evolution (of course, here I ignore all formal problems in connection with Haag's theorem concerning the interaction picture used to define perturbative QFT).

However, it depends somewhat on the metaphysical interpretation of "state". I think defining the meaning of the notion of "state" in QT is the key to all these debates and the misunderstandings in such debates. Of course, physics-wise a state is not "a self-adjoint normalized trace-class operator in Hilbert space"; that is its mathematical description within the theory. Physical notions must be defined as statements about objectively observable facts. That's why, in my understanding, the physical definition of "state" is that it is an equivalence class of preparation procedures. E.g., in a Stern-Gerlach experiment a silver atom is in the state with spin up in ##z## direction in the sense that I've let it go through an inhomogeneous magnetic field such that I can (with practically arbitrary accuracy) be sure that silver atoms are separated according to the two different spin states and that I can filter out the unwanted spin-down beam. For Alice and Bob in the quantum-eraser experiment the state is given by the parametric-down-conversion setup: you shine a laser beam on the appropriate birefringent crystal and just consider the polarization-entangled photons. This ensures, as proven empirically (with practically arbitrarily high accuracy), that you have photon pairs showing the correlations described by the corresponding statistical operators (or rays in Hilbert space in this case of a pure state). In this sense I say the "quantum correlations are caused by the preparation in this particular state".
 
  • #60
vanhees71 said:
I have the impression we just have a different language again. So I think we have to clarify first what we mean by "cause". Two events are causally connected if one event necessarily leads to the other event.

Let's try to define this mathematically. Let's suppose that there are only 3 events u,v,w. If u is the cause of w, then P(w|u,v) = P(w|u), otherwise u does not necessarily lead to w, since changing v can influence the distribution of w.

Let's consider a simple case without spatially separated measurements, where A is Alice's outcome, a is Alice's measurement setting, and z is the preparation procedure. In general, P(A|a,z) will not be P(A|z) so z is not the cause of A. The preparation procedure is not the cause of the measurement outcome, because the outcome also depends on the measurement setting. Using this definition of cause, the preparation procedure and measurement setting together are the cause of the outcome.
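This point can be illustrated numerically. The following is a sketch of my own, using a single qubit rather than the EPR pair; the preparation and measurement angles are hypothetical choices for illustration:

```python
import numpy as np

# Preparation z: a qubit prepared spin-up along z, i.e. the state |0>.
psi = np.array([1.0, 0.0], dtype=complex)

def p_up(setting):
    """Born-rule probability of outcome 'up' when measuring along an axis
    tilted by `setting` from z: P(A = up | setting, z) = cos^2(setting/2)."""
    up_vec = np.array([np.cos(setting / 2), np.sin(setting / 2)], dtype=complex)
    return float(np.abs(up_vec.conj() @ psi) ** 2)

# The outcome distribution depends on the measurement setting, not only on
# the preparation, so P(A|a,z) != P(A|z) in general: by the conditional-independence
# criterion, the setting is part of the cause of the outcome.
assert np.isclose(p_up(0.0), 1.0)        # measure along z: always 'up'
assert np.isclose(p_up(np.pi / 2), 0.5)  # measure along x: 50/50
assert np.isclose(p_up(np.pi), 0.0)      # measure along -z: never 'up'
```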
 
  • #61
To begin with, I don't understand this formal definition of "causality". It means that no matter what v is, it doesn't influence w: u alone leads to the same probability distribution for w. But this is not necessarily true even for classical probability experiments, since of course there may also be causal influences in addition to u, which change the probability distribution.

The rest is even more obscure to me. What else than the preparation procedure in an entangled state could be the cause for the correlations described by the entanglement? For me that's almost a tautology.
 
  • #62
vanhees71 said:
To begin with, I don't understand this formal definition of "causality". It means that no matter what v is, it doesn't influence w: u alone leads to the same probability distribution for w. But this is not necessarily true even for classical probability experiments, since of course there may also be causal influences in addition to u, which change the probability distribution.

Yes, in reality there may be other causal influences in addition to u, which change the probability distribution. The point is that causal influences, when manipulated, affect the probability distribution, and non-causes, when manipulated, do not affect the probability distribution. So if there are only two events u and v that precede w, we say that u is a causal influence and v is not a causal influence if P(w|u,v) = P(w|u).

vanhees71 said:
The rest is even more obscure to me. What else than the preparation procedure in an entangled state could be the cause for the correlations described by the entanglement? For me that's almost a tautology.

But that is clearly not the case in a frame in which the measurements are not simultaneous. In the formalism, the state collapses after the first measurement. If you say that the state reflects a preparation procedure (which is correct), why don't you count the state after collapse as being prepared by the first measurement?
 
  • #63
But if you assume causality, then the collapse assumption leads to a contradiction! Assume that A registers her photon polarization before B in your new frame. Assuming that the collapse causes Bob's measurement result would mean that you affect an event (B's measurement of his photon's polarization) that's spacelike separated in this new frame. In addition you can find another reference frame where B's measurement process is before A's. Then your collapse hypothesis would mean that in this frame B causally affects A, in contradiction to the conclusion in the other frame, where A's measurement causally affects B's measurement. That's the whole point of EPR's criticism.

For me, from a modern point of view, it's not a criticism of QT in the minimal interpretation, but only of a QT where you make this collapse hypothesis, and this hypothesis is superfluous for predicting the outcome of experiments with QT. That's why I abandon the collapse hypothesis and conclude that the cause of the quantum correlations is for sure not the measurement of the polarization of one of the photons but the preparation procedure in a polarization-entangled two-photon state at the very beginning.
 
  • #64
vanhees71 said:
But if you assume causality, then the collapse assumption leads to a contradiction! Assume that A registers her photon polarization before B in your new frame. Assuming that the collapse causes Bob's measurement result would mean that you affect an event (B's measurement of his photon's polarization) that's spacelike separated in this new frame. In addition you can find another reference frame where B's measurement process is before A's. Then your collapse hypothesis would mean that in this frame B causally affects A, in contradiction to the conclusion in the other frame, where A's measurement causally affects B's measurement. That's the whole point of EPR's criticism.

For me, from a modern point of view, it's not a criticism of QT in the minimal interpretation, but only of a QT where you make this collapse hypothesis, and this hypothesis is superfluous for predicting the outcome of experiments with QT. That's why I abandon the collapse hypothesis and conclude that the cause of the quantum correlations is for sure not the measurement of the polarization of one of the photons but the preparation procedure in a polarization-entangled two-photon state at the very beginning.

My point is that I don't understand why your interpretation is minimal. In a minimal interpretation, the state is not real, so it is not even a cause, and we don't care if we have a cause, do we? Here RCS is not rejected, but the correlations do not have a cause within the theory (option A).

To me it seems that if you treat the state as a cause, then you are treating the state as physical, which is fine within a minimal interpretation FAPP. But then you should do so consistently, so that the state after collapse is also a cause. Here RCS is rejected, and the correlations have a cause (option B).

Copenhagen allows both A and B, because they are on different levels. But can one say on the same level that RCS is kept and the correlations have a cause? That seems very problematic.

Edit: Strike this [Anyway, let me say a bit more why my definition of a cause makes sense (it's not my definition, most biologists use it). Let's consider EPR again. Let A be Alice's outcome, a be Alice's measurement setting, B be Bob's outcome, b be Bob's measurement setting, and z be the preparation procedure. By considering Alice's reduced density matrix, we know that P(A|B,a,b,z)=P(A|a,z). So Alice's outcome has the preparation procedure and her measurement settings as causes, but Bob's outcome and measurement setting are not causes of Alice's outcome. This is consistent with relativistic causality, since Alice's measurement setting and the preparation procedure are both in her past light cone. So this is an example that shows that my definition of cause is a reasonable one.]
 
  • #65
I have to think about your Edit first. I think it's a bit more complicated than that. You have to distinguish between the statistical description of the whole ensemble and that of the subensembles after comparing the measurement protocols (postselection), which either erase the which-way information (restoring the interference pattern) or gain which-way information (leading to no interference pattern). Of course, these are two different, mutually exclusive measurement protocols, and in this sense the interference pattern and the which-way information are complementary in Bohr's sense.

What's also not so clear to me, and which may also be a key to resolving our troubles with the right interpretation, is the question whether QT is an extension of usual probability theory or not. Some people even talk about "quantum logic", i.e., see the necessity to define quantum theory as an alteration of the very foundations of logic and set theory, which of course is also closely related to the foundations of usual probability theory (say, in the axiomatic system according to Kolmogorov).

However, also a minimal interpretation must say what the meaning of the state in the physical world is, and this, in my understanding of a minimal interpretation, is just given by Born's rule, i.e., the probabilistic statements about measurements. On the other hand you must also be able to associate a real-world situation with the formally defined state (statistical operator), and this is why the state is also operationally defined as an equivalence class of preparation procedures, which let you prepare the real system in a way such that it is described by this state. This is a very tricky point in the whole quantum business. In my opinion it's best worked out in Asher Peres's book "Quantum Theory: Concepts and Methods".
 
  • #66
What I meant by the edit in post #65 is to ignore it - I don't think it's correct.
 
  • #67
atyy said:
Let's try to define this mathematically. Let's suppose that there are only 3 events u,v,w. If u is the cause of w, then P(w|u,v) = P(w|u), otherwise u does not necessarily lead to w, since changing v can influence the distribution of w.

atyy said:
So if there are only two events u and v that precede w, we say that u is a causal influence and v is not a causal influence if P(w|u,v) = P(w|u)

Therein lies the problem. There is a difference between the following statements:

1) If u is a cause of w but v is not a cause of w then P(w|u,v) = P(w|u)
2) u is a cause of w but v is not a cause of w if P(w|u,v) = P(w|u).

(1) is correct, and (2) is wrong (compounded syllogistic fallacy) as can be seen from Jaynes' simple urn example:

We have an urn with 1 red ball and 1 white ball, drawn blindfolded without replacement
w = "Red on first draw"
u = "Red on second draw"
v = "There is a full moon"

P(w|u,v) = P(w|u).

Would you say "u = Red was picked on the second draw" is a cause of "w = Red was picked on the first draw"?

Those who believe this may end up with the conclusion that the first pick did not have a result until the second pick was revealed, which then collapsed it, i.e., retro-causation. It is easy to find similar examples that can easily be misunderstood as "non-locality".
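Jaynes' urn is easy to simulate. The following sketch (my own illustration, not from the thread) confirms that the screening-off condition P(w|u,v) = P(w|u) holds even though u, the later draw, is obviously not a cause of w:

```python
import random

def draw_urn(rng):
    """One run of Jaynes' urn: 1 red and 1 white ball, drawn blindfolded
    without replacement, plus an irrelevant side event (a full moon)."""
    balls = ["red", "white"]
    rng.shuffle(balls)
    w = balls[0] == "red"   # w: red on first draw
    u = balls[1] == "red"   # u: red on second draw
    v = rng.random() < 0.5  # v: there is a full moon
    return w, u, v

rng = random.Random(42)
runs = [draw_urn(rng) for _ in range(100_000)]

# Estimate P(w|u) and P(w|u,v): both are 0, since "red on second draw"
# logically fixes "white on first draw" when drawing without replacement.
given_u = [w for w, u, v in runs if u]
given_uv = [w for w, u, v in runs if u and v]
p_w_u = sum(given_u) / len(given_u)
p_w_uv = sum(given_uv) / len(given_uv)

# Screening off holds, P(w|u,v) = P(w|u), yet u (the later draw) is plainly
# not a cause of w (the earlier draw): the condition detects correlation
# and screening, not causal direction.
assert abs(p_w_u - p_w_uv) < 0.01
assert p_w_u < 0.01
```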
 
  • #68
vanhees71 said:
I've to think about your Edit first. I think, it's a bit more complicated than that. You have to distinguish between the statistical description of the whole ensemble or the subensembles after comparing the measurement protocols (postselection) which either erase the which-way information (restoring the interference pattern) or gain which-way information (leading to no interference pattern). Of course, these are two different mutually exclusive measurement protocols and in this sense the interference pattern and the which-way information are complementary in Bohr's sense.

Let's talk about EPR, just to make things easier, and explicitly not consider retrocausation.

vanhees71 said:
What's also not so clear to me, and which may be also a key to resolve our troubles with the right interpretation, is the question, whether QT is also an extension of usual probability theory or not. Some people even talk about "quantum logics", i.e., see the necessity to define quantum theory as an alteration of the very foundations of logics and set theory, which of course is also closely related to the foundation of usual probability theory (say in the axiomatic system according to Kolmogorov).

In the usual Copenhagen interpretation in which the state is considered physical FAPP, there isn't a big change from normal probability. The only difference is that the pure states are rays in Hilbert space. There is also a common sense causality, except that it is not relativistic causality. However, there is no conflict with relativity, since relativity does not require relativistic causality, and only requires that there is no superluminal transmission of classical information. It's only if one wants to maintain relativistic causality and the usual meaning of causation that one cannot use the familiar definitions of causal explanation.

vanhees71 said:
However, also a minimal interpretation must say what the meaning of the state in the physical world is, and in my understanding of a minimal interpretation this is just given by Born's rule, i.e., the probabilistic statements about measurements. On the other hand you must also be able to associate a real-world situation with the formally defined state (statistical operator), and this is why the state is also operationally defined as an equivalence class of preparation procedures, which let you prepare the real system in a way such that it is described by this state. This is a very tricky point in the whole quantum business. In my opinion it's best worked out in Asher Peres's book "Quantum Theory: Concepts and Methods".

Yes, that is not a problem. The state is an equivalence class of preparation procedures that yield the same measurement outcome distributions. One doesn't have to go to Peres for that; it is standard Copenhagen. The question is whether one can have relativistic causality and also a local explanation for the correlations. For the usual definitions of causal explanation, the answer is no. It is often said that the Bell inequalities rule out local realism, which is vague, since there are several possible meanings of realism. However, one possible trade-off between locality and realism is:

(1) Accept that the correlations have no cause (the entangled state is not real, and so cannot be a cause)
(2) Accept that the entangled state is real FAPP, and together with the measurement settings can explain the correlations, but lose locality.

Again, quantum theory is local in many senses. Letting A and B be the outcomes, a and b the settings, and z the preparation procedure, we can consider quantum mechanics local because ##P(A|a,b,z)=P(A|a,z)##, meaning that the distant measurement setting does not affect the distribution of local outcomes. There is no superluminal signalling. There is commutation of spacelike-separated observables, and cluster decomposition holds. But these are different notions from local causality or Einstein causality.
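The no-signalling sense of locality can be checked numerically for the standard singlet-state correlations. A sketch I am adding for clarity (the joint distribution is the textbook one for spin measurements on a spin-1/2 singlet):

```python
import numpy as np

# Singlet-state correlations: P(A, B | a, b) = 1/4 * (1 - A*B*cos(a - b)),
# with outcomes A, B = ±1 and analyzer angles a, b.
def joint_prob(A, B, a, b):
    return 0.25 * (1 - A * B * np.cos(a - b))

def marginal_A(A, a, b):
    # Sum the joint distribution over the distant outcome B.
    return sum(joint_prob(A, B, a, b) for B in (+1, -1))

# The local marginal is 1/2 for every distant setting b: P(A|a,b) = P(A|a),
# so the distant setting cannot be used to signal.
for b in np.linspace(0, np.pi, 7):
    assert np.isclose(marginal_A(+1, 0.3, b), 0.5)
```

The cosine term, which carries all the correlation, cancels in the marginal; that cancellation is exactly the statement ##P(A|a,b,z)=P(A|a,z)##.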
 
  • #69
LocalCausality.jpg


Here is a simple way to see why Einstein causality is ruled out. A and B are the measurement outcomes, S and T are the measurement settings, and ##\lambda## is the preparation procedure. The arrows indicate possible causal influences, and this is consistent with relativistic causality because the two possible causes of event A are S and ##\lambda##, both of which are in the past light cone of A; and the two possible causes of event B are T and ##\lambda##, both of which are in the past light cone of B. However, the point of the Bell inequality violation is that this causal structure is inconsistent with quantum mechanics.

The diagram is Figure 19 of Woods and Spekkens http://arxiv.org/abs/1208.4119.
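For concreteness, here is a numerical check (my own sketch, not from the thread) that the quantum singlet correlation ##E(a,b) = -\cos(a-b)## violates the CHSH bound ##|S| \le 2## implied by such a causal structure:

```python
import numpy as np

# Quantum prediction for the singlet state.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH settings (angles in radians).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
# |S| = 2*sqrt(2) ≈ 2.83 > 2: no assignment of causes respecting the
# diagram's structure can reproduce these correlations.
assert abs(S) > 2
```

Any model with the diagram's causal structure obeys ##|S| \le 2##, so the quantum value ##2\sqrt{2}## is the quantitative content of "inconsistent with quantum mechanics".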
 
  • #70
atyy said:
View attachment 76513
Here is a simple way to see why Einstein causality is ruled out. A and B are the measurement outcomes, S and T are the measurement settings and ##\lambda## is the preparation procedure. The arrows indicate possible causal influences, and this is consistent with relativistic causality because the two possible causes of event A are S and ##\lambda##, both of which are in the past light cone of A; and the two possible causes of event B are T and ##\lambda## both of which are in the past light cone of B.
This is perfectly fine. Nothing in the above is controversial or "rules out" Einstein Causality. However, you then say

However, the point of the Bell inequality violation is that this causal structure is inconsistent with quantum mechanics.
And the point I've been trying to explain to you is that the logic of that conclusion is inconsistent with the classical probability treatment of post-selected experiments, even in cases where Einstein Causality demonstrably holds. So there is absolutely no difficulty with Einstein Causality.

Secondly, your diagram is incomplete. You need to draw an arrow from both A and B to a new circle labelled "POSTPROCESSING", and then two more arrows from there to two new circles labelled ##A_B## and ##B_A## (read as: A results filtered according to B results, and B results filtered according to A results).
Then it is clear that the results A and B never violate Bell's inequality on their own; rather, it is ##A_B## and ##B_A## that violate Bell's inequality. Post-processing is an integral component of such experiments; in fact, I would argue that it is part of the "preparation procedure", though some may object that it happens afterwards. But if the state is not physical and only represents information about the real physical situation, then the "preparation procedure" of the state can include information gained well after the physical situation happened, without in any way violating Einstein Causality (which is always ontological). ##\lambda## and "POSTPROCESSING" go hand in hand, so you cannot say it is the post-processing alone that "causes" the correlation. Note again that A and B do not violate the inequalities but ##A_B## and ##B_A## do.
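To make the filtering step concrete, here is a minimal sketch of the post-processing step described above (the record format and function name are hypothetical, purely for illustration):

```python
# Hypothetical record format: each trial yields
# (setting_a, outcome_A, setting_b, outcome_B).
def postselect(records, keep_B):
    """Filter the A-side results according to the B-side outcome,
    producing the subensemble A_B."""
    return [(sa, A) for (sa, A, sb, B) in records if B == keep_B]

raw = [("a1", +1, "b1", -1), ("a1", -1, "b1", -1), ("a2", +1, "b2", +1)]
A_given_Bminus = postselect(raw, keep_B=-1)
# Only trials with B = -1 survive; the interesting correlations are
# statements about these subensembles, not about the raw A stream alone.
assert A_given_Bminus == [("a1", +1), ("a1", -1)]
```

Whether this filtering can fully account for the Bell violations is, of course, exactly the point under dispute in this thread; the sketch only shows where the subensembles ##A_B## and ##B_A## come from.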
 
  • Like
Likes vanhees71