How measurement in the double-slit experiment affects the result

Summary
The discussion centers on the impact of measurement in the double slit experiment and its implications for quantum mechanics, particularly regarding entangled particles. It highlights how retaining or destroying information about one half of an entangled pair influences the behavior of the other, leading to either wave or particle results. The conversation explores whether the measurement method itself or the persistence of information causes the collapse of quantum states, emphasizing the role of decoherence in this process. It is noted that treating the measuring apparatus as either classical or quantum yields consistent results, but the interaction with the measurement device is crucial for understanding decoherence. Ultimately, the complexities of measurement and information in quantum mechanics reveal that while decoherence can sometimes appear reversible, true erasure of information does not occur, as different measurements yield different outcomes based on the questions posed.
  • #61
To begin with, I don't understand this formal definition of "causality". It seems to say that no matter what v is, it doesn't influence w; one only needs u to lead to the same probability outcomes for w. But this is not necessarily true even for classical probability experiments, since there may also be causal influences in addition to u which change the probability distribution.

The rest is even more obscure to me. What, other than the preparation procedure in an entangled state, could be the cause of the correlations described by the entanglement? To me that's almost a tautology.
 
  • #62
vanhees71 said:
To begin with, I don't understand this formal definition of "causality". It seems to say that no matter what v is, it doesn't influence w; one only needs u to lead to the same probability outcomes for w. But this is not necessarily true even for classical probability experiments, since there may also be causal influences in addition to u which change the probability distribution.

Yes, in reality there may be other causal influences in addition to u which change the probability distribution. The point is that causal influences, when manipulated, affect the probability distribution, and non-causes, when manipulated, do not. So if there are only two events u and v that precede w, we say that u is a causal influence and v is not a causal influence if P(w|u,v) = P(w|u).
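As a toy illustration (not from the thread), this conditional-independence criterion can be checked numerically on a hand-built joint distribution in which w depends on u but not on v:

```python
from itertools import product

# Toy joint distribution over three binary events (u, v, w), constructed
# so that w depends on u but not on v: P(w=1|u) = 0.9 if u=1 else 0.2.
p_u = {0: 0.5, 1: 0.5}
p_v = {0: 0.3, 1: 0.7}
p_w_given_u_table = {0: 0.2, 1: 0.9}

joint = {}
for u, v, w in product([0, 1], repeat=3):
    pw = p_w_given_u_table[u] if w == 1 else 1 - p_w_given_u_table[u]
    joint[(u, v, w)] = p_u[u] * p_v[v] * pw

def p_w_given(u=None, v=None, w=1):
    """P(w | conditioning events), computed by summing the joint table."""
    num = sum(p for (uu, vv, ww), p in joint.items()
              if ww == w and (u is None or uu == u) and (v is None or vv == v))
    den = sum(p for (uu, vv, ww), p in joint.items()
              if (u is None or uu == u) and (v is None or vv == v))
    return num / den

# P(w|u,v) = P(w|u) for every value of v: v is not a causal influence on w.
for u, v in product([0, 1], repeat=2):
    assert abs(p_w_given(u=u, v=v) - p_w_given(u=u)) < 1e-12
print("P(w|u,v) = P(w|u) holds for all u, v")
```

Here the criterion gives the right answer because v really was built in as causally irrelevant; the urn example later in the thread shows why the converse inference can fail.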

vanhees71 said:
The rest is even more obscure to me. What, other than the preparation procedure in an entangled state, could be the cause of the correlations described by the entanglement? To me that's almost a tautology.

But that is clearly not the case in a frame in which the measurements are not simultaneous. In the formalism, the state collapses after the first measurement. If you say that the state reflects a preparation procedure (which is correct), why don't you count the state after collapse as being prepared by the first measurement?
 
  • #63
But if you assume causality, then the collapse assumption leads to a contradiction! Assume that A registers her photon's polarization before B in your new frame. Assuming that the collapse causes B's measurement result would mean that you affect an event (B's measurement of his photon's polarization) that is spacelike separated in this new frame. In addition, you can find another reference frame in which B's measurement occurs before A's. Then your collapse hypothesis would mean that in this frame B causally affects A, in contradiction to the conclusion in the other frame, where A's measurement causally affects B's. That's the whole point of EPR's criticism.

For me, from a modern point of view, it's not a criticism of QT in the minimal interpretation, but only of a QT where you make this collapse hypothesis, and this hypothesis is superfluous for predicting the outcomes of experiments by QT. That's why I abandon the collapse hypothesis and conclude that the cause of the quantum correlations is certainly not the measurement of the polarization of one of the photons, but the preparation procedure in a polarization-entangled two-photon state at the very beginning.
 
  • #64
vanhees71 said:
But if you assume causality, then the collapse assumption leads to a contradiction! Assume that A registers her photon's polarization before B in your new frame. Assuming that the collapse causes B's measurement result would mean that you affect an event (B's measurement of his photon's polarization) that is spacelike separated in this new frame. In addition, you can find another reference frame in which B's measurement occurs before A's. Then your collapse hypothesis would mean that in this frame B causally affects A, in contradiction to the conclusion in the other frame, where A's measurement causally affects B's. That's the whole point of EPR's criticism.

For me, from a modern point of view, it's not a criticism of QT in the minimal interpretation, but only of a QT where you make this collapse hypothesis, and this hypothesis is superfluous for predicting the outcomes of experiments by QT. That's why I abandon the collapse hypothesis and conclude that the cause of the quantum correlations is certainly not the measurement of the polarization of one of the photons, but the preparation procedure in a polarization-entangled two-photon state at the very beginning.

My point is that I don't understand why your interpretation is minimal. In a minimal interpretation, the state is not real, so it is not even a cause, and we don't care if we have a cause, do we? Here RCS is not rejected, but the correlations do not have a cause within the theory (option A).

To me it seems that if you treat the state as a cause, then you are treating the state as physical, which is fine within a minimal interpretation FAPP. But then you should do so consistently, so that the state after collapse is also a cause. Here RCS is rejected, and the correlations have a cause (option B).

Copenhagen allows both A and B, because they are on different levels. But can one say, on the same level, that RCS is kept and the correlations have a cause? That seems very problematic.

Edit: Strike this [Anyway, let me say a bit more why my definition of a cause makes sense (it's not my definition, most biologists use it). Let's consider EPR again. Let A be Alice's outcome, a be Alice's measurement setting, B be Bob's outcome, b be Bob's measurement setting, and z be the preparation procedure. By considering Alice's reduced density matrix, we know that P(A|B,a,b,z)=P(A|a,z). So Alice's outcome has the preparation procedure and her measurement settings as causes, but Bob's outcome and measurement setting are not causes of Alice's outcome. This is consistent with relativistic causality, since Alice's measurement setting and the preparation procedure are both in her past light cone. So this is an example that shows that my definition of cause is a reasonable one.]
 
  • #65
I have to think about your Edit first. I think it's a bit more complicated than that. You have to distinguish between the statistical description of the whole ensemble and the subensembles obtained after comparing the measurement protocols (postselection), which either erase the which-way information (restoring the interference pattern) or gain which-way information (leading to no interference pattern). Of course, these are two different, mutually exclusive measurement protocols, and in this sense the interference pattern and the which-way information are complementary in Bohr's sense.

What's also not so clear to me, and which may also be a key to resolving our troubles with the right interpretation, is the question of whether QT is an extension of usual probability theory or not. Some people even talk about "quantum logic", i.e., they see the necessity to define quantum theory as an alteration of the very foundations of logic and set theory, which of course is also closely related to the foundations of usual probability theory (say, in the axiomatic system according to Kolmogorov).

However, even a minimal interpretation must say what the meaning of the state is in the physical world, and in my understanding of a minimal interpretation this is given just by Born's rule, i.e., by the probabilistic statements about measurements. On the other hand, you must also be able to associate a real-world situation with the formally defined state (statistical operator), and this is why the state is also operationally defined as an equivalence class of preparation procedures, which let you prepare the real system in such a way that it is described by this state. This is a very tricky point in the whole quantum business. In my opinion it's best worked out in Asher Peres's book "Quantum Theory: Concepts and Methods".
 
  • #66
What I meant by the edit in post #65 is that you should ignore it - I don't think it's correct.
 
  • #67
atyy said:
Let's try to define this mathematically. Let's suppose that there are only 3 events u,v,w. If u is the cause of w, then P(w|u,v) = P(w|u), otherwise u does not necessarily lead to w, since changing v can influence the distribution of w.

atyy said:
So if there are only two events u and v that precede w, we say that u is a causal influence and v is not a causal influence if P(w|u,v) = P(w|u)

Therein lies the problem. There is a difference between the following statements:

1) If u is a cause of w but v is not a cause of w then P(w|u,v) = P(w|u)
2) u is a cause of w but v is not a cause of w if P(w|u,v) = P(w|u).

(1) is correct, while (2) is wrong (a compounded syllogistic fallacy), as can be seen from Jaynes' simple urn example:

We have an urn with 1 red ball and 1 white ball, drawn blindfolded without replacement
w = "Red on first draw"
u = "Red on second draw"
v = "There is a full moon"

P(w|u,v) = P(w|u).

Would you say "u = Red was picked on the second draw" is a cause of "w = Red was picked on the first draw"?

Those who believe this may end up with the conclusion that the first pick did not have a definite result until the second pick was revealed, which then "collapsed" it, i.e. retro-causation. It is easy to find similar examples that can easily be misunderstood as "non-locality".
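Jaynes' urn example can be simulated directly. The sketch below (illustrative only, not from the thread) confirms that P(w|u,v) = P(w|u) holds even though u happens after w, so the criterion alone cannot establish that u causes w:

```python
import random

random.seed(0)
N = 100_000
counts = {"u": 0, "u_and_w": 0, "u_and_v": 0, "u_v_w": 0}

for _ in range(N):
    urn = ["red", "white"]
    random.shuffle(urn)
    first, second = urn            # two draws without replacement
    w = first == "red"             # w: red on first draw
    u = second == "red"            # u: red on second draw
    v = random.random() < 0.5      # v: "full moon", independent of the urn

    if u:
        counts["u"] += 1
        counts["u_and_w"] += w
        if v:
            counts["u_and_v"] += 1
            counts["u_v_w"] += w

p_w_given_u = counts["u_and_w"] / counts["u"]
p_w_given_uv = counts["u_v_w"] / counts["u_and_v"]

# With one red and one white ball, "red on second draw" implies "white on
# first draw", so both conditionals are exactly 0: a perfect (anti-)
# correlation, yet the later draw obviously does not cause the earlier one.
print(p_w_given_u, p_w_given_uv)
```

The equality P(w|u,v) = P(w|u) holds (both sides are 0), yet reading it as "u causes w" would be retro-causation, exactly the fallacy in statement (2).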
 
  • #68
vanhees71 said:
I have to think about your Edit first. I think it's a bit more complicated than that. You have to distinguish between the statistical description of the whole ensemble and the subensembles obtained after comparing the measurement protocols (postselection), which either erase the which-way information (restoring the interference pattern) or gain which-way information (leading to no interference pattern). Of course, these are two different, mutually exclusive measurement protocols, and in this sense the interference pattern and the which-way information are complementary in Bohr's sense.

Let's talk about EPR, just to make things easier, and explicitly not consider retrocausation.

vanhees71 said:
What's also not so clear to me, and which may also be a key to resolving our troubles with the right interpretation, is the question of whether QT is an extension of usual probability theory or not. Some people even talk about "quantum logic", i.e., they see the necessity to define quantum theory as an alteration of the very foundations of logic and set theory, which of course is also closely related to the foundations of usual probability theory (say, in the axiomatic system according to Kolmogorov).

In the usual Copenhagen interpretation in which the state is considered physical FAPP, there isn't a big change from normal probability. The only difference is that the pure states are rays in Hilbert space. There is also a common sense causality, except that it is not relativistic causality. However, there is no conflict with relativity, since relativity does not require relativistic causality, and only requires that there is no superluminal transmission of classical information. It's only if one wants to maintain relativistic causality and the usual meaning of causation that one cannot use the familiar definitions of causal explanation.

vanhees71 said:
However, even a minimal interpretation must say what the meaning of the state is in the physical world, and in my understanding of a minimal interpretation this is given just by Born's rule, i.e., by the probabilistic statements about measurements. On the other hand, you must also be able to associate a real-world situation with the formally defined state (statistical operator), and this is why the state is also operationally defined as an equivalence class of preparation procedures, which let you prepare the real system in such a way that it is described by this state. This is a very tricky point in the whole quantum business. In my opinion it's best worked out in Asher Peres's book "Quantum Theory: Concepts and Methods".

Yes, that is not a problem. The state is an equivalence class of preparation procedures that yield the same measurement outcome distributions. One doesn't have to go to Peres for that; it is standard Copenhagen. The question is whether one can have relativistic causality and a local explanation for the correlations. For the usual definitions of causal explanation, the answer is no. It is often said that the Bell inequalities rule out local realism, which is vague, since there are several possible different meanings of realism. However, one possible trade-off between locality and realism is:

(1) Accept that the correlations have no cause (the entangled state is not real, and so cannot be a cause)
(2) Accept that the entangled state is real FAPP, and together with the measurement settings can explain the correlations, but lose locality.

Again, quantum theory is local in many senses. Letting A and B be the outcomes, a and b be the settings, and z be the preparation procedure, we can consider quantum mechanics to be local because P(A|a,b,z) = P(A|a,z), meaning that the distant measurement setting does not affect the distribution of local outcomes. There is no superluminal signalling. Spacelike-separated observables commute, and cluster decomposition holds. But these are different notions from local causality or Einstein causality.
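The no-signalling statement P(A|a,b,z) = P(A|a,z) can be verified for a concrete state. The sketch below (my own illustration, assuming a singlet preparation and spin measurements at angles in the x-z plane) checks that Alice's marginal is independent of Bob's setting:

```python
import numpy as np

# Singlet state |01> - |10> (normalized) of two qubits.
ket = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

def proj(angle, outcome):
    """Projector onto spin outcome +1 or -1 along an axis at `angle` in the x-z plane."""
    v = np.array([np.cos(angle / 2), np.sin(angle / 2)])
    P = np.outer(v, v)
    return P if outcome == +1 else np.eye(2) - P

def p_A(outcome, a, b):
    """P(A = outcome | settings a, b): sum Born-rule joint probabilities over Bob's outcome."""
    return sum(ket @ np.kron(proj(a, outcome), proj(b, o_b)) @ ket
               for o_b in (+1, -1))

a = 0.3
for b1, b2 in [(0.0, 1.1), (0.7, 2.4)]:
    # Alice's marginal does not depend on Bob's setting: no signalling.
    assert abs(p_A(+1, a, b1) - p_A(+1, a, b2)) < 1e-12
print("P(A|a,b,z) = P(A|a,z): Alice's marginal is independent of b")
```

For the singlet, Alice's reduced state is maximally mixed, so her marginal is 1/2 for every setting; what depends on both settings are only the correlations.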
 
  • #69
[Image: LocalCausality.jpg]

Here is a simple way to see why Einstein causality is ruled out. A and B are the measurement outcomes, S and T are the measurement settings, and ##\lambda## is the preparation procedure. The arrows indicate possible causal influences, and this is consistent with relativistic causality because the two possible causes of event A are S and ##\lambda##, both of which are in the past light cone of A; and the two possible causes of event B are T and ##\lambda##, both of which are in the past light cone of B. However, the point of the Bell inequality violation is that this causal structure is inconsistent with quantum mechanics.

The diagram is Figure 19 of Woods and Spekkens http://arxiv.org/abs/1208.4119.
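A quick numerical check of this point (a textbook sketch, not from the thread): for the singlet state, the correlation at measurement angles a, b is E(a,b) = -cos(a-b), and at the standard CHSH settings the quantum value |S| = 2√2 exceeds the bound 2 that any causal structure of the kind in the diagram (common cause ##\lambda## plus free local settings) must satisfy.

```python
import numpy as np

def E(a, b):
    """Singlet correlation for spin measurements at angles a and b: -cos(a - b)."""
    return -np.cos(a - b)

# Standard optimal CHSH settings for the singlet.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

# CHSH combination: any local-causal model obeys |S| <= 2.
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # ~2.828 = 2*sqrt(2), violating the bound of 2
```

The violation is the formal statement that no assignment of ##\lambda##-determined local response functions reproduces the quantum correlations, which is why the causal structure in the figure fails.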
 
  • #70
atyy said:
View attachment 76513
Here is a simple way to see why Einstein causality is ruled out. A and B are the measurement outcomes, S and T are the measurement settings, and ##\lambda## is the preparation procedure. The arrows indicate possible causal influences, and this is consistent with relativistic causality because the two possible causes of event A are S and ##\lambda##, both of which are in the past light cone of A; and the two possible causes of event B are T and ##\lambda##, both of which are in the past light cone of B.
This is perfectly fine. Nothing in the above is controversial or "rules out" Einstein Causality. However, you then say

However, the point of the Bell inequality violation is that this causal structure is inconsistent with quantum mechanics.
And the point I've been trying to explain to you is that the logic of that conclusion is inconsistent with the classical probability treatment of post-selected experiments even when Einstein Causality demonstrably holds. So there is absolutely no difficulty with Einstein Causality.

Secondly, your diagram is incomplete. You need to draw an arrow from both A and B to a new circle labelled "POSTPROCESSING", and then two more arrows from there to two new circles labelled ##A_B## and ##B_A## (read as A results filtered according to B results, and B results filtered according to A results).
Then it is clear that the results A and B never violate Bell's inequality on their own; rather, it is ##A_B## and ##B_A## that violate Bell's inequality. Post-processing is an integral component of such experiments; in fact, I would argue that it is part of the "preparation procedure", though some may object that it happens afterwards. But if the state is not physical and only represents information about the real physical situation, then the "preparation procedure" of the state can include information gained well after the physical situation happened, without in any way violating Einstein Causality (which is always ontological). ##\lambda## and "POSTPROCESSING" go hand in hand, so you cannot say it is the post-processing alone that "causes" the correlation. Note again that A and B do not violate the inequalities, but ##A_B## and ##B_A## do.
 
