How measurement in the double slit experiment affects result

In summary: More advanced variations on the double slit experiment (quantum erasure and the like) make it clear that retaining or destroying "information" about one half of an entangled pair affects the result for the other half. The results tend to show either wave-like or particle-like behavior at a detection device. Including the measuring apparatus in the analysis is essential for obtaining accurate results.
  • #1
BinaryMan
It is clear from some more advanced variations on the double slit experiment (quantum erasure, etc) that retaining or destroying "information" on one half of an entangled pair affects the result of the other. The results tend to either be wave or particle type behavior on a detection device.

I am wondering if it is the measurement method itself (by which we obtain this information) that causes the collapse of the pair? Or is it the actual information persistence about the pair? What I mean is can an experiment be created to demonstrate that I store information and one behavior happens, but if I destroy the information (say, the data on a hard drive from a monitoring device) so that it is no longer knowable, will the resulting behavior change?
 
  • #2
If we treat the measuring device as a classical device, it is the measurement itself that causes the collapse of the pair. In quantum mechanics, a measurement is something that leaves a definite macroscopic mark in the measuring apparatus, which is of course a little subjective, since what counts as a "definite macroscopic mark"? Nonetheless, one is usually not in doubt.

A complementary approach in quantum mechanics is to treat the apparatus as quantum, so that the total wave function has to include the system and the apparatus. Then, we concentrate on the system, leaving out the apparatus. We will find that regardless of whether we obtain information from the apparatus or not, the system will have decohered.

So in both cases, it is essential to include the measuring apparatus. However, we get the same answer whether the apparatus is treated as a classical device or as a quantum device. In the case in which the apparatus is a quantum device, we will of course eventually need another classical device to get the information from the quantum apparatus. In other words, the first method treats the apparatus as entirely classical, while the second method treats the apparatus as part quantum and part classical. The second method has the advantage of making it obvious that it is the interaction with the measurement device that causes decoherence, regardless of whether one obtains information from the measuring device or not.
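Here is a minimal numerical sketch (an illustration, not part of the original argument) of the second method: a system qubit in a superposition is entangled with an "apparatus" qubit by a von Neumann-type interaction, and tracing the apparatus out leaves the system decohered whether or not anyone ever reads the apparatus.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System alone, in the superposition (|0> + |1>)/sqrt(2): the density matrix
# has off-diagonal (interference) terms of 1/2.
system = (ket0 + ket1) / np.sqrt(2)
print(np.round(np.outer(system, system), 3))     # [[0.5 0.5] [0.5 0.5]]

# Von Neumann interaction |s>|ready> -> |s>|s>: system and apparatus end up
# in the entangled state (|00> + |11>)/sqrt(2).
joint = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_joint = np.outer(joint, joint)

# Concentrate on the system alone: trace out the apparatus (indices 1 and 3
# after reshaping to (system, apparatus, system', apparatus')).
rho_system = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_system, 3))                   # [[0.5 0. ] [0.  0.5]]
# The off-diagonals are gone: the system has decohered, whether or not
# anyone obtains information from the apparatus.
```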
 
  • #3
If I understand you correctly, an example of the second type is when you entangle A to B, B to C, then observe how A affects C; in this case, as long as the "measurement" is not made, they can all stay entangled so interactions ARE possible which preserve the entanglement. My understanding is that a "measurement" entails reading the polarization of light; in so doing it resolves to a single polarization and thus only passes through one slit, and the same effect occurs simultaneously in the other entangled entity (whether you measure that one or not). The "oddity" is the fact that the one you didn't measure (didn't directly seem to "interact with") still resolves even though you only interacted with it indirectly through its pair.

So the basic problem is that we cannot "read" the state of a photon without "writing" that state to it. It seems the best way to understand it is that the photon has 3 states (of polarization): x, y, and x/y superposition. In the x/y state, the photon behaves like it contains both an x and y polarized photon (I don't think that's exactly correct but you get my point). In fact both photons will behave that way but as soon as an interaction leads one photon to collapse to x or y, the other does as well.

However, it seems that certain experiments (http://www.photonics.com/Article.aspx?AID=52250, Peruzzo et al. (2012)) have occurred where a photon has transitioned from x/y to x or y and back to x/y, referred to as "unmarking" it (not entirely sure how). But we are talking about state transitions then, correct? There is the device which puts it into superposition in the first place (the beam splitter, I think) and the device which measures but also inevitably changes its state to x or y (or, I suppose, other environmental interactions).
 
  • #4
Do you know the mathematical formalism of quantum mechanics?
 
  • #5
I've read what I can take in, but I think in concepts and visuals so it's very difficult to comprehend it through equations alone (I have taken up to calculus though in college). What I want to know typically is the logic, shape, meaning or pattern behind it. I'm familiar with wave forms and probability densities to a point. I have read about the measurement problem, which consists in part of having a pure state and the post-measurement state which relates directly to what I am investigating.
 
  • #6
BinaryMan said:
I've read what I can take in, but I think in concepts and visuals so it's very difficult to comprehend it through equations alone (I have taken up to calculus though in college). What I want to know typically is the logic, shape, meaning or pattern behind it. I'm familiar with wave forms and probability densities to a point. I have read about the measurement problem, which consists in part of having a pure state and the post-measurement state which relates directly to what I am investigating.

I don't think it is such a good idea to look at the delayed-choice experiments or complicated permutations of basic things. There are ways to make quantum mechanics sound mysterious and all that, but it really isn't, except for the measurement problem. The measurement problem can be set up very simply in a way which involves treating the system and apparatus together as a quantum system. I like the description in http://arxiv.org/abs/quant-ph/0306072. See the section "Correlations and Measurements", beginning with "A convenient starting point for the discussion of the measurement problem and, more generally, of the emergence of classical behavior from quantum dynamics is the analysis of quantum measurements due to John von Neumann (1932)."
 
  • #7
BinaryMan said:
It is clear from some more advanced variations on the double slit experiment (quantum erasure, etc) that retaining or destroying "information" on one half of an entangled pair affects the result of the other. The results tend to either be wave or particle type behavior on a detection device.
First, forget this wave-particle stuff - it was basically consigned to the dustbin of history when Dirac came up with the transformation theory in late 1926 - it's basically what we call QM today. To fully get to grips with this you need to see a correct analysis of the double slit:
http://cds.cern.ch/record/1024152/files/0703126.pdf

What the erasure experiment shows is that decoherence can be undone in simple cases.
BinaryMan said:
I am wondering if it is the measurement method itself (by which we obtain this information) that causes the collapse of the pair?
It's the interaction between the measuring device and the quantum system via the process of decoherence:
http://www.ipod.org.uk/reality/reality_decoherence.asp
BinaryMan said:
I've read what I can take in, but I think in concepts and visuals so it's very difficult to comprehend it through equations alone (I have taken up to calculus though in college)..

Actually it turns out in QM things become clearer when you understand the technical detail better.

If you have done calculus, check out the following two books, written specifically for people like you:
https://www.amazon.com/dp/0465075681/?tag=pfamazon01-20
https://www.amazon.com/dp/0465036678/?tag=pfamazon01-20

There are also associated video lectures:
http://theoreticalminimum.com/
BinaryMan said:
What I want to know typically is the logic, shape, meaning or pattern behind it. I'm familiar with wave forms and probability densities to a point. I have read about the measurement problem, which consists in part of having a pure state and the post-measurement state which relates directly to what I am investigating.

If you want to know its conceptual core, then it's what's called a generalised probability theory:
http://www.scottaaronson.com/democritus/lec9.html
http://arxiv.org/pdf/1205.3833v2.pdf
http://arxiv.org/pdf/1402.6562v3.pdf

In fact it's the most reasonable such theory that allows continuous transformations between so-called pure states, or entanglement - either one singles out quantum theory from ordinary probability theory:
http://arxiv.org/pdf/quantph/0101012.pdf
http://arxiv.org/pdf/0911.0695v1.pdf

Thanks
Bill
 
  • #8
What may seem a little confusing is that if the measurement device is classical, we require an "irreversible definite macroscopic mark" to be made in order to consider the measurement to have occurred. Yet if the measurement device is treated as classical with a quantum part, then the apparent collapse is caused by decoherence, which as bhobba says is reversible in simple cases. How can the collapse be both irreversible and reversible? The answer is that if the decoherence is reversible, the observer still has full control over the measurement device and the quantum system, and there is no effective collapse from the point of view of the observer. In order for decoherence to cause effective collapse, the observer must ignore the entanglement between the measuring device and the quantum system, and concentrate only on the quantum system. By ignoring the entanglement between the measuring device and the quantum system, the observer is effectively losing information, which will prevent him from reversing the decoherence. So in both cases - whether the apparatus is classical or classical and quantum, there must be an irreversibility for the observer in order for collapse or effective collapse to occur.
 
  • #9
atyy said:
What may seem a little confusing is that if the measurement device is classical, we require an "irreversible definite macroscopic mark" to be made in order to consider the measurement to have occurred.

Just to elaborate a bit further, rigorously defining exactly when an observation has occurred is an issue. In practice it's utterly obvious - but if you are being ultra careful it's difficult. In modern times, because it's an unambiguous quantum process, an observation is usually taken to have occurred just after decoherence has caused the interference terms to fall way below detectability.

Thanks
Bill
 
  • #10
atyy said:
So in both cases - whether the apparatus is classical or classical and quantum, there must be an irreversibility for the observer in order for collapse or effective collapse to occur.

Mostly - yes.

But as the Quantum Eraser experiment shows, in some simple cases decoherence can be reversed. Because of that, do you reject it as an observation? I don't think most would. IMHO it's best, when being ultra careful, to base it on decoherence. In practice what you wrote is correct - it's only in very contrived circumstances that it can be reversed.

Thanks
Bill
 
  • #11
bhobba said:
Mostly - yes.

But as the Quantum Eraser experiment shows, in some simple cases decoherence can be reversed. Because of that, do you reject it as an observation? I don't think most would. IMHO it's best, when being ultra careful, to base it on decoherence. In practice what you wrote is correct - it's only in very contrived circumstances that it can be reversed.

There is no reversal of a measurement in the eraser, and I think "eraser" is a choice of term that emphasizes the counterintuitive view. Another way to see it is that nothing is ever erased. Irreversible marks remain irreversible. However, depending on what question one asks, one gets interference or no interference. If one asks about an unconditional probability, there is no interference. If one asks about a conditional probability, there is interference. But there is no true erasure, just that different questions have different answers.

I think it's analogous to the EPR experiments. Let's say the initial entangled state is such that Alice's and Bob's measurement outcomes are always perfectly correlated. If we ask what Alice's result is, it is an equal mix of up and down spins. But if we ask what Alice's result is conditioned on Bob getting an up spin, then Alice gets only up spins. Bob can make his measurement much later than Alice, but clearly, he is not changing Alice's results from a mixture of up and down to purely up spins. It's just that with two measurements we can now ask a conditional probability question.
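A toy sketch of that point (the setup is my illustration: perfectly correlated binary outcomes from a common preparation, nothing more): the unconditional and conditional frequencies differ purely because of the bookkeeping, not because Bob changes anything on Alice's side.

```python
import numpy as np

rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=100_000)  # 0 = down, 1 = up
alice = outcomes                             # perfect correlation, built in
bob = outcomes                               # by the common preparation

print("P(Alice up)          :", alice.mean())            # ~0.5, a 50/50 mix
print("P(Alice up | Bob up) :", alice[bob == 1].mean())  # exactly 1.0
# Bob's (possibly much later) measurement changes nothing at Alice's end;
# only the question asked of the joint record has changed.
```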
 
  • #12
atyy said:
There is no reversal of a measurement in the eraser, and I think "eraser" is a choice of term that emphasizes the counterintuitive view.

I beg to differ. This has been discussed a number of times on this forum:
https://www.physicsforums.com/threa...and-the-delayed-choice-quantum-eraser.623648/
'Decoherence is irreversible only when caused by a LARGE number of degrees of freedom. A quantum eraser involves a small number of degrees of freedom, which is why it is reversible.'

Thanks
Bill
 
  • #13
bhobba said:
I beg to differ. This has been discussed a number of times on this forum:
https://www.physicsforums.com/threa...and-the-delayed-choice-quantum-eraser.623648/
'Decoherence is irreversible only when caused by a LARGE number of degrees of freedom. A quantum eraser involves a small number of degrees of freedom, which is why it is reversible.'

I don't think we are thinking of the same degrees of freedom. I am thinking of the photon interacting with the detector, which is a large number of degrees of freedom, so the decoherence is irreversible. Also, the delayed choice eraser is a selective measurement, and hence the collapse is not equivalent to decoherence alone.

Here's what I'm saying in more technical language to see that it's correct. Let's consider the interaction between a measurement apparatus and a quantum system. If one performs a measurement and collapses the wave function and ignores the outcome, then measures an observable that is local to the quantum system, that is equivalent to decoherence which one chooses to be irreversible by concentrating only on the quantum system and measuring observables local to it. Here the decoherence must be irreversible either by lack of control or by choice, because if one has sufficient control and measures an appropriate nonlocal observable, one should be able to distinguish the global pure state from the proper mixture after collapse.
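A small numerical sketch of that last remark (my own illustration, not from the thread): the projector onto a Bell state is one such nonlocal observable, and its expectation value distinguishes the global pure state from the proper mixture left after an ignored measurement, even though local observables cannot.

```python
import numpy as np

ket00 = np.kron([1.0, 0.0], [1.0, 0.0])
ket11 = np.kron([0.0, 1.0], [0.0, 1.0])

bell = (ket00 + ket11) / np.sqrt(2)
rho_pure = np.outer(bell, bell)       # global pure state before "collapse"
rho_mixed = 0.5 * (np.outer(ket00, ket00) + np.outer(ket11, ket11))
                                      # proper mixture after an ignored outcome

# Observables local to one subsystem cannot tell the two apart...
sigma_z_local = np.kron(np.diag([1.0, -1.0]), np.eye(2))
print(np.trace(sigma_z_local @ rho_pure),
      np.trace(sigma_z_local @ rho_mixed))    # 0.0 and 0.0

# ...but the nonlocal projector onto the Bell state can.
bell_projector = np.outer(bell, bell)
print(np.trace(bell_projector @ rho_pure))    # 1.0
print(np.trace(bell_projector @ rho_mixed))   # 0.5
```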
 
  • #14
atyy said:
I don't think we are thinking of the same degrees of freedom. I am thinking of the photon interacting with the detector, which is a large number of degrees of freedom, so the decoherence is irreversible.

That I agree with.

Thanks
Bill
 
  • #15
For me the "quantum eraser experiment" is the best example for the fact that collapse ideas are very misleading at best. As all of quantum mechanics it's best understood in terms of the minimal statistical interpretation and the fact that for all practical purposes (FAPP) the frequentist (statistical) interpretation of probabilities is the only reliable interpretation. Everything else is metaphysics and hinders the understanding of the basic concepts of quantum theory. In addition, the quantum-eraser experiment is a good example for the fact that one cannot even describe it adequately without the mathematics underlying quantum theory. However, it can be utmost simplified by only considering the polarization state of the photons.

The polarization of a photon is described by a two-dimensional complex vector space. We work in a fixed reference frame, given by a Cartesian coordinate system. We assume that the photons enter the double slit from the ##z## direction. The polarization state is then spanned by the two linear-polarization states ##|\hat{x} \rangle## and ##|\hat{y} \rangle##. In the following we'll also need the left- and right-circular polarization states,
$$| L \rangle=\frac{1}{\sqrt{2}}(|\hat{x} \rangle + \mathrm{i} |\hat{y} \rangle), \quad|R \rangle=\frac{1}{\sqrt{2}}(|\hat{x} \rangle - \mathrm{i} |\hat{y} \rangle).$$
Here and in the following we assume that all these vectors are normalized to 1, e.g., ##\langle \hat{x} | \hat{x} \rangle=1##. Both the linear-polarization states and the circular-polarization states form orthonormal bases of the two-dimensional vector space of polarization states.

What's now the physical (and only physical!) meaning of these vectors? According to the minimal interpretation, they answer the question: what's the probability to measure a photon with a certain polarization (assuming ideal polarization filters and photon detectors)? If the photon is prepared in an arbitrary normalized polarization state ##|\psi \rangle##, then the probability to find it to be, say, polarized in the ##x## direction is given by Born's rule:
$$P(\hat{x}|\psi)=|\langle \hat{x}|\psi \rangle|^2.$$
There's no more, from a physicist's point of view, to be associated with the state, and no more is necessary to apply the formalism of quantum theory to real-world experiments. According to this minimal interpretation, what must an experiment look like if I want to test the above idea about the probabilities? The answer is that the experimentalist has to prepare a lot of single photons, independently of each other, in the state ##|\psi \rangle##, which is done by a practical procedure: e.g., one generates single photons of arbitrary polarization and lets them run through a polarization foil that passes photons with a given linear polarization, e.g., one in the direction given by the angle ##\phi## with respect to the ##x## axis,
$$|\hat{\phi} \rangle=\cos \phi |\hat{x} \rangle+\sin \phi |\hat{y} \rangle.$$
Then you always either get a photon of this polarization state or no photon, and you only consider the cases where you have a photon. That's called the preparation procedure. These photons, now prepared in a well-defined polarization state, you let run through a polarization foil that passes photons linearly polarized in the ##x## direction, and you count how many of the photons polarized in the ##\phi## direction run through this foil. Divide this number by the total number of prepared photons, and you get a relative frequency, which should converge (in the weak sense) to the predicted probability, if quantum theory is right. According to Born's rule, this probability is
$$P(\hat{x}|\hat{\phi})=|\langle \hat{x}|\hat{\phi} \rangle|^2=\cos^2 \phi.$$
Now you can check this hypothesis, using the usual rules for statistical evaluation of data, taught in the basic lab courses at the beginning of any physics curriculum. There's no need for complicated philosophical debates about "collapse" or the "meaning of probabilities" or anything else metaphysical. It's simply this very practical way to measure the polarization. Of course, there's no need to stress the fact that the predictions of quantum theory (QED in this case) about the polarization of photons have been tested very thoroughly, and according to all the measurements of quantum opticians, the predictions were always found to be correct with the highest accuracy; quantum optics is among the fields with the highest accuracy reached in modern experimental physics!
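For readers who like to see the procedure in code, here is a minimal Monte Carlo sketch of the described test (all parameters are invented for illustration): prepare many photons at angle ##\phi## and compare the relative frequency of passing the ##x## filter with the Born-rule prediction ##\cos^2 \phi##.

```python
import numpy as np

rng = np.random.default_rng(42)
phi = np.pi / 6                    # preparation angle w.r.t. the x axis
n_photons = 1_000_000

p_pass = np.cos(phi) ** 2          # Born's rule: |<x|phi>|^2 = cos^2(phi)
passed = rng.random(n_photons) < p_pass   # each photon passes the filter or not

print("relative frequency  :", passed.mean())  # ~0.75, with statistical scatter
print("Born-rule prediction:", p_pass)         # 0.75
```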

Now let's discuss the single-photon double-slit experiment in a quite qualitative way. Letting a single photon (with well-enough defined momentum in the ##z## direction; the double slit is located in a plane parallel to the ##xy## plane) run through a double slit leads to an unpredictable single dot on a screen placed far away from the double slit, also parallel to the ##xy## plane. Preparing very many such photons, all these single spots build up an interference pattern as expected from classical wave theory (Maxwell electrodynamics). This, however, is only true if the slits are indistinguishable for the photons, i.e., if you cannot know (even in principle) through which slit each single photon has gone. If the initial photon's polarization state is ##|\psi \rangle##, the photon distribution after the slit is described by
$$|\psi' \rangle=N [1+\exp(\mathrm{i} \varphi(x))] |\psi \rangle, \quad \varphi(x) \simeq 2 \pi \frac{x d}{l \lambda},$$
where ##x## is the position on the screen, ##d## the distance between the slits, ##l## the distance between the slits and the screen, ##\lambda## the wavelength of the photon, and ##N## an appropriate normalization factor. This you can read in any elementary textbook on optics (it doesn't even need to be a quantum textbook, because in this approximation the interference pattern is of course described by classical optics).

The detection probability, which builds up the interference pattern when a large number of photons is sent through the slits, is in this case, according to Born's rule,
$$P(x)=|N|^2 |1+\exp(\mathrm{i} \varphi(x))|^2=2|N|^2 [1+\cos \varphi(x)].$$
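A short numerical sketch of this formula (slit separation, screen distance, and wavelength are invented for illustration):

```python
import numpy as np

d, l, lam = 0.1e-3, 1.0, 650e-9    # slit distance, screen distance, wavelength (m)
x = np.linspace(-5e-3, 5e-3, 9)    # a few positions on the screen (m)

varphi = 2 * np.pi * x * d / (l * lam)
P = 2 * (1 + np.cos(varphi))       # detection probability, up to the factor |N|^2

for xi, pi_ in zip(x, P):
    print(f"x = {1e3 * xi:+.2f} mm   P/|N|^2 = {pi_:.3f}")
```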
How can we gain such which-way information? The only way is to mark the slits somehow, and for this one can use quarter-wave plates. These are made of a birefringent crystal, which has a different index of refraction for photons polarized in two perpendicular directions. They are made just thick enough that a photon polarized along one axis suffers a phase shift differing by ##\pi/2## from that of a photon polarized perpendicular to it; that's why it's called a quarter-wave plate. Now we put into one of the slits a quarter-wave plate oriented such that ##\phi_1=+\pi/4##, and into the other one oriented perpendicular to it, ##\phi_2=-\pi/4##. Again we prepare photons polarized in the ##\hat{x}## direction and do the double-slit experiment. Without the quarter-wave plates we'd find a pattern showing interference effects when looking at the result for a large ensemble of such photons. Now, with the wave plates, we mark the photons depending on which slit they go through, because those running through slit 1 will become left-circularly polarized and those running through slit 2 right-circularly polarized. Thus you only need to measure whether a photon behind the slits is left- or right-circularly polarized. Even if you don't do this measurement but simply let the photons run through this "manipulated" double slit, the interference pattern is gone. It appears in the "non-manipulated" experiment because of the relative phase shift due to the different path lengths for a photon going through the one or the other slit; superimposing the probability amplitudes and taking the modulus squared then yields an interference term, because the photons coming through the one or the other slit are still both ##x## polarized. With the quarter-wave plates put into the slits, the photon coming through slit 1 is in the polarization state ##|L \rangle## and the one coming through slit 2 is in the polarization state ##|R \rangle##. Thus in this case the state of a photon hitting the screen at position ##x## is given by
$$|\psi' \rangle= \frac{N}{\sqrt{2}} [|L \rangle + \exp(\mathrm{i} \varphi(x)) |R \rangle].$$
Since the two kets in this superposition are now perpendicular to each other, we find for the detection probability
$$P'(x)=|N|^2,$$
which is ##x## independent, i.e., the interference pattern is completely gone.

It's important to note that we don't actually need to take notice of which slit the photons have gone through. It is only important that the photons now carry this information, and that it is possible to gain this information completely by making a measurement. The reason why we can gain the complete which-way information in the setup with the quarter-wave plates is that the polarization state of a photon going through the one slit is exactly orthogonal to that of a photon going through the other, and that is decided by the very fact that we have placed the quarter-wave plates in relative orientations differing by precisely ##\pi/2##. If you distort this relative orientation, you can only determine with a certain probability whether the photon has gone through slit 1 or slit 2, i.e., you gain only partial which-way information, and the interference pattern is still partially present (but with lower contrast). In this sense, which-way information and the contrast of the interference pattern are mutually exclusive possibilities for preparing the photons running through the double slit.
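This trade-off can be checked numerically. In the sketch below (conventions and helper names are mine), the fringe visibility equals the overlap ##|\langle \psi_1|\psi_2 \rangle|## of the two marker states behind the slits; it vanishes when the plates' fast axes differ by ##\pi/2## and is only partially reduced for smaller relative angles.

```python
import numpy as np

def qwp(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([1.0, 1.0j]) @ rot.T   # pi/2 phase between the axes

x_pol = np.array([1.0, 0.0])                    # photon prepared x-polarized

for delta_deg in (90, 60, 30, 0):               # relative angle of the two plates
    delta = np.deg2rad(delta_deg)
    psi1 = qwp(+delta / 2) @ x_pol              # marker state behind slit 1
    psi2 = qwp(-delta / 2) @ x_pol              # marker state behind slit 2
    visibility = abs(np.vdot(psi1, psi2))       # fringe contrast = |<psi1|psi2>|
    print(f"relative plate angle {delta_deg:3d} deg -> visibility {visibility:.3f}")
```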

Now comes the really difficult part :-), but it's difficult only in the sense that we are not used to quantum entanglement in everyday life. The formalism is pretty straightforward. The point is that nowadays quantum opticians can quite easily create photon pairs with entangled polarizations. This was not easy some decades ago, when A. Aspect performed his groundbreaking experiments on such biphoton states, demonstrating the violation of Bell's inequality for the first time during his PhD work:

http://en.wikipedia.org/wiki/Alain_Aspect

The modern way of creating such biphotons is to shine laser light on certain types of birefringent crystals, leading to the emission of a pair of photons that are entangled in their polarization states. A composite system's state vector is described by tensor products of the single-component state vectors and (very important!) by superpositions of such direct-product states. Again, we only note the polarization state of the photon pair and look at the case where "singlet states" are prepared, i.e.,
$$|\Psi \rangle = \frac{1}{\sqrt{2}} (|\hat{x} \rangle \otimes |\hat{y} \rangle - |\hat{y} \rangle \otimes |\hat{x} \rangle).$$
We have not noted the other degrees of freedom (e.g., position). The funny thing, however, is that the two photons can travel far apart from each other without the polarization state being disturbed, i.e., all the time from their creation on they are in this state, as long as nobody disturbs (measures!) their polarization state. So it's possible to detect one photon at a place A ("Alice" measuring the photon), letting it run through a polarization filter to measure its polarization, and the other one at a far-distant place B ("Bob" measuring the photon).

The first question now is: what would Alice and Bob find when measuring their photons' polarizations? This is answered by "tracing out" the unmeasured degrees of freedom. Although the biphoton state is a pure state, a part of such a composite system (here the polarization of, say, Alice's photon) is usually in a mixture, and the corresponding statistical operator is given by
$$\hat{\rho}_A=\mathrm{Tr}_B |\Psi \rangle \langle \Psi|=\frac{1}{2} (|\hat{x} \rangle \langle \hat{x} |+|\hat{y} \rangle \langle \hat{y} |).$$
This means that before the measurement Alice does not know anything about her photon's polarization. It's a state of minimal knowledge (maximal entropy). What does that mean? Again, to make sense of all this probabilistic content of the state, one has to prepare an ensemble of equally prepared biphotons and do the experiment very often to check statistically whether the prediction of QT is correct. Here ##\hat{\rho}_A## simply describes that a so-prepared ensemble leads to totally unpolarized photons for Alice. The same holds for Bob's photons.

But now comes the very "quantic" point about such entangled states! The trick is that Alice and Bob both measure the polarization of their photons in the ##x## direction and note the time when each photon arrived. Because the photons were sent from a common place, the time stamps make it possible to ensure that you compare the states of the two photons that belong together from the very beginning, as they were prepared by parametric down-conversion in the entangled state ##|\Psi \rangle##. And at the end of the experiment, Alice and Bob can meet and compare their results. Now they can ask the following: Is there a correlation between the outcomes of Bob's and Alice's measurements of the polarization states of their photons? Quantum theory predicts the following: If Alice finds that her photon is polarized in the ##x## direction (which happens in 50% of all cases), then Bob's photon is described by the corresponding projection,
$$\hat{\rho}_{B|A \hat{x}} =\mathrm{Tr}_A \left[ (|\hat{x} \rangle \langle \hat{x}| \otimes \hat{1}) |\Psi \rangle \langle \Psi| (|\hat{x} \rangle \langle \hat{x}| \otimes \hat{1}) \right]=\frac{1}{2} |\hat{y} \rangle \langle \hat{y}|.$$
This says: If Alice measured her photon to be ##x## polarized, Bob's photon must necessarily be ##y## polarized. If Alice finds her photon to be ##y## polarized, Bob's photon must be ##x## polarized, and each case happens in 50% of all measurements.
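Here is a numerical companion to the two formulas above (my sketch, using the same basis conventions): build the singlet, trace out Bob to get Alice's maximally mixed reduced state, then project Alice onto ##\hat{x}## and extract Bob's (unnormalized) conditional state.

```python
import numpy as np

x = np.array([1.0, 0.0])   # |x>
y = np.array([0.0, 1.0])   # |y>

psi = (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2)   # the singlet |Psi>
rho = np.outer(psi, psi)

# Alice's reduced state: trace out Bob -> (1/2) * identity, totally unpolarized.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_A, 3))                 # [[0.5 0. ] [0.  0.5]]

# Condition on Alice finding x: sandwich with |x><x| (x) 1, then trace out Alice.
P_Ax = np.kron(np.outer(x, x), np.eye(2))
rho_cond = P_Ax @ rho @ P_Ax
rho_B_given_Ax = rho_cond.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
print(np.round(rho_B_given_Ax, 3))        # [[0. 0.] [0. 0.5]] = (1/2)|y><y|
# The trace 1/2 is the probability of Alice's outcome; normalized, Bob's
# photon is exactly |y>, as the formula states.
```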

Although each experimenter's ensemble of photons as a whole consists of totally unpolarized photons, there is this 100% correlation between them. It also doesn't matter whether Alice or Bob measures their photons first or whether they do it at the same time; the outcome is always the same. The 100% correlation is thus not due to the measurement of one photon but, as is also clear from the above description, due to the common preparation of the two photons by parametric down-conversion. Also, neither Alice nor Bob can know about the correlation just from their own measurements; they have to compare their measurement protocols afterwards to find the correlation. Thus, it's not possible to send any signals via the correlation by manipulating one of the photons and measuring the other. Thus the entanglement does not admit a measurement protocol that could enable us to communicate with signals faster than the speed of light in vacuum. So everything is consistent with the relativistic space-time and causality framework. We mention in passing that such measurements can be used to check Bell's inequality; you only have to measure Alice's photon polarization in another cleverly chosen direction relative to Bob's. But let's come back to the quantum eraser experiment now.

Now we use a parametric-down-converted pair of photons as described above. Alice's photon is sent through the double slit with the quarter-wave plates, again oriented at ##+\pi/4## and ##-\pi/4##. First we have to check what the quarter-wave plates do to Alice's photon within the biphoton state. To that end we note that the unitary operators describing the action of the two quarter-wave plates on an arbitrary single-photon polarization state are given by
$$\hat{Q}_{+}=|L \rangle \langle \hat{x} | + \mathrm{i} |R \rangle \langle \hat{y}|, \quad \hat{Q}_{-}=|R \rangle \langle \hat{x} | - \mathrm{i} |L \rangle \langle \hat{y}|.$$
Thus, after Alice's photon has gone through the double slit, the biphoton states corresponding to her photon passing through slit 1 (##+##) or slit 2 (##-##) are given by
$$|\Psi_{+}' \rangle = (\hat{Q}_{+} \otimes \hat{1}) |\Psi \rangle= \frac{1}{\sqrt{2}} (|L \rangle \otimes |\hat{y} \rangle - \mathrm{i} |R \rangle \otimes |\hat{x} \rangle),$$
$$|\Psi_{-}' \rangle = (\hat{Q}_{-} \otimes \hat{1}) |\Psi \rangle= \frac{1}{\sqrt{2}} (|R \rangle \otimes |\hat{y} \rangle + \mathrm{i} |L \rangle \otimes |\hat{x} \rangle),$$
and the probability distribution of Alice's photons at the screen is given by the biphoton state
$$|\Psi_1' \rangle=N(|\Psi_+' \rangle+\exp(\mathrm{i} \varphi(x)) |\Psi_-' \rangle).$$
The interference pattern is gone again, because ##|\Psi_+' \rangle## and ##|\Psi_-' \rangle## are orthogonal to each other. Obviously we can gain which-way information with 100% accuracy by measuring both photons' polarization states: if Alice measures an L-polarized photon and Bob an ##x##-polarized one, the photon must have gone through the ##-## slit, and analogously for the three other possible cases. One should note that we can put Bob so far away from Alice and the double slit that his measurement of the photon polarization state does not affect Alice's photon in any way.
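A quick numerical check of these claims (my sketch, following the conventions above): the two biphoton branches are orthogonal, so the interference term vanishes, and the joint outcome "Alice L, Bob x" only ever comes from the ##-## branch.

```python
import numpy as np

x = np.array([1.0, 0.0]); y = np.array([0.0, 1.0])
L = (x + 1j * y) / np.sqrt(2); R = (x - 1j * y) / np.sqrt(2)

psi = (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2)     # the singlet

Q_plus = np.outer(L, x) + 1j * np.outer(R, y)          # plate at +pi/4
Q_minus = np.outer(R, x) - 1j * np.outer(L, y)         # plate at -pi/4

psi_plus = np.kron(Q_plus, np.eye(2)) @ psi            # slit-1 branch
psi_minus = np.kron(Q_minus, np.eye(2)) @ psi          # slit-2 branch

print(abs(np.vdot(psi_plus, psi_minus)))               # 0.0 -> no cross term

# Joint amplitude for "Alice L, Bob x" in each branch:
L_x = np.kron(L, x)
print(abs(np.vdot(L_x, psi_plus)))                     # 0.0
print(abs(np.vdot(L_x, psi_minus)))                    # ~0.707 -> the "-" slit
```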

Now we alter the experimental setup only at Bob's place. We let him orient his polarization filter at an angle ##\alpha## relative to the ##x## axis and consider only the cases when Bob's photon is found in this state, which happens in 50% of all cases (Bob's photons are of course totally unpolarized, as are Alice's, as discussed above). The corresponding sub-ensemble is then found by projecting out all other states. The projection operator for Bob's single-photon polarization state is given by
$$\hat{P}_{\alpha}=|\hat{\alpha} \rangle \langle \hat{\alpha}|, \quad |\hat{\alpha} \rangle=\cos \alpha |\hat{x} \rangle + \sin \alpha |\hat{y} \rangle.$$
Thus the interference pattern for the sub-ensemble is given by the (unnormalized!) state ket when setting ##\alpha=\pi/4##:
$$(\hat{1} \otimes \hat{P}_{\pi/4}) |\Psi_1' \rangle=N \frac{1+\mathrm{i} \exp(\mathrm{i} \varphi(x))}{2} (|L \rangle - \mathrm{i} |R \rangle) \otimes |\hat{\pi/4} \rangle.$$
Now the interference pattern is found again at full contrast, but the overall brightness is reduced by a factor of 1/2, because we have only looked at a sub-ensemble. Again we stress that we can make this interference pattern visible only after bringing Alice's and Bob's measurements together, and we must make sure that Alice and Bob note the times when they detect their photons (Alice measures the position of her photon hitting the screen, and Bob notes whether his photon has passed his polarization filter oriented at the angle ##\pi/4## relative to the ##x## axis) to be able to identify the photons belonging to the same biphoton. Then, thanks to the 100% correlation encoded in the polarization-entangled photons, we can filter out all photons at Alice's screen which are entangled with a photon of Bob's measured to have the corresponding ##\pi/4## polarization, and this happens after all photons are long gone, using only Alice's and Bob's measurement protocols.
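The whole eraser can be condensed into a few lines of linear algebra (my sketch, same conventions as above): without conditioning on Bob, Alice's screen distribution is flat; keeping only the events where Bob's photon passed his ##\pi/4## polarizer, full-contrast fringes reappear at half the average brightness.

```python
import numpy as np

x = np.array([1.0, 0.0]); y = np.array([0.0, 1.0])
L = (x + 1j * y) / np.sqrt(2); R = (x - 1j * y) / np.sqrt(2)

psi = (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2)
Q_plus = np.outer(L, x) + 1j * np.outer(R, y)
Q_minus = np.outer(R, x) - 1j * np.outer(L, y)
psi_plus = np.kron(Q_plus, np.eye(2)) @ psi
psi_minus = np.kron(Q_minus, np.eye(2)) @ psi

alpha = (x + y) / np.sqrt(2)                            # Bob's pi/4 direction
P_bob = np.kron(np.eye(2), np.outer(alpha, alpha))      # 1 (x) |alpha><alpha|

for phase in np.linspace(0.0, 2 * np.pi, 5):            # varphi(x) at a few x
    psi_x = (psi_plus + np.exp(1j * phase) * psi_minus) / np.sqrt(2)
    p_all = np.vdot(psi_x, psi_x).real                  # all of Alice's photons
    projected = P_bob @ psi_x
    p_cond = np.vdot(projected, projected).real         # Bob passed at pi/4
    print(f"varphi = {phase:4.2f}: unconditioned {p_all:.3f}, post-selected {p_cond:.3f}")
# Unconditioned: 1.000 everywhere (flat). Post-selected: oscillates between
# 0 and 1 (full contrast) with mean 1/2 (half the brightness).
```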

The total ensemble of Alice's photons does not make up the interference pattern in any case, because it's possible in principle to gain which-way information; for that, Bob would have to make a polarization measurement with his polarizer oriented exactly in the ##x## (or equivalently the ##y##) direction. Orienting Bob's polarizer in the direction ##\alpha=\pi/4## does not enable us to gain which-way information at all. The which-way information is completely gone for the so-obtained subensemble, i.e., the appearance of an interference pattern at full contrast is only possible if Bob makes a measurement which makes it impossible for us to gain any information about the way Alice's photon has taken through the double slit. The reappearance of the interference pattern is thus due to "post-selection", i.e., it is done long after all photons are gone; it's not Bob's measurement that causes the reappearance of the pattern, but the post-selection of the appropriate subensemble, which works because of the original preparation of the photon pair, at the very beginning, in the entangled state describing the (long-range) correlations between the polarization states of the two photons in the biphoton state.

Thus, there's no collapse assumption whatsoever needed to describe the reappearance of the interference pattern through "erasing the which-way information": Bob's ##\alpha=\pi/4## polarization measurement has no effect on Alice's photon whatsoever. Such an effect would indeed violate Einstein causality: choosing the distances between the biphoton source, Alice's double-slit experiment, and Bob's polarization measurement such that Alice's photon hits the screen before Bob has measured the polarization state of his photon, the cause (collapse through Bob's measurement) would come after the effect (reappearance of Alice's interference pattern for the appropriate sub-ensemble). Thus there is not only no need for a collapse assumption, but this assumption would violate the very reason why physics is possible at all, namely the validity of causality!

That's why I stick to the minimal-ensemble interpretation, which is fully satisfactory from a physicist's point of view. Nature doesn't ask whether we like how she behaves or whether her behavior is consistent with our metaphysical or philosophical prejudices; she just behaves as she does. That's it. Case closed!
 
  • #16
vanhees71 said:
... That's it. Case closed!

That's great, Vanhees71. But I would've liked to have seen a little more detail.

:-) :-) :-)
 
  • #17
Thanks! What should I work out in more detail? Some years ago I wrote a German FAQ article about this. Perhaps I should translate it and post it here? The German version is here:

http://theory.gsi.de/~vanhees/faq/qradierer/qradierer.html
http://theory.gsi.de/~vanhees/faq-pdf/qradierer.pdf
 
  • #18
I would say that is a very concise and important analysis for understanding polarization, interference, and measurement, but it doesn't bring home the fundamentally new philosophical problems presented by the "quanta" in quantum mechanics, because there are not necessarily any "quanta" in that analysis. In other words, everything you said also applies to the classical wave description of polarized light, including the down-converters (interpreted without the quanta, as can be done and still use all your results). All we need to do is interpret your projection operators as light intensity coefficients, and compare with a classical experiment, and we get all the same stuff-- including the same "erasure" effects. Put differently, it is not surprising that "coherences are global even when passing outside of causal connections" because that is a perfectly classical statement, what is weird about quantum mechanics is that we expected the coherences between quanta to work differently-- we thought they would "store" their correlations locally, "inside the quantum." That is what turns out to be wrong, but we can't use classical analogs to show why it's wrong, because we didn't expect classical wave behavior. That's the irony-- the weirdness is not that we are not getting classical behavior, it is that we are-- but it's classical wave behavior, not classical particle behavior.

Now don't get me wrong, personally I think it is extremely important to make the point that "quantum erasure" has a fully classical analog, which in my view is the crux of your argument. I've argued the same thing on this very forum, though I did not provide the formal mathematics as you so elegantly do here. To complete that classical analog, instead of Alice and Bob cross-correlating their outcomes (subluminally), they could simply pipe their radiation fields to each other, and let them speak for themselves by watching their interference pattern. In that situation, what you need for "erasure" is none other than maintenance of the original coherences in the experiment, and can be done entirely classically.

But what makes quantum erasure fundamentally different from a classical-wave analysis is the quanta-- because we normally think quanta exhibit "local realism". In other words, we normally think that if we have a quantum, it should "carry with it" everything that it needs in order to figure out what to do. Note this is not true of waves-- waves have a fundamentally global character. Sure the wave amplitude, especially if there is a medium involved, is something that obeys local realism, but the way the wave influences its environment, and the measurements we do, is something that receives input from all over the causal basin, and the classical analog of the quantum erasure experiment includes everything in its causal basin or it won't give the "strange" results desired. So the puzzle to solve for quantum mechanics is not how the mathematics can work out to allow you to "erase" interference-breaking elements in an apparatus that receives input over the whole causal basin; it is how to do this for individual quanta that pass out of each other's causal sphere when they interact with their environment and partial knowledge is gained about them, yet when brought back into that causal sphere, they "remember" the correlations that have not yet been destroyed. In other words, the "quantum" interpretation gives us a sense that we have "something there" that we "could have done a measurement on" but chose not to, something that "would have given" a given result "had we measured it."

This, I feel, is the real conundrum of entanglement, and it only shows up when we interact with its quantum nature in such a way that we would normally like to think about "what would have happened had we done something" that we did not in fact do. The take-home lesson is like that of Yoda: there is no such thing as "what we could have done", there is only what we did do, or what we did not do, and what we did do destroyed some coherences, and what we did not do maintained some other ones that can always be re-established. Measuring where a quantum hits a wall does not destroy all its coherences with an entangled pair particle; there is still "nonlocal" or "holistic" information present even after the particle is destroyed and its location known, and that information is which sets or groups it belonged to. That information is the "quantum" analog of classical wave coherence. That's also what I would say is the deeper context of the "frequentist" interpretation of probabilities-- it's all about keeping track of the groups of belonging.
 
  • #19
vanhees71 said:
The reappearance of the interference pattern is thus due to "post-selection", i.e., it is done long after all photons are gone; it's not Bob's measurement that causes the reappearance of the pattern, but the post-selection of the appropriate subensemble, which works because of the original preparation of the photon pair, at the very beginning, in the entangled state describing the (long-range) correlations between the polarization states of the two photons in the biphoton state.

In fact this is controversial! Is the entangled state itself the "cause" of the long-range correlations? In http://arxiv.org/abs/1311.6852 Cavalcanti and Lal consider your proposal that "the quantum state of the joint system in its causal past can itself be considered as the common cause of the correlations." But then they say that this is not universally acknowledged because "An objection to this point of view, however, is that the precise correlations cannot be determined without knowledge of the measurements to be performed."

I think your "minimal interpretation" is not minimal enough. :) If it were more minimal, you would have avoided talking about the necessity of Einstein Causality, and stuck with signal locality.
 
  • #20
atyy said:
There is no reversal of a measurement in the eraser, and I think "eraser" is a choice of term that emphasizes the counterintuitive view. Another way to see it is that nothing is ever erased. Irreversible marks remain irreversible. However, depending on what question one asks, one gets interference or no interference. If one asks about an unconditional probability, there is no interference. If one asks about a conditional probability, there is interference. But there is no true erasure, just that different questions have different answers

vanhees71 said:
The which-way information is completely gone for the so-obtained subensemble, i.e., the appearance of an interference pattern at full contrast is only possible if Bob makes a measurement which makes it impossible for us to gain any information about the way Alice's photon has taken through the double slit. The reappearance of the interference pattern is thus due to "post-selection", i.e., it is done long after all photons are gone; it's not Bob's measurement that causes the reappearance of the pattern, but the post-selection of the appropriate subensemble, which works because of the original preparation of the photon pair, at the very beginning, in the entangled state describing the (long-range) correlations between the polarization states of the two photons in the biphoton state.

Just in case the OP thinks vanhees71 and I have any serious disagreement, I want to stress that the "controversy" in the above post is a minor philosophical disagreement. The major thing we agree on is that the "erasure" is mainly a technical term, and there is no erasure of any measurement outcome. What I call a conditional probability is exactly what vanhees71 calls postselection.
 
  • #21
@Ken G: The entanglement has no classical analog. You cannot do the erasure experiment with classical em waves, because you need the two-photon state with entangled polarizations of the photons. I also do not claim that photons are localized objects. To the contrary, photons cannot be localized in a classical sense. What's local are the interactions in QFT; correlations can refer to far-distant subsystems as in the case discussed here. Finally, as high-precision tests show, QT is very successful in describing reality. Ironically, the most unrealistic philosophical term in the debate about interpretational problems of QT is "realism".
 
  • #22
@atyy: I don't understand the objection concerning the causality argument. For me the reason for the correlations is the preparation of the biphoton in this entangled state. Quantum states are defined operationally as an equivalence class of preparation procedures. I also don't think that we disagree on any physics-related point here. Einstein causality, however, is violated if you interpret collapse as a physical process, because then a local interaction with a measurement apparatus would act instantaneously over long distances.
 
  • #23
vanhees71 said:
@Ken G: The entanglement has no classical analog. You cannot do the erasure experiment with classical em waves, because you need the two-photon state with entangled polarizations of the photons.
Why not-- just turn up the amplitude of the waves in your very apparatus, until you can measure the electromagnetic fields directly. Are you saying your system does not obey the correspondence principle?
vanhees71 said:
I also do not claim that photons are localized objects. To the contrary, photons cannot be localized in a classical sense.
They can be localized well enough, that's how they show up as "blips" in the patterns being measured. I'm saying that nothing in entanglement is at all surprising until you introduce a concept of a quantum-- any entanglement experiment can be done as a classical experiment, where instead of time stamping the data to make the correlation experiment, you literally bring the fields together and let them tell you their correlations via direct interference. But if you do this, no one is surprised by the result, because they never invoked the concept of local realism that the "quantum" invokes in our classical particle view.
vanhees71 said:
What's local are the interactions in QFT; correlations can refer to far-distant subsystems as in the case discussed here.
Yet the correlations are not surprising when they have a "Bertlmann's socks" flavor (which your correlations did by the way, that's a detail that needs to be fixed up but it's not essential). The surprise is never that correlations can exist outside a causal sphere, that's commonplace-- the surprise is that they can depend on choices made by the experimenter outside the causal sphere! But that surprise would never happen for a classical field, because those choices in some sense "come back together" into the causal basin when the fields are brought together to show their correlations. But in a quantum approach, we have instead the concept of a timestamp on a quantum, and that information is stored manually after the quantum is destroyed. That's where the surprise comes in, choices made after the quantum is destroyed seem to matter, but I'm saying that not everything about that quantum is destroyed-- the group it belongs to is retained, so that is the information that must be responsible for the interference pattern. Note that group belonging has a holistic character-- "I belong to that group even if they are far away, so measurements done on them over there still involve me over here if they are used to cull me out of a class of results here." It has got to do with the tags we put on things, if we choose to distinguish them in certain ways, then our choices will influence the results we get by using those tags.
vanhees71 said:
Finally, as high-precision tests show, QT is very successful in describing reality. Ironically, the most unrealistic philosophical term in the debate about interpretational problems of QT is "realism".
Yes, that irony has not escaped me either-- indeed, I hold that the standard meaning of "realism" is always quite unrealistic, it is much more "realistic" to hold that the answers we get, when we ask nature a question, depend on the questions we ask, not just in the sense of selecting from a menu of answers, but also in the sense that the question is part of its own answer.
 
  • #24
Ok, then give the description of the quantum-erasure experiment within classical electromagnetism. I don't think that this is possible.
 
  • #25
vanhees71 said:
@atyy: I don't understand the objection concerning the causality argument. For me the reason for the correlations is the preparation of the biphoton in this entangled state. Quantum states are defined operationally as an equivalence class of preparation procedures. I also don't think that we disagree on any physics-related point here. Einstein causality, however, is violated if you interpret collapse as a physical process, because then a local interaction with a measurement apparatus would act instantaneously over long distances.

If I understand Cavalcanti and Lal correctly (and they are just reporting, I think, not necessarily their own views), the argument is that the preparation is not sufficient to explain the correlation, because there would be no correlation without the choice of measurement settings also.

To put it a slightly different way, if state preparation is the cause of future outcomes, we can say that in a frame in which the measurement is simultaneous, it is the initial preparation that is responsible for the outcome of the simultaneous nonlocal measurement by A and B. But if we choose a frame in which A measures before B, the measurement by A will collapse the state, where collapsing means preparing the state conditioned on the measurement outcome. So if state preparation is the cause of future outcomes, then preparation of the state by A is the cause of B's measurement outcome.

I think it is agreed that one doesn't have to give up Einstein Causality if one also gives up the idea that the correlations have a common cause. It is more problematic to say that one can have Einstein Causality and also explain the nonlocal correlations.
 
  • #26
DrChinese said:
That's great, Vanhees71. But I would've liked to have seen a little more detail.

:) :) :)

Without a doubt one of the finest posts I have ever seen :D:D:D:D:D:D:D.

I would have given it 10 likes if I could.

Thanks
Bill
 
  • #27
atyy said:
I think it is agreed that one doesn't have to give up Einstein Causality if one also gives up the idea that the correlations have a common cause. It is more problematic to say that one can have Einstein Causality and also explain the non-local correlations.
If collapse is not physical, but only involves information, then there is no difficulty. Post-selection and separability are incompatible; it is a contradiction to assume separability while post-selecting, irrespective of locality. Therefore the difficulty comes from assuming local causality equals separability, even for experiments that demand post-selection. When you post-select, to calculate P(AB) you take Bob's result P(B) and multiply it with Alice's result, which has been post-selected using Bob's detection times, i.e., P(A|B). You end up with P(AB) = P(B)P(A|B) or P(AB) = P(A)P(B|A). Obviously P(A|B) is not equal to P(A); if it were, the experiment would produce the same result without post-selection. But it doesn't. So the "spooky" business or the information transfer takes place right there in the post-selection, well after the fact, at a speed slower than what it takes for Alice and Bob to bring their data together and compare. If an experiment purports to demonstrate non-local correlations, it would have to do so without post-selection. None have so far.

BTW, P(B|A) does not mean that A causes the B results. B may very well have been measured before A. Post-selection still uses time-stamps to determine which B-photon corresponds to which A-photon, so it does not matter which one was measured before the other.

Without taking away from Vanhees' excellent answer, I would answer the original question somewhat more simplistically:

How measurement in the double slit experiment affects result?
* By disturbing the photons. Photons are very fragile, and anything you put in their path will change their behavior so you'll get different results from what you would have obtained without a "detector" at one of the slits.
 
  • #28
billschnieder said:
If collapse is not physical, but only involves information, then there is no difficulty. Post-selection and separability are incompatible; it is a contradiction to assume separability while post-selecting, irrespective of locality. Therefore the difficulty comes from assuming local causality equals separability, even for experiments that demand post-selection. When you post-select, to calculate P(AB) you take Bob's result P(B) and multiply it with Alice's result, which has been post-selected using Bob's detection times, i.e., P(A|B). You end up with P(AB) = P(B)P(A|B) or P(AB) = P(A)P(B|A). Obviously P(A|B) is not equal to P(A); if it were, the experiment would produce the same result without post-selection. But it doesn't. So the "spooky" business or the information transfer takes place right there in the post-selection, well after the fact, at a speed slower than what it takes for Alice and Bob to bring their data together and compare. If an experiment purports to demonstrate non-local correlations, it would have to do so without post-selection. None have so far.

BTW, P(B|A) does not mean that A causes the B results. B may very well have been measured before A. Post-selection still uses time-stamps to determine which B-photon corresponds to which A-photon, so it does not matter which one was measured before the other.

Without taking away from Vanhees' excellent answer, I would answer the original question somewhat more simplistically:

How measurement in the double slit experiment affects result?
* By disturbing the photons. Photons are very fragile, and anything you put in their path will change their behavior so you'll get different results from what you would have obtained without a "detector" at one of the slits.

No you are wrong. The essence of your argument is that information does not propagate faster than light. That is correct, but Einstein Causality is a different issue. Einstein Causality is either not tenable or empty or requires a redefinition to be tenable.
 
  • #29
atyy said:
No you are wrong. The essence of your argument is that information does not propagate faster than light. That is correct, but Einstein Causality is a different issue. Einstein Causality is either not tenable or empty or requires a redefinition to be tenable.
I disagree. But since you do not substantiate, I'm unable to explain why you are wrong either.
 
  • #30
atyy said:
If I understand Cavalcanti and Lal correctly (and they are just reporting, I think, not necessarily their own views), the argument is that the preparation is not sufficient to explain the correlation, because there would be no correlation without the choice of measurement settings also.

To put it a slightly different way, if state preparation is the cause of future outcomes, we can say that in a frame in which the measurement is simultaneous, it is the initial preparation that is responsible for the outcome of the simultaneous nonlocal measurement by A and B. But if we choose a frame in which A measures before B, the measurement by A will collapse the state, where collapsing means preparing the state conditioned on the measurement outcome. So if state preparation is the cause of future outcomes, then preparation of the state by A is the cause of B's measurement outcome.

I think it is agreed that one doesn't have to give up Einstein Causality if one also gives up the idea that the correlations have a common cause. It is more problematic to say that one can have Einstein Causality and also explain the nonlocal correlations.

Again, I don't understand this argument, and I also have no clue what the authors are after in their article. It's too far from my expertise.

Now to the argument that it's not the state preparation which determines the outcome of measurements. This is pretty absurd, or do I misunderstand this statement completely? Even in classical physics, of course the preparation of an experiment determines the outcome of measurements. This is trivial, isn't it? The difference in quantum theory is that there are undetermined observables even when we know the exact (pure) state of the system, as with our entangled photon pair.

If I prepare photons in the described polarization-entangled state, quantum theory uniquely predicts the probabilistic properties of measurements of the single photons' polarizations, including the correlations between (independent and local) measurements by Alice and Bob when they compare their measurement protocols, provided the measurements have enough time resolution to always associate the photons belonging to one entangled pair. Of course, the polarization state of both photons is maximally undetermined, but for each measurement of the polarization of Alice's and Bob's photons we can predict the corresponding conditional probabilities. The same holds for the prediction of the interference pattern, which reflects the probabilities to detect photons on the screen behind the double slit, and as far as I know these probabilistic predictions of QT agree very well (i.e., with very high statistical significance) with the findings in experiments. Why, then, can't I conclude that it is simply the preparation in this entangled state by parametric down conversion that determines the (probabilistic) behavior predicted by QT and confirmed by experiment? If I can't, I don't know how to make sense of the notion of states in quantum theory at all; but that would also contradict the experience that we know very well how to use quantum theory to describe the empirical findings when performing such experiments.
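As a concrete illustration of these uniquely predicted conditional probabilities, here is a minimal sketch (my own, with example analyzer angles, not from a specific experiment). It computes P(a,b|alpha,beta) = |<a_alpha, b_beta|psi>|^2 for the Bell state (|HV> - |VH>)/sqrt(2) and shows that Alice's marginal is the maximally undetermined 1/2, while the conditional probability P(b|a) is sharply fixed by the preparation:

```python
# Sketch of the conditional probabilities QT predicts for a
# polarization-entangled pair |psi> = (|HV> - |VH>)/sqrt(2).
import numpy as np

def pol_vector(angle, outcome):
    """Polarization eigenvector at analyzer `angle`; outcome +1 = transmitted."""
    if outcome == +1:
        return np.array([np.cos(angle), np.sin(angle)])
    return np.array([-np.sin(angle), np.cos(angle)])

# Bell state in the H/V basis, components psi[i, j] = <ij|psi>
psi = np.zeros((2, 2))
psi[0, 1], psi[1, 0] = 1 / np.sqrt(2), -1 / np.sqrt(2)

alpha, beta = 0.0, np.pi / 8   # Alice's and Bob's analyzer angles (example)

def joint_prob(a, b):
    amp = pol_vector(alpha, a) @ psi @ pol_vector(beta, b)
    return amp ** 2

p_a_plus = joint_prob(+1, +1) + joint_prob(+1, -1)   # Alice's marginal
p_b_given_a = joint_prob(+1, +1) / p_a_plus          # conditional prediction
print(f"P(a=+1)      = {p_a_plus:.3f}")      # 0.500 (maximally undetermined)
print(f"P(b=+1|a=+1) = {p_b_given_a:.3f}")   # sin^2(beta - alpha) ~ 0.146 here
```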
 
  • #31
vanhees71 said:
Now to the argument that it's not the state preparation which determines the outcome of measurements. This is pretty absurd, or do I misunderstand this statement completely? Even in classical physics, of course the preparation of an experiment determines the outcome of measurements. This is trivial, isn't it? The difference in quantum theory is that there are undetermined observables even when we know the exact (pure) state of the system, as with our entangled photon pair.

In both classical and quantum physics, there is the possibility that the state preparation and measurement procedure together determine the outcome.

Another way to see the problem is that even if one says that state preparation determines measurement outcomes, the quantum formalism says that measurement is itself a form of state preparation. If there is an initial state preparation, followed by measurement A, followed by measurement B, then there is more than one state preparation procedure, so it is unclear which state preparation procedure is the "cause" of the outcome of measurement B.
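The point that measurement is itself a form of state preparation can be checked directly in the formalism. The following sketch (my own illustration, with arbitrary example angles) projects the entangled state onto Alice's outcome, renormalizes (the Lüders rule), and verifies that the resulting "prepared" state for Bob reproduces exactly the conditional probability P(b|a) obtained from the joint distribution:

```python
# Sketch of "measurement as state preparation": after Alice's measurement,
# the post-measurement (collapsed) state of Bob's photon is the Lueders
# conditional state, and it reproduces P(b|a) from the joint distribution.
import numpy as np

psi = np.zeros((2, 2))
psi[0, 1], psi[1, 0] = 1 / np.sqrt(2), -1 / np.sqrt(2)   # (|HV> - |VH>)/sqrt(2)

a_vec = np.array([np.cos(0.4), np.sin(0.4)])   # Alice's +1 direction (example)
b_vec = np.array([np.cos(1.1), np.sin(1.1)])   # Bob's +1 direction (example)

# Joint probability P(a=+1, b=+1) and Alice's marginal P(a=+1)
p_ab = (a_vec @ psi @ b_vec) ** 2
basis = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
p_a = sum((a_vec @ psi @ v) ** 2 for v in basis)

# "Collapse" = conditioning: Bob's renormalized state after Alice's +1 outcome
w = a_vec @ psi
bob_state = w / np.linalg.norm(w)
p_b_given_a = (bob_state @ b_vec) ** 2

print(np.isclose(p_ab / p_a, p_b_given_a))     # True: collapse reproduces P(b|a)
```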

vanhees71 said:
If I prepare photons in the described polarization-entangled state, quantum theory uniquely predicts the probabilistic properties of measurements of the single photons' polarizations, including the correlations between (independent and local) measurements by Alice and Bob when they compare their measurement protocols, provided the measurements have enough time resolution to always associate the photons belonging to one entangled pair. Of course, the polarization state of both photons is maximally undetermined, but for each measurement of the polarization of Alice's and Bob's photons we can predict the corresponding conditional probabilities. The same holds for the prediction of the interference pattern, which reflects the probabilities to detect photons on the screen behind the double slit, and as far as I know these probabilistic predictions of QT agree very well (i.e., with very high statistical significance) with the findings in experiments. Why, then, can't I conclude that it is simply the preparation in this entangled state by parametric down conversion that determines the (probabilistic) behavior predicted by QT and confirmed by experiment? If I can't, I don't know how to make sense of the notion of states in quantum theory at all; but that would also contradict the experience that we know very well how to use quantum theory to describe the empirical findings when performing such experiments.

Yes, quantum mechanics works. The question is whether quantum mechanics respects Einstein causality. Let me try to extract what I think is the essence of the Cavalcanti and Lal paper. The two important definitions are:

(RCC) Relativistic causality: the cause of an event is in the past light cone of the event
(FP) Common cause of a correlation: if z is the common cause of a correlation between A and B, then P(A,B|z) = P(A|z)P(B|z)

It can be shown that Bell's local causality, which is understood to be equivalent to Einstein Causality, is essentially RCC + FP. Quantum mechanics does not obey local causality, so either RCC or FP or both must be rejected. Presumably we are trying to keep RCC, since that is essential for Einstein Causality. If we reject FP, then correlations cannot have a common cause. It may be possible to redefine what it means for a correlation to have a common cause, but FP is the definition of common cause for all classical causality, including Einstein Causality, so if one redefines common cause, one is redefining Einstein Causality.

I would say that in a minimal interpretation, relativistic quantum mechanics does not need Einstein Causality for correlations. Only signal locality is needed, i.e., classical information cannot travel faster than light. The requirement that spacelike separated observables commute is closer to signal locality, because measurement of an observable is something that extracts a classical outcome from a quantum state.
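For concreteness, the failure of RCC + FP can be exhibited with the CHSH quantity: any model satisfying Bell's local causality obeys |S| <= 2, while quantum mechanics predicts E(a,b) = -cos(a - b) for the singlet and reaches Tsirelson's bound of 2*sqrt(2). A minimal sketch (my own illustration):

```python
# The CHSH quantity for the singlet state. Any model satisfying RCC + FP
# (Bell local causality) obeys |S| <= 2; quantum mechanics predicts
# E(a, b) = -cos(a - b), which gives |S| = 2*sqrt(2) at these settings.
import numpy as np

def E(a, b):
    """Quantum correlation for spin-1/2 singlet measurements at angles a, b."""
    return -np.cos(a - b)

# Standard CHSH settings
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"|S| = {abs(S):.4f}")   # 2.8284 > 2: local causality is violated
```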
 
  • #32
atyy said:
Yes, quantum mechanics works. The question is whether quantum mechanics respects Einstein causality. Let me try to extract what I think is the essence of the Cavalcanti and Lal paper. The two important definitions are:

(RCC) Relativistic causality: the cause of an event is in the past light cone of the event
(FP) Common cause of a correlation: if z is the common cause of a correlation between A and B, then P(A,B|z) = P(A|z)P(B|z)

It can be shown that Bell's local causality, which is understood to be equivalent to Einstein Causality, is essentially RCC + FP. Quantum mechanics does not obey local causality, so either RCC or FP or both must be rejected. Presumably we are trying to keep RCC, since that is essential for Einstein Causality. If we reject FP, then correlations cannot have a common cause. It may be possible to redefine what it means for a correlation to have a common cause, but FP is the definition of common cause for all classical causality, including Einstein Causality, so if one redefines common cause, one is redefining Einstein Causality.

I would say that in a minimal interpretation, relativistic quantum mechanics does not need Einstein Causality for correlations. Only signal locality is needed, i.e., classical information cannot travel faster than light. The requirement that spacelike separated observables commute is closer to signal locality, because measurement of an observable is something that extracts a classical outcome from a quantum state.

Microcausal local relativistic QFTs imply the linked-cluster theorem, according to which far-distant local experiments appear uncorrelated. Thus you indeed always need a classical information exchange after the experiments, as between Alice and Bob in our example, to "erase" the (putative) which-way information of the photons. So (RCC) is fulfilled by such theories, at least in this weak form. I don't see why (FP) is in any way necessary for the consistency of physics. So I have no problem giving it up; indeed, quantum theory and the outcomes of measurements testing the Bell or CHSH inequalities, which are based on (RCC+FP), confirm Q(F)T and show that giving up (FP) is what one has to do to be in accordance with these empirical facts. To give up RCC is no option, because then you'd give up causality, which is the very reason why physics is possible at all. Of course, one could argue that nature may be such that the natural sciences are impossible in principle, but this seems not to be the case, since the natural sciences are pretty successful in describing the world around us.
 
  • #33
atyy said:
(FP) Common cause of a correlation: if z is the common cause of a correlation between A and B, then P(A,B|z) = P(A|z)P(B|z)
With post-processing, P(A,B|z) = P(A|z)P(B|z) is wrong even in classical probability, since the only B outcomes used are the ones that have been filtered according to Alice's outcomes; it should be P(A,B|z) = P(A|z)P(B|A,z). So there should be no problem rejecting FP without fanfare.
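This is easy to check numerically. The following tiny sketch (my own illustration) builds a joint distribution that is perfectly anticorrelated given z, and verifies that the chain-rule identity P(A,B|z) = P(A|z)P(B|A,z) holds exactly while the factorization P(A,B|z) = P(A|z)P(B|z) fails:

```python
# The chain rule P(A,B|z) = P(A|z) * P(B|A,z) is an identity of classical
# probability, while the factorization P(A,B|z) = P(A|z) * P(B|z) (FP) is an
# extra assumption that fails whenever A and B stay correlated given z.
import numpy as np

# Perfectly anticorrelated binary outcomes given z: P(0,1|z) = P(1,0|z) = 1/2
P = np.array([[0.0, 0.5],
              [0.5, 0.0]])          # P[a, b] = P(A=a, B=b | z)

P_A = P.sum(axis=1)                 # P(A|z)
P_B = P.sum(axis=0)                 # P(B|z)
P_B_given_A = P / P_A[:, None]      # P(B|A,z)

print(np.allclose(P, P_A[:, None] * P_B_given_A))   # True: chain rule holds
print(np.allclose(P, np.outer(P_A, P_B)))           # False: FP fails here
```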
 
  • #34
vanhees71 said:
Microcausal local relativistic QFTs imply the linked-cluster theorem, according to which far-distant local experiments appear uncorrelated. Thus you indeed always need a classical information exchange after the experiments, as between Alice and Bob in our example, to "erase" the (putative) which-way information of the photons. So (RCC) is fulfilled by such theories, at least in this weak form. I don't see why (FP) is in any way necessary for the consistency of physics. So I have no problem giving it up; indeed, quantum theory and the outcomes of measurements testing the Bell or CHSH inequalities, which are based on (RCC+FP), confirm Q(F)T and show that giving up (FP) is what one has to do to be in accordance with these empirical facts.

Yes, it is fine to keep RCC without FP. The linked-cluster theorem and commutation of spacelike separated observables are all about RCC without FP. It means that the probabilities of outcomes of local measurements don't depend on distant measurement choices and outcomes, and that classical information cannot be communicated faster than light. However, giving up FP also means that the nonlocal correlations do not have any local common cause, because FP is the usual definition of what it means for correlations to have a common cause. Because Einstein Causality is usually associated with EPR, Einstein Causality usually means RCC+FP, so if one means RCC without FP, people usually say signal locality or commutation of spacelike separated observables. The term "RCC" itself may be a bit confusing, because it sometimes means Einstein Causality, but it is also commonly used to mean signal locality. For example, Popescu and Rohrlich http://arxiv.org/abs/quant-ph/9508009 use the term "relativistic causality" to mean signal locality.

(Actually, there is an even finer distinction going at least back to Bell that I've ignored, but that Cavalcanti and Wiseman mention in http://arxiv.org/abs/0911.2504.)
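Signal locality itself is easy to exhibit from the same joint distribution: Alice's marginal probability is independent of Bob's setting, even though the joint probabilities are not. A minimal sketch (my own illustration, using the singlet form P(a,b|alpha,beta) = (1 - ab*cos(alpha - beta))/4):

```python
# Signal locality / no-signalling for the entangled pair: Alice's marginal
# P(a | alpha) does not depend on Bob's setting beta, even though the joint
# P(a, b | alpha, beta) does.
import numpy as np

def joint(a, b, alpha, beta):
    """Singlet joint probability P(a,b|alpha,beta) = (1 - a*b*cos(alpha-beta))/4."""
    return (1 - a * b * np.cos(alpha - beta)) / 4

alpha = 0.3                                   # Alice's (fixed) setting
for beta in (0.0, np.pi / 5, 1.2):            # Bob tries different settings
    marginal = sum(joint(+1, b, alpha, beta) for b in (+1, -1))
    print(f"beta = {beta:.2f}:  P(a=+1|alpha) = {marginal:.3f}")  # always 0.500
```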

vanhees71 said:
To give up RCC is no option, because then you'd give up causality, which is the very reason why physics is possible at all. Of course, one could argue that nature may be such that the natural sciences are impossible in principle, but this seems not to be the case, since the natural sciences are pretty successful in describing the world around us.

I think this is beyond a minimal interpretation. If there is a non-perturbative theory of quantum gravity, then spacetime itself may be emergent and RCC along with it, so that RCC is not fundamental. Certainly it seems very hard to do physics without some notion of locality, so that the big picture can be built up from small pictures and we can make predictions for the small picture without knowing the big picture; Weinberg says as much in his books. However, while agreeing with him that it is difficult, I wouldn't go all the way to impossible, at least not yet.

Also, one of the beautiful things about the quantum formalism is that, for any choice of inertial frame, once we place a classical/quantum cut so that the wave function is not real, we can take the wave function as FAPP real: we give up RCC and keep FP, so that the correlations are FAPP explained by the quantum state and the measurement choice. So quantum theory at the top level has RCC without FP, but FAPP it has FP without RCC, and we can use both pictures to help our intuition and get correct predictions.
 
  • #35
billschnieder said:
With post-processing, P(A,B|z) = P(A|z)P(B|z) is wrong even in classical probability, since the only B outcomes used are the ones that have been filtered according to Alice's outcomes; it should be P(A,B|z) = P(A|z)P(B|A,z). So there should be no problem rejecting FP without fanfare.

One can reject FP. Indeed, in general P(A,B|z) = P(A|z)P(B|A,z) in classical probability. However, in that case one cannot call z the sole cause of B, because A is also potentially a cause, or correlated with a cause that is independent of z. Also, Einstein Causality usually means classical relativistic causality, so it includes FP. Of course, everything is fine if another, less common definition of Einstein Causality is used.
 
