Quantum Eraser and Its Implications

  • #1
Hi all. I am new here, and am very interested in developments in theoretical physics, though I am not trained as a scientist. I am hoping some of you can help answer a question.

The quantum eraser experiment is said to prove that when which-path information is "erased" we get an interference pattern; the wave function has not collapsed, even though the "erasure" took place after the which-path information was initially obtained.

At the same time, we know that this description does not apply when which-path information is destroyed by at least some means other than those employed in the Kim et al. experiment. For example, if in the basic double-slit experiment we irretrievably obliterate the measuring device and the which-path data it contains before looking at the screen, we will not see an interference pattern. "Erasure" of the which-path information does not in that case prevent wave-function collapse.

Does this not suggest, then, that it is not the information about which path was followed that collapses the wave function (or, as a corollary, that it is not the erasure of that which-path information that produces the fringe pattern in the quantum eraser experiment)? Mightn't it suggest, for example, that the quantum eraser experiment works instead because the entangled photons are recombined, and that some as yet undiscovered characteristic of that recombined photon produces what looks to us like a pattern caused by wave interference (something that would have broad implications)?

Thanks so much,
Peter
 
  • #2
For example, if in the basic double slit experiment we obliterate irretrievably the measuring device and the which-path data it contains before looking at the screen, we will not see an interference pattern. "Erasure" of the which-path information does not in that case prevent wave-function collapse.

Welcome to PhysicsForums, NewPeter!

I am not aware of any experiments in which the measuring devices are destroyed. :smile: It would be a bit expensive.

The issue with any eraser is that IF you can restore indistinguishability (is that a word?) of the results of an intermediate measurement, then the effect is erasure. So erasure restores a superposition.

As best as the evidence indicates: erasure can occur before or after (yes!) the final outcome is obtained (is there interference or not). This is assuming you use entangled particles of course. I will tell you that following the logic of the erasure experiments is quite complicated, and not easy to discuss. But if you have a specific question, I will do my best to answer (as will many others here).
 
  • #3
Thanks for your reply. I'll try to restate. My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter
 
  • #4
... My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter

Well, it could. The easiest explanation is to consider the probability wave as "real" until finally actualized. What does that mean? No one really knows. All we can say is that we can make statistical predictions without understanding the underlying mechanism. So is there something we don't understand? I would say so. Choosing between the available interpretations is simply a guess; so far no one has found a way to discern one from the other experimentally, though I think we are getting closer every day. The telling points are these:

a) There appears to be a non-local component
b) There appears to be a non-causal component
c) There appears to be a non-realistic component

But none of these are completely proven individually. We know from Bell that one or more is correct.
 
  • #5
Thanks for your reply. I'll try to restate. My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter

hi peter, welcome to the forum.

the delayed choice quantum eraser (dcqe) would seem to suggest that the past can be changed/affected by the future.

however, i think, the dcqe is a "filtering trick", i.e. only those photons get filtered through that would support the pattern (interference or non-interference), creating the illusion that the past can be affected by the future.

- in the dcqe, sub-samples of the full sample are filtered through.
- the interference pattern is hidden/embedded in the non-interference pattern, and it is gleaned/filtered out in some of the dcqe scenarios.

thus the dcqe does not prove anything beyond the now well known/expected properties/phenomena of
1. quantum entanglement and
2. single particle interference

dcqe is simply a mixture of the above two phenomena in a single experiment
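The "filtering" idea can be illustrated with a toy simulation (a sketch with invented fringe weights, not a model of any specific apparatus; the detector names D1/D2 are made up): the aggregate signal-photon pattern is featureless, but the coincidence-selected subsets carry fringes and anti-fringes that sum back to the featureless total.

```python
import math
import random

random.seed(0)
N = 200_000
BINS = 8

all_hits, d1_hits, d2_hits = [], [], []
for _ in range(N):
    # Signal-photon detector coordinate, expressed as a phase in [0, 2*pi).
    x = random.uniform(0.0, 2.0 * math.pi)
    all_hits.append(x)
    # Toy rule: the idler fires at detector D1 with probability (1 + cos x)/2,
    # otherwise at D2.  (D1/D2 and the cosine weights are illustrative only.)
    if random.random() < 0.5 * (1.0 + math.cos(x)):
        d1_hits.append(x)
    else:
        d2_hits.append(x)

def histogram(hits):
    # Bin the phases into BINS equal-width bins over [0, 2*pi).
    counts = [0] * BINS
    for x in hits:
        counts[min(int(x / (2.0 * math.pi) * BINS), BINS - 1)] += 1
    return counts

total = histogram(all_hits)
print("all signal hits:", total)               # roughly flat: no fringes
print("D1 coincidences:", histogram(d1_hits))  # fringes,      ~ 1 + cos x
print("D2 coincidences:", histogram(d2_hits))  # anti-fringes, ~ 1 - cos x
```

Nothing about the full set of signal hits changes when the subsets are selected; the "interference pattern" appears only in the coincidence-filtered sub-samples, which add back up to the flat total.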
 
  • #6
...however, i think, the dcqe is a "filtering trick", i.e. only those photons get filtered through that would support the pattern (interference or non-interference), creating the illusion that the past can be affected by the future.
...

That is certainly one "interpretation". But it must be some via kind of non-local effect. After all, there can't be an attribute called "interfering" that is realistic (well defined) because the partner's measurement basis can be changed (erased or not) after the interference is detected.

Remember, the DCQE is set up to provide a context which spans both time and space (spacetime separation). The results agree with the context alone.

http://grad.physics.sunysb.edu/~amarch/Walborn.pdf [Broken]

"We report a quantum eraser experiment which actually uses a Young double slit to create interference. The experiment can be considered an optical analogy of an experiment proposed by Scully, Englert, and Walther. One photon of an entangled pair is incident on a Young double slit of appropriate dimensions to create an interference pattern in a distant detection region. Quarter-wave plates, oriented so that their fast axes are orthogonal, are placed in front of each slit to serve as which-path markers. The quarter-wave plates mark the polarization of the interfering photon and thus destroy the interference pattern. To recover interference, we measure the polarization of the other entangled photon. In addition, we perform the experiment under ‘‘delayed erasure’’ circumstances."
 
  • #7
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?
 
  • #8
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?

I don't know how to agree or disagree with your question. Erasure can lead to interference in these setups - that would be where which-path information is not available.
 
  • #9
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?
I'd say your question points to the reasons why "erasure" is such a misnomer here. First of all, there is no such thing as "erasure" of "previously observed information"-- if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term, a better word would be "retroactive non-actualization". But that's a lot longer, so you see why "erasure" is used instead! The point is, it is quite important that we avoid the conceptually fatal tendency to imagine that some reality has been actualized before it has actually been put to a test that comes out A if the reality is that way and B if it is some other way.

For example, when we see a pattern that is the sum of two one-slit patterns, rather than a two-slit interference pattern, we are tempted to conclude "no two-slit interference occurred there." Is this a valid conclusion? No it isn't, because we have no idea what kinds of interference occurred there, our experiment has not put that question to the test-- it has only put to the test what is the net outcome of all the interferences that occurred there, and the net outcome is not a two-slit interference pattern. But living within that net outcome could be all kinds of contributory outcomes, and above all, the quantum erasure experiment tells us that two-slit interference is indeed a contributory outcome to the net non-two-slit pattern. Erasure is simply the means for disentangling those contributory two-slit outcomes, whereas failure to erase simply means failure to have access to that mode of disentanglement. "Erasure" doesn't erase anything at all-- indeed, what it actually does is much closer to not erasing some information that we might otherwise fail to access.
 
  • #10
I'd say your question points to the reasons why "erasure" is such a misnomer here. First of all, there is no such thing as "erasure" of "previously observed information"-- if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term, a better word would be "retroactive non-actualization". But that's a lot longer, so you see why "erasure" is used instead!

agreed

The point is, it is quite important that we avoid the conceptually fatal tendency to imagine that some reality has been actualized before it has actually been put to a test that comes out A if the reality is that way and B if it is some other way.

well put ken
 
  • #11
That is certainly one "interpretation". But it must be some via kind of non-local effect. After all, there can't be an attribute called "interfering" that is realistic (well defined) because the partner's measurement basis can be changed (erased or not) after the interference is detected.


yes drchinese...non-local: quantum entanglement (qe) is a non-local phenomenon. we are on the same page on that. how qe works is anyone's guess; no one knows yet.

when the partner's measurement basis is changed, then only those "delayed" photons pass through the filter(s) that support the story/pattern matching the new measurement basis...and later, when the "delayed" photons are compared (via the coincidence counter) with their "initial twin photons", the pattern maps to the new/changed measurement basis...

however i don't think the future is changing the past, or that any point in the future/present is changing any point in the past, in the dcqe experiment.
 
  • #12
Very interesting response, Ken. Am I correct that what you're saying is that observation "extracts information," and that that is what causes waveform collapse? Is that a widely shared understanding, or is the reason observation causes waveform collapse debatable? Are you saying that in the DCQE the information is never extracted? I thought the point was that at a point after which-path information is available, entangled particles (which, frankly, I don't understand) are recombined (something I also don't understand) in a way that makes it (again) impossible to determine the which-path information, and the result is an interference pattern. (And what I was struggling with in my question was what the experiment demonstrates in light of the fact that other methods of putting which-path information out of reach, e.g., having a mechanical observer that never records the information for human observation, or obliterating the information absolutely, do not result in an interference pattern.)

Thanks,
Peter
 
  • #13
As described in Wikipedia:
“The Quantum eraser experiment is a double-slit experiment in which particle entanglement and selective polarization is used to determine which slit a particle goes through by measuring the particle's entangled partner. This entangled partner never enters the double slit experiment. Earlier experiments with the basic Young double-slit experiment had already determined that anything done to discover by which path (through slit A or through slit B) a photon traveled to the detection screen would destroy the interference effect that is responsible for the production of interference fringes on the detection screen. …
“The advantage of manipulating the entangled partners of the photons in the double-slit part of the experimental apparatus is that experimenters can destroy or restore the interference pattern in the latter without changing anything in that part of the apparatus. Experimenters do so by manipulating the entangled photon; and they can do so before or after its partner has entered or after it has exited the double-slits and other elements of experimental apparatus between the photon emitter and the detection screen. So, under conditions where the double-slit part of the experiment has been set up to prevent the appearance of interference phenomena (because there is definitive "which path" information present), the quantum eraser can be used to effectively erase that information. In doing so, the experimenter restores interference without altering the double-slit part of the experimental apparatus. An event that is remote in space and in time can restore the readily visible interference pattern that manifests itself through the constructive and destructive wave interference. …
A variation of this experiment, the delayed choice quantum eraser experiment, “allows the decision whether to measure or destroy the ‘which path’ information to be delayed until after the entangled particle partner (the one going through the slits) has either interfered with itself or not. Doing so appears to have the bizarre effect of determining the outcome of an event after it has already occurred.” See Wikipedia’s articles on quantum eraser experiments, delayed choice quantum eraser experiments, Wheeler's delayed choice experiment, the Afshar experiment, and retrocausality. See also “Random Delayed-Choice Quantum Eraser via Two-Photon Imaging” by Giuliano Scarcelli, Yu Zhou, and Yanhua Shih (http://arxiv.org/abs/quant-ph/0512207v1)
 
  • #14
My purpose in providing the above description of the delayed choice quantum erasure experiments was to provide a foundation on which to argue that time reversibility is a necessary element for any explanation of these experiments. I wish to first examine whether the common quantum explanations for these experimental results (e.g. collapse of the wave function and decoherence) are viable. It is my "belief" that they are not. First, if we assume that wave functions actually collapse, it is my understanding this event is not time reversible such that no interference pattern could be recovered for the signal photons once the collapse occurred.
Please reflect on what is happening to the information in the time reversible path (i) when the photon(s) pass through the double slit; (ii) when the down converter creates an entangled “signal” and “idler” photon (iii) when the idler photon passively retains or is actively imparted "which path" information; (iv) when the signal photon reaches the detector; (v) when the active or passive erasure of the idler photon's "which path" information occurs (which theoretically could occur years after the signal photon reached the detector); (vi) when the measurement of the idler photon occurs; again theoretically years after the interference pattern theoretically was or was not recorded for the signal photon, and (vii) when the existence or non-existence of the interference pattern becomes known to an observer.
To the extent any of these events results in interactions between the quantum system with its environments, most physicists would currently interpret these interactions in the context of decoherence.

According to Wikipedia: “quantum decoherence is the mechanism by which quantum systems interact with their environments to exhibit probabilistically additive behavior - a feature of classical physics - and give the appearance of wavefunction collapse. Decoherence occurs when a system interacts with its environment, or any complex external system, in such a thermodynamically irreversible way that ensures different elements in the quantum superposition of the system+environment's wavefunction can no longer interfere with each other. … Decoherence does not provide a mechanism for the actual wave function collapse; rather it provides a mechanism for the appearance of wavefunction collapse. The quantum nature of the system is simply "leaked" into the environment so that a total superposition of the wavefunction still exists, but exists beyond the realm of measurement.”

If we assume, for the purposes of this discussion, that decoherence has occurred, I recognize that all components of the wave function are presumed to still exist in a global superposition even after a measurement or environmental interaction has rendered the prior coherences no longer "accessible" by local observers. I further understand that all lesser interactions are believed to be time reversible. However, this analysis requires that I ask a question that I am not aware others are asking: did any of the interactions in these experiments increase the entropy of the system? Of course, entropy, like QM, is also time symmetric. However, it is well established that, in a time symmetric system, entropy should increase both backward and forward in time. I also recognize that entropy is not deterministic, but only probabilistic. However, if the time reversal path includes an event where entropy increased, should we not then ask: how is the entropy that was introduced by this interaction undone?

Please note that I have tentatively excluded the active erasure experiments from this conjecture in recognition of Huw Price's paper "Boltzmann's Time Bomb", because active erasure (such as causing all the idler photons to have the same spin) might be seen to have created a localized low entropy state.

Nonetheless, even if we only consider the passive erasure experiments, the advocates of wavefunction collapse and quantum decoherence have a problem: How does a photon that starts in a state of lower coherence (i.e. higher entropy) go backwards in time and, in doing so, regain the greater coherence (e.g. lower entropy) that it previously had? See “The Thermodynamical Arrow Of Time: Reinterpreting The Boltzmann-Schuetz Argument” by Milan M. Ćirković
(http://philsci-archive.pitt.edu/archive/00000941/03/Boltzmann_final5.doc [Broken])
and “Probability, Arrow of Time and Decoherence ”by Guido Bacciagaluppi (http://arxiv.org/abs/quant-ph/0701225v1)

The foregoing has hopefully "fleshed out" the dilemma for the conventional interpretations of QM. Yakir Aharonov's time symmetric interpretation of quantum mechanics, by retaining the information from both the initial and final boundary conditions (the point of origination and the final actualization event), provides a first-order resolution to the problem.

I just posted a discussion of this in the context of the EPR paradox. See: http://www.physicsforums.com/showthread.php?t=546740
 
  • #15
Very interesting response, Ken. Am I correct that what you're saying is that observation "extracts information," and that that is what causes waveform collapse?
Yes, I'm saying that any act of observation is an act of conversion-- we convert potential truths about a system that have not been actualized into actualized truths. In classical physics, we imagined that this conversion was passive-- the truths were "already there" before we actualized them. But in quantum mechanics, we find that the conversion is quite an active participant in the reality of the situation, and the things we "discovered" about the system were simply not true about it before the measurement led to their discovery. And along with this comes the key point that we must notice what information we have actually extracted (what truths have been actualized), and not assume things that are not in evidence.

In particular, we know that if we do a two-slit experiment and actualize the truth of which slit the particles went through (by correlating detection hits with which-way information), then the particles that went through each slit will make a one-slit pattern in front of that slit. But if we see a detection pattern that looks like two superimposed one-slit patterns, can we invert that information to conclude that which slit the particles went through must have been actualized even if we have no evidence that it has? No, we cannot say that, because we cannot demonstrate actualization of that truth just from looking at overlapping one-slit patterns in the aggregate detection pattern.

Instead, it seems to me that the quantum erasure experiment demonstrates above all that overlapping one-slit patterns can be produced as a result of two-slit interference patterns, offset from each other to produce something that looks just like two one-slit patterns, because of some truth that was actualized that mimics the action of which-way information without really actualizing which-way information for those particles. Just having the detection pattern doesn't tell us that. We need some additional prescription for sorting the hits in the pattern with the slit they went through (as in the non-erased case) to actualize which-way information; but if we don't have that (as in the erased case), we can use a different prescription to sort the hits (well after the fact of their being detected) that demonstrates two spatially offset two-slit patterns that go into the detection pattern, without any which-way information involved in the sorting. It is not the pattern itself that is determined by actualizing that information, because we can actualize that information long after the pattern is done; it is just how we explain the pattern (by sorting its contributors a certain way) that involves actualizing (or not actualizing) which-way information. If you don't actualize the which-way information, that information is never extracted and the necessary coherences are never destroyed, so the information is still "in there", i.e. the coherences are still in there, and can then be used to explain the detection pattern (long after the fact of its creation) without appealing to any which-way information.

I guess the bottom line to what I'm saying is that extracted information does not determine what happens, it only determines how we attribute causes to what happened. That cause-attribution can occur long after the fact, just as I can find some novel explanation for why World War I happened long after that war is over, if I extracted some new information that was previously encoded in the history in a way that had gone unnoticed. We should not imagine that World War I required my attribution in order to happen-- cause and effect is a mental process, not a physical one. Above all, I'm saying we would not say that my new attribution to the cause of World War I propagated backward in time and became the reason that World War I happened, any more than decisions I make later can go back in time and change a detection pattern on a screen.
Is that a widely shared understanding, or is the reason observation causes waveform collapse debatable?
The whole issue of collapse is highly debatable. Many interpretations of quantum mechanics don't recognize any kind of collapse at all, either because the state continues uncollapsed (MWI), or because a wavefunction is not a state of a system at all (ensemble interpretation).
Are you saying that in the DCQE the information is never extracted?
Yes, which-way information is never extracted when the choice to "erase" is made. Alternatively, the choice can be made to extract that information, after the fact. Either way, the detection pattern under study is not any different-- it is already done after all. What is different is the way it is explained in terms of contributing parts-- it is separated into parts by correlating the original detector hits with the entangled results. But the whole can be a sum of parts in different ways and still be the same whole-- because here the "whole" is a detection pattern that has lost a huge amount of the information/coherences that went into making it. That information is still encoded in there somewhere, and can be extracted by entangled experiments, but you can't say what went into that pattern just by looking at it, and indeed nature doesn't say what went into it either-- until you actualize that information, after the fact.

(And what I was struggling with in my question was what the experiment demonstrates in light of the fact that other methods of putting which-path information out of reach, e.g., having a mechanical observer that never records the information for human observation or obliterating the information absolutely, do not result in an interference pattern.)
Whether or not the information that has been actualized/extracted is recorded or noticed in any way is largely irrelevant, and is not a fruitful path to follow in your analysis. Instead, consider what information is available, whether or not it is used or noticed. For this, it suffices to imagine a hypothetical observer, but not a hypothetical apparatus-- that's the key point, the apparatus is classical and macroscopic, so we can get away with the hypothetical observer concept on our "end" of the apparatus. But we can't get away with imagining "super-observers" who know things about a system without any apparatus capable of establishing/extracting/actualizing that truth.
 
  • #16
My purpose in providing the above description of the delayed choice quantum erasure experiments was to provide a foundation on which to argue that time reversibility is a necessary element for any explanation of these experiments. I wish to first examine whether the common quantum explanations for these experimental results (e.g. collapse of the wave function and decoherence) are viable. It is my "belief" that they are not. First, if we assume that wave functions actually collapse, it is my understanding this event is not time reversible such that no interference pattern could be recovered for the signal photons once the collapse occurred.

You are already wrong at this point. The most important point of the wiki article is this sentence: "Doing so appears to have the bizarre effect of determining the outcome of an event after it has already occurred."

And it should be taken seriously. It appears to have this effect - but it does not have this effect. There is no determining the outcome of an event after it has occurred. You just throw away part of the data by doing coincidence counting. The whole dataset never changes and does not care about whether you erase which-way info or not.
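The point that the whole dataset never changes can be made concrete with a toy simulation (a sketch with invented weights, not a model of a particular experiment): the signal hits are recorded first, the idler basis is chosen afterwards, and the full histogram of signal hits is the same either way; only the coincidence-selected decompositions differ.

```python
import math
import random

random.seed(1)
N = 200_000
BINS = 8

def bucket(x):
    # Map a phase in [0, 2*pi) to one of BINS histogram bins.
    return min(int(x / (2.0 * math.pi) * BINS), BINS - 1)

total = [0] * BINS
subsets = {("erase", "+"): [0] * BINS, ("erase", "-"): [0] * BINS,
           ("which-way", "1"): [0] * BINS, ("which-way", "2"): [0] * BINS}

for _ in range(N):
    x = random.uniform(0.0, 2.0 * math.pi)  # signal detector coordinate (phase)
    total[bucket(x)] += 1                   # recorded before any idler choice
    # The idler measurement basis is chosen only now ("delayed choice").
    if random.random() < 0.5:
        basis = "erase"       # erasure basis: outcome probabilities depend on x
        outcome = "+" if random.random() < 0.5 * (1.0 + math.cos(x)) else "-"
    else:
        basis = "which-way"   # which-way basis: outcomes independent of x
        outcome = "1" if random.random() < 0.5 else "2"
    subsets[(basis, outcome)][bucket(x)] += 1

print("full dataset (unchanged by any later choice):", total)
for key, counts in subsets.items():
    print(key, counts)  # "erase" subsets show fringes; "which-way" ones do not
```

The "erase" subsets carry fringes and anti-fringes, while the "which-way" subsets are as featureless as the total; no later basis choice ever alters `total`.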
 
  • #17
... And it should be taken seriously. It appears to have this effect - but it does not have this effect. There is no determining the outcome of an event after it has occurred. You just throw away part of the data by doing coincidence counting. The whole dataset never changes and does not care about whether you erase which-way info or not.

I think we would need to know more about the underlying mechanism to make this statement. There are other experiments, such as this one, that "tend" to point the other way.

http://arxiv.org/abs/quant-ph/0201134

See 2nd paragraph, page 5. The decision to entangle 2 photons can be made AFTER their polarization was observed. Jon_Trevathan's "V" explanation (see other thread) becomes a "W" in this case.

I'm not saying your explanation isn't correct, just that we appear to have no way to see deep enough to discern one from the other.
 
  • #18
There are other experiments, such as this one, that "tend" to point the other way.

http://arxiv.org/abs/quant-ph/0201134

See 2nd paragraph, page 5. The decision to entangle 2 photons can be made AFTER their polarization was observed. Jon_Trevathan's "V" explanation (see other thread) becomes a "W" in this case.

I'm not saying your explanation isn't correct, just that we appear to have no way to see deep enough to discern one from the other.

Do they really point the other way? The paper says "Therefore, this result indicate that the time ordering of the detection events has no influence on the results and strengthens the argument of A. Peres: this paradox does not arise if the correctness of quantum mechanics is firmly believed."
I fully agree with that.

Or to rephrase, when the paper says "Thus depending on Alice’s later measurement, Bob’s earlier results either indicate that photons 0 and 3 were entangled or photons 0 and 1 and photons 2 and 3. This means that the physical interpretation of his results depends on Alice’s later decision.", I fully agree. The results on Bob's side do not change or depend on Alice's choice. The physical interpretation clearly does. For me that does not point to any "the present changes the past"-scenario. It clearly does point towards a non-local scenario, though.
 
  • #19
Do they really point the other way? ...

(Not really disagreeing, by the way...)

I think so. The problem with the non-local effect idea is that the effects span time as well as space (whereas non-local implies spanning of space alone). The advantage (if you want to call it that) to the time-symmetric interpretation that Jon was mentioning is that this time spanning falls out naturally and there is no underlying non-local effect to explain. Just the setup for the experiment ("W") suggests a time symmetric vision by the authors, but that is strictly a guess. Of course there are some disadvantages too.

We can probably agree that different interpretations give different explanations, and that in many essential ways they are equivalent.
 
  • #20
I think we need to ask, "just what is time symmetric, the reality, or our way of interpreting the reality?" I would say it is the latter, which would seem to be in agreement with Cthugha. (Not that it disagrees with anyone else.) To me, a key lesson of quantum erasure, and quantum mechanics as a whole, is that our tendency to uniquely associate cause and effect (which are attributes of an interpretation) with actual events is illusory. When what actually happened can be interpreted in multiple ways, perhaps because some entanglements have not been put to some test yet, then so can cause and effect, and the time symmetry that is being referred to is a freedom in our ability to attribute reasons for what happened-- not a freedom exhibited by what actually happened.

That was the point of my "World War I" analogy (albeit classical)-- in history, the tendency is to imagine there "really were some reasons" that World War I happened, and it is the historians' job to figure out the appropriate weight of those reasons. However, both the "reasons" and the "weight" behind them are purely constructs of our intelligence-- they have more to do with how we think about what happened than with what actually happened. It's all the history we ever get-- the stories we tell. So if someone makes some totally new discovery about World War I, it can completely change, after the fact, our entire notion of what World War I was. The events didn't change-- there is no time symmetry in the actual chain of events. But our interpretation of what happened, which is pretty close to what we mean by "what happened", can certainly change after the fact, and does exhibit a kind of time symmetry for that reason.
 
  • #21
While this subject is up again can somebody answer a question I've asked previously? (I never really got a satisfactory answer)

The question is specifically about this experiment - http://arxiv.org/abs/quant-ph/9903047

The thing I don't understand is that the D1 and D2 detectors both show interference fringes and anti-fringes when the sub-samples are examined. What I don't get is that the idler photons encounter a beam splitter before going to either of the detectors. As I understand it, the chance of passing through this BS or reflecting off it is 50/50. So I would expect no interference patterns in these sub-samples.

To put it another way - how do idler photons, of signal photons which contribute to an interference pattern, always end up at the same detector?
 
Last edited by a moderator:
  • #22
We can probably agree that different interpretations give different explanations, and that in many essential ways they are equivalent.

Well, sure. I fully agree that you can create an interpretation that involves backward causation or something similar and get it to match reality. I just do not think there has been any experiment out there that really needs such an explanation. The bottomline of the Scarcelli and Shih paper Jon cited

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."

is fully sufficient and very important as this aspect is often ignored.

Joncon said:
To put it another way - how do idler photons, of signal photons which contribute to an interference pattern, always end up at the same detector?

See above. You cannot say that there are signal photons that contribute to an interference pattern. The interference patterns are always two-photon interference patterns that require to detect both photons. In simplifying terms the beam splitter and detectors D1 and D2 form a kind of Mach-Zehnder-interferometer and it is well known that the relative intensities at the exit ports of the beam splitter depend on the phase of the light field (or to be more precise on the phase differences between the indistinguishable probability amplitudes). On the other side the signal photons basically pass a double slit. The resulting interference pattern of course also depends on the phase of the light field/the probability amplitudes.

Now, as you have entangled photons, the light itself is not phase stable and therefore incoherent, so you do not get a double slit or Mach-Zehnder-interference pattern by looking at any of the two sides alone (the phase differences are different for each repeated emission of a photon pair). What is however well defined is loosely speaking the relative phase of the entangled biphoton state. The phase difference differs from photon pair to photon pair, but it is of course the same for both photons forming the pair. Now if one photon is detected at D2, that gives you some information about the phase difference for the photon pair examined right now. The phase difference will surely not have a value that will cause detections at D1 and it is more likely that it is a phase difference which is connected with a high detection probability at D2. Now that you have some information about the phase difference, you also get some information about the most probable detection positions of the entangled partner on the double slit side of the experiment.

In other words, the detection events on both sides are not statistically independent, but a detection at some position at scanner D0 is directly linked to a higher or lower probability to detect a photon at D1/D2. For every position of D0, there is some preference whether it is more likely that photons will be detected at D1 or D2.
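The statistics described above can be sketched numerically. The following toy Monte Carlo is my own illustration, not from any of the cited papers: the joint density 1 + cos(x + φ) for the D0 position and the splitting probabilities (1 ± cos φ)/2 for the idler's exit port are assumed forms chosen purely for simplicity. It shows the key point: the raw D0 counts are flat, while coincidences with D1 show fringes and coincidences with D2 show the complementary anti-fringes.

```python
import math
import random

random.seed(0)

def sample_position(phi, k=1.0, xmax=math.pi):
    """Rejection-sample the signal photon's position on the D0 screen
    from a density proportional to 1 + cos(k*x + phi)."""
    while True:
        x = random.uniform(-xmax, xmax)
        if random.uniform(0.0, 2.0) < 1.0 + math.cos(k * x + phi):
            return x

n_pairs = 20000
d0_all, d0_with_d1, d0_with_d2 = [], [], []
for _ in range(n_pairs):
    phi = random.uniform(0.0, 2.0 * math.pi)  # phase differs pair to pair
    x = sample_position(phi)                  # signal photon hits D0
    d0_all.append(x)
    # The idler's exit port depends on phi (assumed (1 + cos phi)/2 for D1),
    # so sorting on D1 vs D2 partially pins down phi for that pair.
    if random.random() < (1.0 + math.cos(phi)) / 2.0:
        d0_with_d1.append(x)
    else:
        d0_with_d2.append(x)

def fringe_contrast(xs):
    """Amplitude of the cos(x) component of the position distribution:
    ~0 for a flat pattern, positive for fringes, negative for anti-fringes."""
    return 2.0 * sum(math.cos(x) for x in xs) / len(xs)

print("all D0 detections:  ", round(fringe_contrast(d0_all), 2))      # flat, ~0
print("D0/D1 coincidences: ", round(fringe_contrast(d0_with_d1), 2))  # fringes, ~+0.5
print("D0/D2 coincidences: ", round(fringe_contrast(d0_with_d2), 2))  # anti-fringes, ~-0.5
```

The pair-to-pair random phase φ is what washes out the singles pattern at D0; conditioning on a D1 or D2 click partially fixes φ, which is exactly the "information about the phase difference" described above, and the two sorted subsets sum back to the flat total.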
 
  • #23
In other words, the detection events on both sides are not statistically independent, but a detection at some position at scanner D0 is directly linked to a higher or lower probability to detect a photon at D1/D2. For every position of D0, there is some preference whether it is more likely that photons will be detected at D1 or D2.

So does that mean the chance of the photon passing through the BS or reflecting off it is NOT 50/50, but is influenced by phase?
 
  • #24
...

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."

is fully sufficient and very important as this aspect is often ignored.

...

This is a great point. The system is not separable into components while there is entanglement. Calling them 2 photons is a convenience which is not always justified.
 
  • #25
A 50/50 beam splitter has two input ports and two output ports. If you have a field entering at a single input port, the probability of the corresponding photons to exit via either output port is indeed 50/50. If you have mutually coherent fields present at both input ports, the relative phase of the two fields entering creates an interference effect and the probability can differ significantly from 50/50. Read up on Mach-Zehnder interferometers, if you are interested in the details.
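For concreteness, here is the ideal lossless case in a few lines of Python (the standard textbook Mach-Zehnder result, not anything specific to this experiment): the two exit probabilities are cos²(φ/2) and sin²(φ/2) in the relative phase φ, so 50/50 is only one special point.

```python
import math

def mz_output_probs(phase):
    """Ideal lossless Mach-Zehnder interferometer: probability that a
    photon exits each output port, given the relative phase between
    the two arms meeting at the final beam splitter."""
    return math.cos(phase / 2.0) ** 2, math.sin(phase / 2.0) ** 2

# 50/50 only occurs at a pi/2 phase difference; at phase 0 or pi the
# splitting is completely one-sided.
for phase in (0.0, math.pi / 2.0, math.pi):
    p1, p2 = mz_output_probs(phase)
    print(f"phase = {phase:.2f} rad -> P(port 1) = {p1:.2f}, P(port 2) = {p2:.2f}")
```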
 
  • #26
A 50/50 beam splitter has two input ports and two output ports. If you have a field entering at a single input port, the probability of the corresponding photons to exit via either output port is indeed 50/50. If you have mutually coherent fields present at both input ports, the relative phase of the two fields entering creates an interference effect and the probability can differ significantly from 50/50. Read up on Mach-Zehnder interferometers, if you are interested in the details.

OK, maybe I'm misunderstanding you (probable) or you're misunderstanding my question (possible). The paper I linked shows two interference patterns when coincidence counting is done: -

29o49bs.jpg

2iqk48j.jpg


So if a single photon (half photon??) enters the final BS, which has a 50/50 output, why don't these patterns look like this?

9uc8ew.jpg
 
Last edited:
  • #27
Gentlemen, please correct a layman if I’m wrong, but this is how I understand DCQE:

600px-Kim_EtAl_Quantum_Eraser.svg.png

EDIT: Isn’t something wrong with this picture? If you look at the red/blue path to D1 and D2 via BSa b c this would add a 'phase' depending on red or blue, no? The path is thru+mirrored vs. thru+thru?? :uhh:

After the whole experiment is completed, we get a bunch of data in the detectors (D0,D1,D2,D3,D4) and the coincidence counter.

What we’re interested in is an interference pattern in the data of the signal photons in detector D0, but all we find there is 'random noise' if we inspect this data alone.

However, using the coincidence counter to match D0 data with the entangled twin idler photons, we do get an interference pattern for detectors D1/D2 (no path info), and when matching with detectors D3/D4 (path info) the interference pattern is lost.

Here’s an overlay of data matching. The blue dots represent matching coincidence counts for D0 + D2 and the white dots represent matching coincidence counts for D0 + D3:

nn2xjk.png


Hence, nothing is erased or changed in retrospect.

To me, this means that if one were to look only in the data for D1/D2 (no path info) we will also see the interference pattern there, and in the data solely for D3/D4 (path info) the interference pattern is lost.

What is really weird though (to me) is that 120+ and 100+ photon counts (red circles) are found at this position on the x-axis at D0, and this data was recorded 8 ns earlier than that of the idler!

How the h*ll did they get there??
 
Last edited:
  • #28
OK, maybe I'm misunderstanding you (probable)

Hehe Joncon, interesting... I don’t know if we’re asking the same question here... but if you add CR+LF [Enter] between picture 29o49bs.jpg and 2iqk48j.jpg this page will be much nicer to look at... :wink:
 
  • #29
Ah OK, it looked fine on my screen but maybe not for different resolutions. Any better?
 
  • #30
Thanks, much better!
(It was okay with the browser in full screen, but I’m a sneaky bastard when comes to space :smile:)


Hopefully you will get some ideas on how the graphs work from my post... The last R03 graph is a data match (coincidence count) between detectors D0 + D3.
 
  • #31
My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.
The net effect of erasing info is that you then have less info, which would seem to reveal less, not more, than the case where you have more info.

What's unexplained about the double-slit experiment, the essential conundrum, is 1) if what's going through the slits is a wavefront (or sequence of wavefronts, ie., a wave train), then why the extremely localized detections, and 2) if what's going through the slits is a particle (or sequence of particles), then why the interference pattern?

Since there's no way, currently (maybe ever), to answer this question, the conundrum is spoken of in terms of wave-particle duality. A nice expression of our ignorance regarding what's actually the case wrt quantum-level phenomena.

So, yes, there's something fundamental and unexplained (perhaps unexplainable) about the nature of the results of the double-slit experiment -- and any experiment which entails individual particle detection and interference patterns is a "reflection" of this conundrum.
 
  • #32
The bottomline of the Scarcelli and Shih paper Jon cited

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."
Can you say again what that paper is? I was on a thread on here some months back where quite a few self-styled quantum physics experts told me I was nuts to suggest that delayed choice experiments had a classical analog, and indeed the only thing that made the experiment strange was the attempt to connect it with the concept of discrete particles behaving independently of each other-- a notion never encountered in classical wave mechanics.

In other words, classical wave mechanics has no difficulty with DCQE, because it doesn't need to support a particle concept there. Perhaps much, if not all, of the difficulties in interpreting DCQE stem from over-interpreting the concept of a local particle. So I'm not wild about the time-symmetric interpretation's tendency to imagine physical effects traveling backward in time to the origin, then forward along entangled paths, because it's too literal a description of a process that could easily be framed as simply us "changing our story" about what happened in some past event.
Now, as you have entangled photons, the light itself is not phase stable and therefore incoherent, so you do not get a double slit or Mach-Zehnder-interference pattern by looking at any of the two sides alone (the phase differences are different for each repeated emission of a photon pair). What is however well defined is loosely speaking the relative phase of the entangled biphoton state. The phase difference differs from photon pair to photon pair, but it is of course the same for both photons forming the pair. Now if one photon is detected at D2, that gives you some information about the phase difference for the photon pair examined right now. The phase difference will surely not have a value that will cause detections at D1 and it is more likely that it is a phase difference which is connected with a high detection probability at D2. Now that you have some information about the phase difference, you also get some information about the most probable detection positions of the entangled partner on the double slit side of the experiment.
This is the clearest description of "what is really happening" in the DCQE experiment that I've ever seen.
 
  • #33
Ken G said:
... if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term ...
Good point, imo.

Ken G said:
I think we need to ask, "just what is time symmetric, the reality, or our way of interpreting the reality?" I would say it is the latter ...
Another good point, imo. As well as other good points which I won't reproduce here.

So, do so-called quantum erasure experiments inform wrt the reality underlying instrumental behavior? Or, are we still left with the fundamental conundrum illustrated by quantum double-slit experiments?
 
  • #34
So if a single photon (half photon??) enters the final BS, which has a 50/50 output, why don't these patterns look like this?

Assuming the setup as shown in DevilsAvocado's post, you always have two fields arriving at the final BS. One from the red path and one from the blue path. As you cannot say that the photon that will be detected later has taken one of these paths, you need to take both of them into account which then gives the interference effect mentioned earlier. If you just send single photons down one of these paths, you will instead get a pattern like the last one you posted.

Ken G said:
Can you say again what that paper is? I was on a thread on here some months back where quite a few self-styled quantum physics experts told me I was nuts to suggest that delayed choice experiments had a classical analog, and indeed the only thing that made the experiment strange was the attempt to connect it with the concept of discrete particles behaving independently of each other-- a notion never encountered in classical wave mechanics.

G. Scarcelli et al., "Random delayed-choice quantum eraser via two-photon imaging", The European Physical Journal D 44(1), 167-173 (2007). You can also find it on arXiv.

However, one should not take the classical analogy too far. The two-photon probability amplitude can be quite a non-classical entity.

DevilsAvocado said:
To me, this means that if one were to look only in the data for D1/D2 (no path info) we will also see the interference pattern there, and in the data sole for D3/D4 (path info) the interference pattern is lost.

D1 and D2 are bucket detectors. If you just look at them, all you get is some constant count rate which obviously cannot give any interference pattern. You really need the additional information obtained from coincidence counting at various positions of D0 to get some kind of pattern.
 
  • #35
D1 and D2 are bucket detectors. If you just look at them, all you get is some constant count rate which obviously cannot give any interference pattern. You really need the additional information obtained from coincidence counting at various positions of D0 to get some kind of pattern.

Of course!

I totally missed the |x in the picture, sorry... :blushing:

So what you get is a number of 'blind' (no position info) detections in D2 (only talking about D0+D2 now), and corresponding entangled twin detections in D0, where the position on the x-axis for D0 is stored. For both D0 and D2 we also store the time tag in the coincidence counter.

This enables us to later filter out those photons in D0 which correspond to D2, and this information forms the interference pattern in graph R02.
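The filtering step itself is, in essence, just matching time tags between the D0 event list and the bucket-detector event list. A minimal sketch of that bookkeeping (all numbers, including the 8 ns delay and the 1 ns window, are made up for illustration; this is not the actual Kim et al. electronics):

```python
# Hypothetical event records: (time_tag_ns, x_position) for D0,
# and bare time tags for the bucket detector D2.
d0_events = [(100.0, -0.4), (250.0, 1.1), (400.0, 0.2), (550.0, -1.3)]
d2_tags = [108.0, 408.0]   # idler arrives ~8 ns after its signal twin

def coincidences(d0_events, idler_tags, offset_ns=8.0, window_ns=1.0):
    """Return the D0 positions whose time tag matches an idler
    detection, after correcting for the fixed path-length delay.
    Only these filtered positions build up a pattern like R02."""
    matched = []
    for t0, x in d0_events:
        if any(abs((t0 + offset_ns) - ti) <= window_ns for ti in idler_tags):
            matched.append(x)
    return matched

print(coincidences(d0_events, d2_tags))   # -> [-0.4, 0.2]
```

Nothing here reaches backward in time: the D0 data sits unchanged on disk, and the later idler tags only select which already-recorded positions to keep.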

My question remains: How on Earth could we expect to get any interference pattern out of the data in D0?? Either it should be there all the time, or nothing at all, right?? Why is there even a 'seed' for anything that later could be filtered out into an interference pattern?? And this data at D0 is recorded 8 ns earlier than that of D2?? We could easily extend this to seconds, hours...

I don’t get it...


P.S. Wouldn’t we be able to tell 'which path' from the phase differences in the set up in 'my' picture in post #27?
 
Last edited:
