Quantum Eraser and Its Implications

NewPeter
Hi all. I am new here, and am very interested in developments in theoretical physics, though I am not trained as a scientist. I am hoping some of you can help answer a question.

The quantum eraser experiment is said to prove that when which-path information is "erased" we get an interference pattern; the wave function has not collapsed, even though the "erasure" took place after the which-path information was initially obtained.

At the same time, we know that this description does not apply when which-path information is destroyed by at least some means other than those employed in the Kim et al. experiment. For example, if in the basic double-slit experiment we irretrievably obliterate the measuring device and the which-path data it contains before looking at the screen, we will not see an interference pattern. "Erasure" of the which-path information does not in that case prevent wave-function collapse.

Does this not suggest, then, that it is not the information about which path was followed that collapses the wave function (or, as a corollary, that it is not the erasure of that which-path information that results in the fringed pattern in the quantum eraser experiment)? Mightn't it suggest, for example, that the quantum eraser experiment works instead because the entangled photons are recombined, and that there is some as yet undiscovered characteristic of that recombined photon that produces what looks to us like a pattern caused by wave interference (something that would have broad implications)?

Thanks so much,
Peter
 
NewPeter said:
For example, if in the basic double-slit experiment we irretrievably obliterate the measuring device and the which-path data it contains before looking at the screen, we will not see an interference pattern. "Erasure" of the which-path information does not in that case prevent wave-function collapse.

Welcome to PhysicsForums, NewPeter!

I am not aware of any experiments in which the measuring devices are destroyed. :smile: It would be a bit expensive.

The issue with any eraser is that IF you can restore indistinguishability (is that a word?) of the results of an intermediate measurement, then the effect is erasure. So erasure restores a superposition.

As best the evidence indicates, erasure can occur before or after (yes!) the final outcome is obtained (i.e., whether there is interference or not). This assumes you use entangled particles, of course. I will tell you that following the logic of the erasure experiments is quite complicated, and not easy to discuss. But if you have a specific question, I will do my best to answer (as will many others here).
 
Thanks for your reply. I'll try to restate. My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter
 
NewPeter said:
... My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter

Well it could. The easiest explanation is to consider the probability wave as "real" until finally actualized. What does that mean? No one really knows. All we can say is that that we can make statistical predictions without having an understanding of the underlying mechanism. So is there something we don't understand? I would say so. Between the available interpretations, it is simply a guess. So far no one has an answer as to how to discern one from the other experimentally. I think we are getting closer every day. The telling points are these:

a) There appears to be a non-local component
b) There appears to be a non-causal component
c) There appears to be a non-realistic component

But none of these are completely proven individually. We know from Bell that one or more is correct.
 
NewPeter said:
Thanks for your reply. I'll try to restate. My question is about what the eraser experiment actually proves. It seems to me that other mechanisms for erasing the which-path information (for example, actually and irrevocably erasing that information before looking at it) do not create the same result (i.e., a restoration of superposition). Consequently, I am wondering whether it is, in fact, the "erasure" of the information that restores superposition in the quantum eraser experiment, or whether something else about the recombination of the entangled particles is what results in the interference pattern. My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.

Thanks,
Peter

Hi Peter, welcome to the forum.

The delayed choice quantum eraser (DCQE) would seem to suggest that the past can be changed/affected by the future.

However, I think the DCQE is a "filtering trick": only those photons get filtered through that would support the pattern (interference or non-interference), creating the illusion that the past can be affected by the future.

- In the DCQE, sub-samples of samples filter through.
- The interference pattern is hidden/embedded in the non-interference pattern, and the interference pattern is gleaned/filtered out in some of the DCQE scenarios.

Thus the DCQE does not prove anything beyond the now well-known/expected properties/phenomena of
1. quantum entanglement and
2. single-particle interference.

The DCQE is simply a mixture of the above two phenomena in a single experiment.
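The "filtering trick" picture lends itself to a toy Monte Carlo sketch. This is purely illustrative, not a model of the real optics: the fringe frequency k, the sampling scheme, and the helper functions sample_positions and fringe_visibility are all invented for the demo. The point it makes is the one above: each tagged sub-sample shows fringes, while the pooled data do not.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_positions(n, phase, k=20.0):
    """Rejection-sample screen positions from 1 + cos(k*x + phase) on [-1, 1]."""
    xs = []
    while len(xs) < n:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) < 1.0 + np.cos(k * x + phase):
            xs.append(x)
    return np.array(xs)

def fringe_visibility(x, k=20.0):
    """Magnitude of the k-frequency Fourier component of the hit distribution."""
    return abs(np.mean(np.exp(1j * k * x)))

# Each signal photon carries an idler tag; the two tagged sub-ensembles
# have complementary fringe patterns (phase 0 vs phase pi).
n = 5000
x_a = sample_positions(n, 0.0)        # hits whose idler gave outcome "a"
x_b = sample_positions(n, np.pi)      # hits whose idler gave outcome "b"
x_all = np.concatenate([x_a, x_b])    # what the screen alone records

print(fringe_visibility(x_a))    # high: the sorted sub-sample shows fringes
print(fringe_visibility(x_all))  # near zero: the pooled data show none
```

Selecting on the idler tag (the "coincidence counting") is the only thing that changes between the two numbers; the list of screen positions itself is fixed.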
 
San K said:
... However, I think the DCQE is a "filtering trick": only those photons get filtered through that would support the pattern (interference or non-interference), creating the illusion that the past can be affected by the future.
...

That is certainly one "interpretation". But it must be via some kind of non-local effect. After all, there can't be an attribute called "interfering" that is realistic (well defined), because the partner's measurement basis can be changed (erased or not) after the interference is detected.

Remember, the DCQE is set up to provide a context which spans both time and space (spacetime separation). The results agree with the context alone.

http://grad.physics.sunysb.edu/~amarch/Walborn.pdf

"We report a quantum eraser experiment which actually uses a Young double slit to create interference. The experiment can be considered an optical analogy of an experiment proposed by Scully, Englert, and Walther. One photon of an entangled pair is incident on a Young double slit of appropriate dimensions to create an interference pattern in a distant detection region. Quarter-wave plates, oriented so that their fast axes are orthogonal, are placed in front of each slit to serve as which-path markers. The quarter-wave plates mark the polarization of the interfering photon and thus destroy the interference pattern. To recover interference, we measure the polarization of the other entangled photon. In addition, we perform the experiment under ‘‘delayed erasure’’ circumstances."
 
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?
 
NewPeter said:
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?

I don't know how to agree or disagree with your question. Erasure can lead to interference in these setups - that would be where which-path information is not available.
 
NewPeter said:
So are you both agreeing that it is not the erasure of the previously observed information that leads to the interference pattern? Is that a widely shared understanding?
I'd say your question points to the reasons why "erasure" is such a misnomer here. First of all, there is no such thing as "erasure" of "previously observed information"-- if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term, a better word would be "retroactive non-actualization". But that's a lot longer, so you see why "erasure" is used instead! The point is, it is quite important that we avoid the conceptually fatal tendency to imagine that some reality has been actualized before it has actually been put to a test that comes out A if the reality is that way and B if it is some other way.

For example, when we see a pattern that is the sum of two one-slit patterns, rather than a two-slit interference pattern, we are tempted to conclude "no two-slit interference occurred there." Is this a valid conclusion? No, it isn't, because we have no idea what kinds of interference occurred there; our experiment has not put that question to the test-- it has only put to the test the net outcome of all the interferences that occurred there, and the net outcome is not a two-slit interference pattern. But living within that net outcome could be all kinds of contributory outcomes, and above all, the quantum erasure experiment tells us that two-slit interference is indeed a contributory outcome to the net non-two-slit pattern. Erasure is simply the means for disentangling those contributory two-slit outcomes, whereas failure to erase simply means failure to have access to that mode of disentanglement. "Erasure" doesn't erase anything at all-- indeed, what it actually does is much closer to not erasing some information that we might otherwise fail to access.
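The claim that overlapping one-slit-like patterns can literally be a sum of two offset fringe patterns is elementary trigonometry, and a short numerical check makes it concrete. The Gaussian envelope and the fringe frequency k below are arbitrary stand-ins, not parameters of any actual experiment:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
k = 20.0
envelope = np.exp(-x**2 / 0.18)   # smooth, fringe-free toy envelope

# Two complementary sub-patterns: fringes and anti-fringes under one envelope.
fringes      = envelope * (1.0 + np.cos(k * x)) / 2.0
anti_fringes = envelope * (1.0 - np.cos(k * x)) / 2.0

total = fringes + anti_fringes
print(np.max(np.abs(total - envelope)))  # essentially zero: the sum has no fringes
```

Each sub-pattern oscillates strongly, yet their sum is the smooth envelope alone: the fringes cancel term by term, which is exactly why the unsorted screen pattern carries no visible trace of them.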
 
  • #10
Ken G said:
I'd say your question points to the reasons why "erasure" is such a misnomer here. First of all, there is no such thing as "erasure" of "previously observed information"-- if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term, a better word would be "retroactive non-actualization". But that's a lot longer, so you see why "erasure" is used instead!

Agreed.

Ken G said:
The point is, it is quite important that we avoid the conceptually fatal tendency to imagine that some reality has been actualized before it has actually been put to a test that comes out A if the reality is that way and B if it is some other way.

Well put, Ken.
 
  • #11
DrChinese said:
That is certainly one "interpretation". But it must be via some kind of non-local effect. After all, there can't be an attribute called "interfering" that is realistic (well defined), because the partner's measurement basis can be changed (erased or not) after the interference is detected.
Yes, DrChinese, quantum entanglement (QE) phenomena are non-local; we are on the same page on that. How QE works is anyone's guess; no one knows yet.

When the partner's measurement basis is changed, only those "delayed" photons will pass through the filter(s) that support the story/pattern that maps to the new measurement basis. Later, when the "delayed" photons are compared (via the coincidence counter) with the "initial twin photons", the pattern will map to the new/changed measurement basis.

However, I don't think the future is changing the past, or that any points in the future/present are changing any points in the past, in the DCQE experiment.
 
  • #12
Very interesting response, Ken. Am I correct that what you're saying is that observation "extracts information," and that that is what causes waveform collapse? Is that a widely shared understanding, or is the reason observation causes waveform collapse debatable? Are you saying that in the DCQE the information is never extracted? I thought the point was that at a point after which-path information is available, entangled particles (which, frankly, I don't understand) are recombined (something I also don't understand) in a way that makes it (again) impossible to determine the which-path information, and the result is an interference pattern. (And what I was struggling with in my question was what the experiment demonstrates in light of the fact that other methods of putting which-path information out of reach, e.g., having a mechanical observer that never records the information for human observation or obliterating the information absolutely, do not result in an interference pattern.)

Thanks,
Peter
 
  • #13
As described in Wikipedia:
“The Quantum eraser experiment is a double-slit experiment in which particle entanglement and selective polarization is used to determine which slit a particle goes through by measuring the particle's entangled partner. This entangled partner never enters the double slit experiment. Earlier experiments with the basic Young double-slit experiment had already determined that anything done to discover by which path (through slit A or through slit B) a photon traveled to the detection screen would destroy the interference effect that is responsible for the production of interference fringes on the detection screen. …
“The advantage of manipulating the entangled partners of the photons in the double-slit part of the experimental apparatus is that experimenters can destroy or restore the interference pattern in the latter without changing anything in that part of the apparatus. Experimenters do so by manipulating the entangled photon; and they can do so before or after its partner has entered or after it has exited the double-slits and other elements of experimental apparatus between the photon emitter and the detection screen. So, under conditions where the double-slit part of the experiment has been set up to prevent the appearance of interference phenomena (because there is definitive "which path" information present), the quantum eraser can be used to effectively erase that information. In doing so, the experimenter restores interference without altering the double-slit part of the experimental apparatus. An event that is remote in space and in time can restore the readily visible interference pattern that manifests itself through the constructive and destructive wave interference. …
A variation of this experiment, the delayed choice quantum eraser experiment, “allows the decision whether to measure or destroy the ‘which path’ information to be delayed until after the entangled particle partner (the one going through the slits) has either interfered with itself or not. Doing so appears to have the bizarre effect of determining the outcome of an event after it has already occurred.” See Wikipedia’s articles on quantum eraser experiments, delayed-choice quantum eraser experiments, Wheeler's delayed choice experiment, the Afshar experiment, and retrocausality. See also “Random Delayed-Choice Quantum Eraser via Two-Photon Imaging” by Giuliano Scarcelli, Yu Zhou, and Yanhua Shih (http://arxiv.org/abs/quant-ph/0512207v1).
 
  • #14
My purpose in providing the above description of the delayed choice quantum erasure experiments was to provide a foundation on which to argue that time reversibility is a necessary element for any explanation of these experiments. I wish to first examine whether the common quantum explanations for these experimental results (e.g. collapse of the wave function and decoherence) are viable. It is my "belief" that they are not. First, if we assume that wave functions actually collapse, it is my understanding this event is not time reversible such that no interference pattern could be recovered for the signal photons once the collapse occurred.
Please reflect on what is happening to the information in the time-reversible path: (i) when the photon(s) pass through the double slit; (ii) when the down-converter creates an entangled “signal” and “idler” photon; (iii) when the idler photon passively retains or is actively imparted "which path" information; (iv) when the signal photon reaches the detector; (v) when the active or passive erasure of the idler photon's "which path" information occurs (which theoretically could occur years after the signal photon reached the detector); (vi) when the measurement of the idler photon occurs, again theoretically years after the interference pattern was or was not recorded for the signal photon; and (vii) when the existence or non-existence of the interference pattern becomes known to an observer.
To the extent any of these events results in interactions between the quantum system with its environments, most physicists would currently interpret these interactions in the context of decoherence.

According to Wikipedia: “quantum decoherence is the mechanism by which quantum systems interact with their environments to exhibit probabilistically additive behavior - a feature of classical physics - and give the appearance of wavefunction collapse. Decoherence occurs when a system interacts with its environment, or any complex external system, in such a thermodynamically irreversible way that ensures different elements in the quantum superposition of the system+environment's wavefunction can no longer interfere with each other. … Decoherence does not provide a mechanism for the actual wave function collapse; rather it provides a mechanism for the appearance of wavefunction collapse. The quantum nature of the system is simply "leaked" into the environment so that a total superposition of the wavefunction still exists, but exists beyond the realm of measurement.”

If we assume, for the purposes of this discussion, that decoherence has occurred, I recognize that all components of the wave function are presumed to still exist in a global superposition even after a measurement or environmental interaction has rendered the prior coherences no longer "accessible" by local observers. I further understand that all lesser interactions are believed to be time reversible. However, this analysis requires that I ask a question that I am not aware others are asking: did any of the interactions in these experiments increase the entropy of the system? Of course, entropy, like QM, is also time symmetric. However, it is well established that, in a time-symmetric system, entropy should increase both backward and forward in time. I also recognize that entropy is not deterministic, but only probabilistic. However, if the time-reversal path includes an event where entropy increased, should we not then ask: how is the entropy that was introduced by this interaction undone?

Please note that I have tentatively excluded the active erasure experiments from this conjecture in recognition of Huw Price's paper "Boltzmann's Time Bomb", because active erasure (such as causing all the idler photons to have the same spin) might be seen to have created a localized low-entropy state.

Nonetheless, even if we only consider the passive erasure experiments, the advocates of wavefunction collapse and quantum decoherence have a problem: how does a photon that starts in a state of lower coherence (i.e. higher entropy) go backwards in time and, in doing so, regain the greater coherence (i.e. lower entropy) that it previously had? See “The Thermodynamical Arrow Of Time: Reinterpreting The Boltzmann-Schuetz Argument” by Milan M. Ćirković
(http://philsci-archive.pitt.edu/archive/00000941/03/Boltzmann_final5.doc)
and “Probability, Arrow of Time and Decoherence” by Guido Bacciagaluppi (http://arxiv.org/abs/quant-ph/0701225v1).

The foregoing has hopefully "fleshed out" the dilemma for the conventional interpretations of QM. Yakir Aharonov's time-symmetric interpretation of quantum mechanics, by retaining the information from both the initial and final boundary conditions (the point of origination and the final actualization event), provides a first-order resolution to the problem.

I just posted a discussion of this in the context of the EPR paradox. See: http://www.physicsforums.com/showthread.php?t=546740
 
  • #15
NewPeter said:
Very interesting response, Ken. Am I correct that what you're saying is that observation "extracts information," and that that is what causes waveform collapse?
Yes, I'm saying that any act of observation is an act of conversion-- we convert potential truths about a system that have not been actualized into actualized truths. In classical physics, we imagined that this conversion was passive-- the truths were "already there" before we actualized them. But in quantum mechanics, we find that the conversion is quite an active participant in the reality of the situation, and the things we "discovered" about the system were simply not true about it before the measurement led to their discovery. And along with this comes the key point that we must notice what information we have actually extracted (what truths have been actualized), and not assume things that are not in evidence.

In particular, we know that if we do a two-slit experiment and actualize the truth of which slit the particles went through (by correlating detection hits with which-way information), then the particles that went through each slit will make a one-slit pattern in front of that slit. But if we see a detection pattern that looks like two superimposed one-slit patterns, can we invert that information to conclude that which slit the particles went through must have been actualized even if we have no evidence that it has? No, we cannot say that, because we cannot demonstrate actualization of that truth just from looking at overlapping one-slit patterns in the aggregate detection pattern.

Instead, it seems to me that the quantum erasure experiment demonstrates above all that overlapping one-slit patterns can be produced as a result of two-slit interference patterns, offset from each other to produce something that looks just like two one-slit patterns because of some truth that was actualized that mimics the action of which-way information without really actualizing which-way information for those particles. Just having the detection pattern doesn't tell us that-- we need some additional prescription for sorting the hits in the pattern with the slit they went through (as in the non-erased case) to actualize which-way information, but if we don't have that (as in the erased case), we can use a different prescription to sort the hits (well after the fact of their being detected) that demonstrates two spatially offset two-slit patterns that go into the detection pattern, without any which-way information involved in the sorting.

It is not the pattern itself that is determined by actualizing that information, because we can actualize that information long after the pattern is done; it is just how we explain the pattern (by sorting its contributors a certain way) that involves actualizing (or not actualizing) which-way information. If you don't actualize the which-way information, that information is never extracted (the necessary coherences are not destroyed), so that information is still "in there", i.e. the coherences are still in there, and can then be used to explain the detection pattern (long after the fact of its creation) without appealing to any which-way information.

I guess the bottom line to what I'm saying is that extracted information does not determine what happens, it only determines how we attribute causes to what happened. That cause-attribution can occur long after the fact, just as I can find some novel explanation for why World War I happened long after that war is over, if I extracted some new information that was previously encoded in the history in a way that had gone unnoticed. We should not imagine that World War I required my attribution in order to happen-- cause and effect is a mental process, not a physical one. Above all, I'm saying we would not say that my new attribution to the cause of World War I propagated backward in time and became the reason that World War I happened, any more than decisions I make later can go back in time and change a detection pattern on a screen.
Is that a widely shared understanding, or is the reason observation causes waveform collapse debatable?
The whole issue of collapse is highly debatable. Many interpretations of quantum mechanics don't recognize any kind of collapse at all, either because the state continues uncollapsed (MWI), or because a wavefunction is not a state of a system at all (ensemble interpretation).
Are you saying that in the DCQE the information is never extracted?
Yes, which-way information is never extracted when the choice to "erase" is made. Alternatively, the choice can be made to extract that information, after the fact. Either way, the detection pattern under study is not any different-- it is already done after all. What is different is the way it is explained in terms of contributing parts-- it is separated into parts by correlating the original detector hits with the entangled results. But the whole can be a sum of parts in different ways and still be the same whole-- because here the "whole" is a detection pattern that has lost a huge amount of the information/coherences that went into making it. That information is still encoded in there somewhere, and can be extracted by entangled experiments, but you can't say what went into that pattern just by looking at it, and indeed nature doesn't say what went into it either-- until you actualize that information, after the fact.

(And what I was struggling with in my question was what the experiment demonstrates in light of the fact that other methods of putting which-path information out of reach, e.g., having a mechanical observer that never records the information for human observation or obliterating the information absolutely, do not result in an interference pattern.)
Whether or not the information that has been actualized/extracted is recorded or noticed in any way is largely irrelevant, and is not a fruitful path to follow in your analysis. Instead, consider what information is available, whether or not it is used or noticed. For this, it suffices to imagine a hypothetical observer, but not a hypothetical apparatus-- that's the key point, the apparatus is classical and macroscopic, so we can get away with the hypothetical observer concept on our "end" of the apparatus. But we can't get away with imagining "super-observers" who know things about a system without any apparatus capable of establishing/extracting/actualizing that truth.
 
  • #16
Jon_Trevathan said:
My purpose in providing the above description of the delayed choice quantum erasure experiments was to provide a foundation on which to argue that time reversibility is a necessary element for any explanation of these experiments. I wish to first examine whether the common quantum explanations for these experimental results (e.g. collapse of the wave function and decoherence) are viable. It is my "belief" that they are not. First, if we assume that wave functions actually collapse, it is my understanding this event is not time reversible such that no interference pattern could be recovered for the signal photons once the collapse occurred.

You are already wrong at this point. The most important point of the wiki article is this sentence: "Doing so appears to have the bizarre effect of determining the outcome of an event after it has already occurred."

And it should be taken seriously. It appears to have this effect - but it does not have this effect. There is no determining the outcome of the event after it has occurred. You just throw away a part of the data by doing coincidence counting. The whole dataset never changes and does not care about whether you erase which-way info or not.
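The claim that the full dataset never changes, and that the "choice" only re-partitions it, can be checked in a toy simulation (invented numbers, not real data; the tag array stands in for the idler measurement record):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 20.0

# A fixed record of detection events: a screen position plus an idler tag.
# This record is written once and is never modified by any later analysis.
tags = rng.integers(0, 2, size=10000)
phases = np.where(tags == 0, 0.0, np.pi)
positions = []
for ph in phases:
    while True:  # rejection-sample from 1 + cos(k*x + ph) on [-1, 1]
        xi = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, 2.0) < 1.0 + np.cos(k * xi + ph):
            positions.append(xi)
            break
x = np.array(positions)

bins = np.linspace(-1.0, 1.0, 41)
h_all, _ = np.histogram(x, bins)           # analysis that ignores the tags
h_0, _ = np.histogram(x[tags == 0], bins)  # coincidence-sorted sub-histograms
h_1, _ = np.histogram(x[tags == 1], bins)

# "Erasing" or "keeping" which-way info only re-partitions the same events:
print(np.array_equal(h_0 + h_1, h_all))  # True, bin by bin
```

Nothing about a later sorting decision can alter h_all; the decision only determines which partition of the fixed event list we look at.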
 
  • #17
Cthugha said:
... And it should be taken seriously. It appears to have this effect - but it does not have this effect. There is no determining the outcome of the event after it has occurred. You just throw away a part of the data by doing coincidence counting. The whole dataset never changes and does not care about whether you erase which-way info or not.

I think we would need to know more about the underlying mechanism to make this statement. There are other experiments, such as this one, that "tend" to point the other way.

http://arxiv.org/abs/quant-ph/0201134

See 2nd paragraph, page 5. The decision to entangle 2 photons can be made AFTER their polarization was observed. Jon_Trevathan's "V" explanation (see other thread) becomes a "W" in this case.

I'm not saying your explanation isn't correct, just that we appear to have no way to see deep enough to discern one from the other.
 
  • #18
DrChinese said:
There are other experiments, such as this one, that "tend" to point the other way.

http://arxiv.org/abs/quant-ph/0201134

See 2nd paragraph, page 5. The decision to entangle 2 photons can be made AFTER their polarization was observed. Jon_Trevathan's "V" explanation (see other thread) becomes a "W" in this case.

I'm not saying your explanation isn't correct, just that we appear to have no way to see deep enough to discern one from the other.

Do they really point the other way? The paper says "Therefore, this result indicate that the time ordering of the detection events has no influence on the results and strengthens the argument of A. Peres: this paradox does not arise if the correctness of quantum mechanics is firmly believed."
I fully agree with that.

Or to rephrase, when the paper says "Thus depending on Alice’s later measurement, Bob’s earlier results either indicate that photons 0 and 3 were entangled or photons 0 and 1 and photons 2 and 3. This means that the physical interpretation of his results depends on Alice’s later decision.", I fully agree. The results on Bob's side do not change or depend on Alice's choice. The physical interpretation clearly does. For me that does not point to any "the present changes the past"-scenario. It clearly does point towards a non-local scenario, though.
 
  • #19
Cthugha said:
Do they really point the other way? ...

(Not really disagreeing, by the way...)

I think so. The problem with the non-local effect idea is that the effects span time as well as space (whereas non-local implies spanning of space alone). The advantage (if you want to call it that) to the time-symmetric interpretation that Jon was mentioning is that this time spanning falls out naturally and there is no underlying non-local effect to explain. Just the setup for the experiment ("W") suggests a time symmetric vision by the authors, but that is strictly a guess. Of course there are some disadvantages too.

We can probably agree that different interpretations give different explanations, and that in many essential ways they are equivalent.
 
  • #20
I think we need to ask, "just what is time symmetric, the reality, or our way of interpreting the reality?" I would say it is the latter, which would seem to be in agreement with Cthugha. (Not that it disagrees with anyone else.) To me, a key lesson of quantum erasure, and quantum mechanics as a whole, is that our tendency to uniquely associate cause and effect (which are attributes of an interpretation) with actual events is illusory. When what actually happened can be interpreted in multiple ways, perhaps because some entanglements have not been put to some test yet, then so can cause and effect, and the time symmetry that is being referred to is a freedom in our ability to attribute reasons for what happened-- not a freedom exhibited by what actually happened.

That was the point of my "World War I" analogy (albeit classical)-- in history, the tendency is to imagine there "really were some reasons" that World War I happened, and it is the historians' job to figure out the appropriate weight of those reasons. However, both the "reasons" and the "weight" behind them are purely constructs of our intelligence-- they have more to do with how we think about what happened than about what actually happened. It's all the history we ever get-- the stories we tell. So if someone makes some totally new discovery about World War I, it can completely change, after the fact, our entire notion of what World War I was. The events didn't change-- there is no time symmetry in the actual chain of events. But our interpretation of what happened, which is pretty close to what we mean by "what happened", can certainly change after the fact, and does exhibit a kind of time symmetry for that reason.
 
  • #21
While this subject is up again can somebody answer a question I've asked previously? (I never really got a satisfactory answer)

The question is specifically about this experiment - http://arxiv.org/abs/quant-ph/9903047

The thing I don't understand is that the D1 and D2 detectors both show interference fringes and anti-fringes when the sub-samples are examined. What I don't get is that the idler photons encounter a beam splitter before going to either of the detectors. As I understand it, the chance of passing through this BS or reflecting off it is 50/50. So I would expect no interference patterns in these sub-samples.

To put it another way - how do the idler photons, whose signal partners contribute to an interference pattern, always end up at the same detector?
 
Last edited by a moderator:
  • #22
DrChinese said:
We can probably agree that different interpretations give different explanations, and that in many essential ways they are equivalent.

Well, sure. I fully agree that you can create an interpretation that involves backward causation or something similar and get it to match reality. I just do not think there has been any experiment out there that really needs such an explanation. The bottom line of the Scarcelli and Shih paper Jon cited

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."

is fully sufficient and very important as this aspect is often ignored.

Joncon said:
To put it another way - how do the idler photons, whose signal partners contribute to an interference pattern, always end up at the same detector?

See above. You cannot say that there are signal photons that contribute to an interference pattern. The interference patterns are always two-photon interference patterns that require detecting both photons. In simplifying terms, the beam splitter and detectors D1 and D2 form a kind of Mach-Zehnder interferometer, and it is well known that the relative intensities at the exit ports of the beam splitter depend on the phase of the light field (or, to be more precise, on the phase differences between the indistinguishable probability amplitudes). On the other side, the signal photons basically pass a double slit. The resulting interference pattern of course also depends on the phase of the light field/the probability amplitudes.

Now, as you have entangled photons, the light itself is not phase stable and therefore incoherent, so you do not get a double-slit or Mach-Zehnder interference pattern by looking at either of the two sides alone (the phase differences are different for each repeated emission of a photon pair). What is well defined, however, is, loosely speaking, the relative phase of the entangled biphoton state. The phase difference differs from photon pair to photon pair, but it is of course the same for both photons forming the pair. Now if one photon is detected at D2, that gives you some information about the phase difference for the photon pair examined right now. The phase difference will surely not have a value that would cause a detection at D1, and it is more likely to be a phase difference connected with a high detection probability at D2. Now that you have some information about the phase difference, you also get some information about the most probable detection positions of the entangled partner on the double-slit side of the experiment.

In other words, the detection events on both sides are not statistically independent, but a detection at some position at scanner D0 is directly linked to a higher or lower probability to detect a photon at D1/D2. For every position of D0, there is some preference whether it is more likely that photons will be detected at D1 or D2.
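
If it helps, here is a toy Monte Carlo version of that picture. It is a deliberate simplification of my own, not the actual Kim et al. geometry: each pair shares one random phase, the signal photon lands on a fringe pattern offset by that phase, and the idler's D1/D2 odds depend on the same phase.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200_000

# One shared random phase per entangled pair (different for every emission).
phi = rng.uniform(0, 2 * np.pi, n_pairs)

# Signal photon position x in [-pi, pi], drawn by rejection sampling from a
# fringe pattern whose offset is the pair's phase: p(x) ~ 1 + cos(x + phi).
x = rng.uniform(-np.pi, np.pi, n_pairs)
keep = rng.uniform(0, 2, n_pairs) < 1 + np.cos(x + phi)
x, phi = x[keep], phi[keep]

# Idler photon: detected at D1 with probability cos^2(phi/2), else at D2.
at_d1 = rng.uniform(0, 1, x.size) < np.cos(phi / 2) ** 2

bins = np.linspace(-np.pi, np.pi, 25)
h_all, _ = np.histogram(x, bins)         # all D0 hits, no sorting
h_d1, _ = np.histogram(x[at_d1], bins)   # D0 hits whose idler fired D1
h_d2, _ = np.histogram(x[~at_d1], bins)  # D0 hits whose idler fired D2

def visibility(h):
    return (h.max() - h.min()) / (h.max() + h.min())

print(f"all: {visibility(h_all):.2f}, "
      f"D1: {visibility(h_d1):.2f}, D2: {visibility(h_d2):.2f}")
```

The unsorted D0 histogram comes out flat (visibility near zero), while the coincidence-sorted subsets show complementary fringes and anti-fringes, precisely because the detection events on the two sides are correlated through the shared phase.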
 
  • #23
Cthugha said:
In other words, the detection events on both sides are not statistically independent, but a detection at some position at scanner D0 is directly linked to a higher or lower probability to detect a photon at D1/D2. For every position of D0, there is some preference whether it is more likely that photons will be detected at D1 or D2.

So does that mean the chance of the photon passing through the BS or reflecting off it is NOT 50/50, but is influenced by phase?
 
  • #24
Cthugha said:
...

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."

is fully sufficient and very important as this aspect is often ignored.

...

This is a great point. The system is not separable into components while there is entanglement. Calling them 2 photons is a convenience which is not always justified.
 
  • #25
A 50/50 beam splitter has two input ports and two output ports. If you have a field entering at a single input port, the probability for the corresponding photons to exit via either output port is indeed 50/50. If you have mutually coherent fields present at both input ports, the relative phase of the two entering fields creates an interference effect and the probabilities can differ significantly from 50/50. Read up on Mach-Zehnder interferometers if you are interested in the details.
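
A minimal numerical sketch of this, assuming an ideal lossless beam splitter with the usual 90-degree phase shift on reflection (the function and amplitudes are my own illustration, not from any cited paper):

```python
import numpy as np

# Ideal lossless 50/50 beam splitter acting on two input field amplitudes.
# Convention: the reflected component picks up a factor of i (90-degree shift).
def beam_splitter(a_in, b_in):
    a_out = (a_in + 1j * b_in) / np.sqrt(2)
    b_out = (1j * a_in + b_in) / np.sqrt(2)
    return a_out, b_out

# Single occupied input port: the output split is 50/50, whatever the phase.
a_out, b_out = beam_splitter(1.0, 0.0)
print(abs(a_out) ** 2, abs(b_out) ** 2)

# Mutually coherent fields at both ports: the split depends on their
# relative phase phi and can be anything from 100/0 to 0/100.
for phi in (0.0, np.pi / 2, -np.pi / 2):
    a_out, b_out = beam_splitter(1 / np.sqrt(2), np.exp(1j * phi) / np.sqrt(2))
    print(f"phi = {phi:+.2f}: {abs(a_out)**2:.2f} / {abs(b_out)**2:.2f}")
```

With only one input occupied the split really is 50/50; the phase dependence only appears once both paths feed the same splitter.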
 
  • #26
Cthugha said:
A 50/50 beam splitter has two input ports and two output ports. If you have a field entering at a single input port, the probability for the corresponding photons to exit via either output port is indeed 50/50. If you have mutually coherent fields present at both input ports, the relative phase of the two entering fields creates an interference effect and the probabilities can differ significantly from 50/50. Read up on Mach-Zehnder interferometers if you are interested in the details.

OK, maybe I'm misunderstanding you (probable) or you're misunderstanding my question (possible). The paper I linked shows two interference patterns when coincidence counting is done: -

[image: 29o49bs.jpg (coincidence-count fringes)]

[image: 2iqk48j.jpg (coincidence-count anti-fringes)]


So if a single photon (half photon??) enters the final BS, which has a 50/50 output, why don't these patterns look like this?

[image: 9uc8ew.jpg (expected pattern without interference)]
 
Last edited:
  • #27
Gentlemen, please correct a layman if I’m wrong, but this is how I understand DCQE:

[image: 600px-Kim_EtAl_Quantum_Eraser.svg.png (Kim et al. DCQE setup diagram)]

EDIT: Isn’t something wrong with this picture? If you look at the red/blue paths to D1 and D2 via BSa, BSb, BSc, this would add a 'phase' depending on red or blue, no? The path is thru+mirrored vs. thru+thru?? :rolleyes:

After the whole experiment is completed, we get a bunch of data in the detectors (D0,D1,D2,D3,D4) and the coincidence counter.

What we’re interested in, is an interference pattern in the data of the signal photons in detector D0, but all we find there is 'random noise', if inspecting this data only.

However, using the coincidence counter to match D0 data with the entangled twin idler photons, we do get an interference pattern for detectors D1/D2 (no path info), and when matching with detectors D3/D4 (path info) the interference pattern is lost.

Here’s an overlay of data matching. The blue dots represent matching coincidence counts for D0 + D2 and the white dots represent matching coincidence counts for D0 + D3:

[image: nn2xjk.png (overlay of D0+D2 and D0+D3 coincidence counts)]


Hence, nothing is erased or changed in retrospect.

To me, this means that if one were to look only at the data for D1/D2 (no path info), we would also see the interference pattern there, while in the data for D3/D4 alone (path info) the interference pattern is lost.

What is really weird though (to me) is that 120+ and 100+ photon counts (red circles) are found at this position on the x-axis at D0, and this data was recorded 8 ns earlier than that of the idler!

How the h*ll did it get there??
 
Last edited:
  • #28
Joncon said:
OK, maybe I'm misunderstanding you (probable)

Hehe Joncon, interesting... I don’t know if we’re asking the same question here... but if you add CR+LF [Enter] between picture 29o49bs.jpg and 2iqk48j.jpg this page will be much nicer to look at... :wink:
 
  • #29
Ah OK, it looked fine on my screen but maybe not for different resolutions. Any better?
 
  • #30
Thanks, much better!
(It was okay with the browser in full screen, but I’m a sneaky bastard when it comes to space :smile:)


Hopefully you will get some ideas on how the graphs work from my post... The last R03 graph is a data match (coincidence count) between detectors D0 + D3.
 
  • #31
NewPeter said:
My question is whether it isn't possible that the quantum eraser experiment, rather than telling us anything about what happens when information is erased, actually reflects something unexplained and fundamental about the nature of the results of the double-slit experiment.
The net effect of erasing info is that you then have less info. Which would seem to reveal less, not more, than the case where you have more info.

What's unexplained about the double-slit experiment, the essential conundrum, is 1) if what's going through the slits is a wavefront (or sequence of wavefronts, ie., a wave train), then why the extremely localized detections, and 2) if what's going through the slits is a particle (or sequence of particles), then why the interference pattern?

Since there's no way, currently (maybe ever), to answer this question, the conundrum is spoken of in terms of wave-particle duality. A nice expression of our ignorance regarding what's actually the case wrt quantum-level phenomena.

So, yes, there's something fundamental and unexplained (perhaps unexplainable) about the nature of the results of the double-slit experiment -- and any experiment which entails individual particle detection and interference patterns is a "reflection" of this conundrum.
 
  • #32
Cthugha said:
The bottom line of the Scarcelli and Shih paper Jon cited

"for entangled photons it is misleading and incorrect to interpret the physical phenomena in terms of independent photons. On the contrary the concept of “biphoton” wavepacket has to be introduced to understand the non-local spatio-temporal correlations of such kind of states. Based on such a concept, a complete equivalence between two-photon Fourier optics and classical Fourier optics can be established if the classical electric field is replaced with the two-photon probability amplitude. The physical interpretation of the eraser that is so puzzling in terms of individual photons’ behavior is seen as a straightforward application of two-photon imaging systems if the nonlocal character of the biphoton is taken into account by using Klyshko’s picture."
Can you say again what that paper is? I was on a thread on here some months back where quite a few self-styled quantum physics experts told me I was nuts to suggest that delayed choice experiments had a classical analog, and indeed the only thing that made the experiment strange was the attempt to connect it with the concept of discrete particles behaving independently of each other-- a notion never encountered in classical wave mechanics.

In other words, classical wave mechanics has no difficulty with DCQE, because it doesn't need to support a particle concept there. Perhaps much, if not all, of the difficulty in interpreting DCQE stems from over-interpreting the concept of a local particle. So I'm not wild about the time-symmetric interpretation's tendency to imagine physical effects traveling backward in time to the origin, then forward along entangled paths, because it's too literal a description of a process that could easily be framed as simply us "changing our story" about what happened in some past event.
Now, as you have entangled photons, the light itself is not phase stable and therefore incoherent, so you do not get a double-slit or Mach-Zehnder interference pattern by looking at either of the two sides alone (the phase differences are different for each repeated emission of a photon pair). What is well defined, however, is, loosely speaking, the relative phase of the entangled biphoton state. The phase difference differs from photon pair to photon pair, but it is of course the same for both photons forming the pair. Now if one photon is detected at D2, that gives you some information about the phase difference for the photon pair examined right now. The phase difference will surely not have a value that would cause a detection at D1, and it is more likely to be a phase difference connected with a high detection probability at D2. Now that you have some information about the phase difference, you also get some information about the most probable detection positions of the entangled partner on the double-slit side of the experiment.
This is the clearest description of "what is really happening" in the DCQE experiment that I've ever seen.
 
  • #33
Ken G said:
... if the information is previously observed, it can never be erased. Erasure works by not observing the previous information, so by not destroying various coherences, which means the information was never extracted, so it is still "in" the experiment (and has therefore not been "erased"). Hence "erasure" is quite a misleading term ...
Good point, imo.

Ken G said:
I think we need to ask, "just what is time symmetric, the reality, or our way of interpreting the reality?" I would say it is the latter ...
Another good point, imo. As well as other good points which I won't reproduce here.

So, do so-called quantum erasure experiments inform wrt the reality underlying instrumental behavior? Or, are we still left with the fundamental conundrum illustrated by quantum double-slit experiments?
 
  • #34
Joncon said:
So if a single photon (half photon??) enters the final BS, which has a 50/50 output, why don't these patterns look like this?

Assuming the setup as shown in DevilsAvocado's post, you always have two fields arriving at the final BS. One from the red path and one from the blue path. As you cannot say that the photon that will be detected later has taken one of these paths, you need to take both of them into account which then gives the interference effect mentioned earlier. If you just send single photons down one of these paths, you will instead get a pattern like the last one you posted.

Ken G said:
Can you say again what that paper is? I was on a thread on here some months back where quite a few self-styled quantum physics experts told me I was nuts to suggest that delayed choice experiments had a classical analog, and indeed the only thing that made the experiment strange was the attempt to connect it with the concept of discrete particles behaving independently of each other-- a notion never encountered in classical wave mechanics.

G. Scarcelli et al., "Random delayed-choice quantum eraser via two-photon imaging", The European Physical Journal D, Volume 44, Number 1, 167-173 (2007). You can also find it on arXiv.

However, one should not take the classical analogy too far. The two-photon probability amplitude can be quite a non-classical entity.

DevilsAvocado said:
To me, this means that if one were to look only at the data for D1/D2 (no path info), we would also see the interference pattern there, while in the data for D3/D4 alone (path info) the interference pattern is lost.

D1 and D2 are bucket detectors. If you just look at them, all you get is some constant count rate, which obviously cannot give any interference pattern. You really need the additional information obtained from coincidence counting at various positions of D0 to get some kind of pattern.
 
  • #35
Cthugha said:
D1 and D2 are bucket detectors. If you just look at them, all you get is some constant count rate, which obviously cannot give any interference pattern. You really need the additional information obtained from coincidence counting at various positions of D0 to get some kind of pattern.

Of course!

I totally missed the |x in the picture, sorry... :blushing:

So what you get is a number of 'blind' (no position info) detections in D2 (only talking about D0+D2 now), and corresponding entangled twin detections in D0, where the position on the x-axis for D0 is stored. For both D0 and D2 we also store the time tag in the coincidence counter.

This enables us to later filter out those photons in D0 which correspond to D2, and this information forms the interference pattern in graph R02.

My question remains: How on Earth could we expect to get any interference pattern out of the data in D0?? Either it should be there all the time, or nothing at all, right?? Why is there even a 'seed' for anything that later could be filtered out into an interference pattern?? And this data at D0 is recorded 8 ns earlier than that of D2?? We could easily extend this to seconds, hours...

I don’t get it...


P.S. Wouldn’t we be able to tell 'which path' from the phase differences in the set up in 'my' picture in post #27?
 
Last edited:
  • #36
Ken G said:
self-styled quantum physics experts

That’s me!

(:smile:)

Ken G said:
In other words, classical wave mechanics has no difficulty with DCQE

Now you’re dreaming again Ken G.
 
  • #37
DevilsAvocado said:
Of course!

And this data at D0 is recorded 8ns earlier than that of D2?? We could easily extend this to seconds, hours...

I don’t get it... P.S. Wouldn’t we be able to tell 'which path' from the phase differences in the set up in 'my' picture in post #27?

hi devilsavocado

thanks for editing my other post.

Not sure what you are asking, however if I guessed correctly (as to what you are asking) then the below information might help.

When the photon at D0 is recorded, the probabilities of its entangled twin photon hitting specific locations/detectors also get fixed (the wave function collapses and the idler photon has a definite state).

Thus we can say, probabilistically:
given that the photon at D0 is recorded at position x,y, the probability of its twin photon arriving at detector D2 is z, etc.

On a separate note: can we say that an avocado is much higher on the morality hierarchy and closer to god/heaven than an advocate?...:)
 
Last edited:
  • #38
San K said:
hi devilsavocado

thanks for editing my other post.

You’re welcome, hope it helped!

San K said:
Not sure what you are asking, however if I guessed correctly (as to what you are asking) then the below information might help.

When the photon at D0 is recorded, the probabilities of its entangled twin photon hitting specific locations/detectors also get fixed (the wave function collapses and the idler photon has a definite state).

I see a problem right there; the idler twin photons don’t hit any "specific locations" (on the x-axis). It’s only a count of photon hits and timing, and depending on the path you could, or could not, tell which slit it went thru. That’s it.

San K said:
Thus we can say, probabilistically:
given that the photon at D0 is recorded at position x,y, the probability of its twin photon arriving at detector D2 is z, etc.

I don’t get it. There is no "z" for D2, only registration of the hit + time.

My confusion is due to the fact that no interference pattern is ever seen in the total pattern of signal photons at D0, meaning that if you "throw away" the other idler detectors you get nothing but 'noise' at D0.

This picture shows data for D0+D2 and D0+D3:

[image: nn2xjk.png (overlay of D0+D2 and D0+D3 coincidence counts)]


Now imagine you would add the data for D1 & D4 also, and remove any colors and 'guiding curves' = there’s your "noise" at D0.

So my question is: I understand that we could get a mix of interference/non-interference patterns in D0; what I don’t understand is how this could be set before any of the idler detectors has recorded anything at all. In this paper the time difference is 8 ns, but this could easily be extended into 'absurdum'...

Unless I’m missing something substantial – something 'weird' is happening here.

San K said:
On a separate note: can we say that an avocado is much higher on the morality hierarchy and closer to god/heaven than an advocate?...:)

Definitely, case closed! :approve: (:smile:)
 
  • #39
ThomasT said:
So, do so-called quantum erasure experiments inform wrt the reality underlying instrumental behavior? Or, are we still left with the fundamental conundrum illustrated by quantum double-slit experiments?
I think we are left with the fundamental conundrum, but we get insights into how to make it not a conundrum: by working on our expectations rather than our physics. In some sense, the discovery of DCQE moves us one step farther away from understanding the double slit-- it is a more sophisticated way to explore the double-slit behavior, but it results in even more sophisticated questions that we cannot answer, rather than answering the original ones! But in another way it moves us closer to not being concerned about our lack of answers, because it actually teaches us something about what an answer is, specifically, it teaches us the difference between an answer to a question that an experiment can give meaning to, and an answer to a question that we imagine has meaning but probably doesn't (because we can't find an experiment to answer it, without changing the question we are answering).

I'd say the main lesson is that what we think happened in some experiment depends on how we analyze and test what happened, because many roads can lead to the same destination-- many types of empirical augmentation can be relevant to the same originally stripped-down empirical investigation, but they all might be consistent with something different happening because the original stripped-down version doesn't distinguish them.

Where all this is most relevant is in regard to the "next theory" after quantum mechanics. The key question is, is there really something wrong with quantum mechanics that needs fixing, or is there something wrong with our expectations for physics that need fixing? If we work on the latter hard enough, we might be able to get quantum mechanics to seem like a "perfect theory," in that it does everything we can expect a physics theory to do. But that doesn't mean there aren't really problems with quantum mechanics, that might make some future generation look back on us, with their new improved theory, and say "I can't believe you were really satisfied with that state of affairs." Just as we look back on those before us.
 
  • #40
DevilsAvocado said:
I understand that we could get a mix of interference/non-interference patterns in D0; what I don’t understand is how this could be set before any of the idler detectors has recorded anything at all. In this paper the time difference is 8 ns, but this could easily be extended into 'absurdum'...
This is the point I'm making, that when you call something "noise", you have no idea what "information" went into it. One man's noise is another man's information, the only difference is how they are slicing that information. You see "no interference pattern", and conclude that no interference occurred at all. But you can't conclude that-- you can only conclude that no interference occurred in the net result. It seems to me that the main message of DCQE is that if our experiment cannot tell us why or how we get the "noise" we get, then we cannot conclude it is "really noise", it might be layers and layers of highly structured information (including interference patterns) that we are simply not extracting in our experiment, whether it be a stripped-down double slit with which-way information, or a more sophisticated DCQE experiment with a delayed choice to extract which-way information. Other experiments (like a delayed choice to "leave in" the which-way information rather than extract it by destroying the necessary coherences) might be able to extract that information, but not by changing anything that happened in the first detection-- but simply by looking at it in a different way. There is no problem with looking at something in a different way any amount of time later-- what actually happened is never changing. (That's what I meant by the causes of WWI, an analogy that doesn't seem to resonate for you. Too classical for your taste, probably.)

But what gets very subtle is when "what actually happened" is itself a construct of our intelligence, based on what information we have about the apparatus, and what expectations we have for physics. So looking underneath the lesson of DCQE, I think we see a deeper message: what actually happened is what the detector showed, period-- our desire to deconstruct further what happened requires different apparatus to disentangle, but if the apparatus is different then something different happened-- even if it is something that must be consistent with the stripped-down version, it doesn't have to be the same thing as happened in the stripped-down version. Gone are the days when we could imagine our measuring apparatus was a "fly on the wall" to what is really happening-- instead, we find that our measuring apparatus is a defining element of what is really happening-- regardless of when in time those defining choices are made. Thus we should not talk so much about what actually happened, or changing what actually happened, we should simply talk about what we can say about what happened, and when we can say it!
 
Last edited:
  • #41
Cthugha said:
Assuming the setup as shown in DevilsAvocado's post, you always have two fields arriving at the final BS. One from the red path and one from the blue path. As you cannot say that the photon that will be detected later has taken one of these paths, you need to take both of them into account which then gives the interference effect mentioned earlier. If you just send single photons down one of these paths, you will instead get a pattern like the last one you posted.

Ah OK, I think I've got it now. So just to confirm I've understood - the idler photon "travels both paths" in the same way the signal photon does, and so interferes with itself?
 
  • #42
DevilsAvocado said:
P.S. Wouldn’t we be able to tell 'which path' from the phase differences in the set up in 'my' picture in post #27?

No. You start with an unknown phase. So you end up with a phase of unknown + x for going one way and unknown + y for the other, which is basically still unknown. The difference y - x has some influence on the result, but gives no which-way info.

DevilsAvocado said:
I understand that we could get a mix of interference/non-interference patterns in D0; what I don’t understand is how this could be set before any of the idler detectors has recorded anything at all. In this paper the time difference is 8 ns, but this could easily be extended into 'absurdum'...

I am not quite sure I get your question right, so forgive me if my answer is way off target. What you get at D0 is basically a superposition of many interference patterns which then add up to noise. As a maybe easier to grasp example imagine a sine wave. You have a device that gives out two identical sine waves as a signal. You switch it on and off and you get sine waves out every time, but the initial phase differs every time you switch it on. So the sine waves sometimes start at the highest point, sometimes at zero, sometimes at the lowest point, sometimes in between and so on, but both of the waves coming out at the same time are exactly equal. Now you switch it on and off many times and perform two different measurements on the two sine waves coming out:

1) For one of them you just add up every sine wave that comes out after each time you switch your device on. If you integrate long enough, all you will get is a straight line: for each sine wave that has its highest value at some point, there will on average also be one that is exactly out of phase, with its lowest value at that very same point. The sum of all of them just gives the straight line.

2) For the other sine wave coming out, you do not measure the shape of the sine wave, but have some measuring device that just gives you the initial phase. Nothing more.

So now you can sort. The sum at 1) is a straight line, but you can filter using the information from 2). For example you could take all runs with the initial phase being zero. If you now take all switch-on processes at 1) which correspond to the subset you chose at 2), you will get a sine wave back. If you pick the subset with a phase of pi, you will get a different sine wave. And so on and so forth.
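
This sine-wave picture is easy to check numerically; a quick sketch (all numbers arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
n_runs = 5000

# Each switch-on emits two identical sine waves with a random initial phase.
phases = rng.uniform(0, 2 * np.pi, n_runs)
waves = np.sin(t[None, :] + phases[:, None])  # one row per switch-on

# 1) Average every run: the random phases wash the sine out to a flat line.
total = waves.mean(axis=0)

# 2) Use the recorded initial phase to keep only the runs near phase zero,
#    then average that subset: the sine wave comes back.
subset = waves[phases < 0.1].mean(axis=0)

print(np.abs(total).max())  # near 0: a flat line
print(subset.max())         # near 1: the sine wave is recovered
```

Picking a different phase window for the subset gives back a correspondingly shifted sine wave, which is exactly the sorting step described above.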

The DCQE experiment of course adds some extras, for example non-local effects enter as you have entangled photons.

DevilsAvocado said:
I see a problem right there; the idler twin photons don’t hit any "specific locations" (on the x-axis). It’s only a count of photon hits and timing, and depending on the path you could, or could not, tell which slit it went thru. That’s it.

That's almost it. One important point is missing. If you cannot tell which slit it went through, this means that the photon will arrive at either D1 or D2. However, the probability is not 50/50. Depending on the position of D0 at which the corresponding signal photon is detected, it is more likely that the idler will end up either at D1 or at D2.


Joncon said:
the idler photon "travels both paths" in the same way the signal photon does, and so interferes with itself?

Yes!
 
  • #43
Thanks Cthugha, that one's been bugging me for a while :smile:

Although this doesn't seem too much more mysterious than the standard double slit experiment now ...
 
  • #44
Joncon said:
Although this doesn't seem too much more mysterious than the standard double slit experiment now ...

Well, there are several implementations of DCQE out there. Some of them are more complicated, some can be broken down and explained in rather simple terms. That depends. By the way, in my opinion the standard double slit can already be pretty mysterious...
 
  • #45
Cthugha said:
By the way in my opinion the standard double slit can already be pretty mysterious...

Oh yeah, I agree. What I mean is that the DCQE (at least the one mentioned here) doesn't seem to add as much mystery as it initially appears to.
 
  • #46
Cthugha said:
I am not quite sure I get your question right, so forgive me if my answer is way off target.

I have a feeling it’s a great answer... if there’s any 'problem' it most probably is located between my ears...

I think I’m almost there, but I’m slightly 'blinded' by years of discussing EPR-Bell, rotating polarizers, and stuff, so I must ask you this, before going any further:

  • Has the 8ns delay, mentioned in the paper, anything to do with anything?

  • What kind of pattern would one get at D0 if we completely remove everything on the idler side (from the PS and on)?

  • Is the entanglement just a 'tool' to get everything "in phase" like your "two identical sine waves"?
 
  • #47
Ken G said:
This is the point I'm making, that when you call something "noise", you have no idea what "information" went into it. One man's noise is another man's information, the only difference is how they are slicing that information.

That sounds like politics to me... :wink:

Seriously, I think my way of expressing myself has caused some 'misunderstanding'. When I say "noise" I mean "pattern noise", or the lack of a "meaningful pattern". The actual measurements (counts/hits) of single photons are not noise to me (okay, there is of course real noise, but it is reduced by the coincidence counting). In other threads we have had 1,500+ comments on EPR-Bell and measurement loopholes; I don’t think I can take another round on this...

Ken G said:
You see "no interference pattern", and conclude that no interference occurred at all.

No, that’s not what I’m saying. I see the interference pattern, clearly. However, I don’t understand how this works and how it got there. (Until Cthugha is about to save my soul ;)

Ken G said:
Other experiments (like a delayed choice to "leave in" the which-way information rather than extract it by destroying the necessary coherences) might be able to extract that information, but not by changing anything that happened in the first detection-- but simply by looking at it in a different way.

I agree that we are not changing anything in already performed measurements, but to me this is not "the problem". Maybe I’m "looking at it" in the wrong way... :smile:

Ken G said:
There is no problem with looking at something in a different way any amount of time later-- what actually happened is never changing.

Agree 100%

Ken G said:
(That's what I meant by the causes of WWI, an analogy that doesn't seem to resonate for you. Too classical for your taste, probably.)

Well, here we disagree. To me there’s a HUGE difference between historical human events and physics, and that is repeatable empirical data that fits the theory, again and again and again and again... until someone comes up with a brighter idea.

But the data never change, and Newton’s apple does not suspend itself in mid-air just because of Einstein, it will continue to fall the same way it always has.

Philosophy, psychology, economy, history, metaphysics, etc, don’t have this luxury and this makes it very different (to me).

Ken G said:
But what gets very subtle is when "what actually happened" is itself a construct of our intelligence, based on what information we have about the apparatus, and what expectations we have for physics. So looking underneath the lesson of DCQE, I think we see a deeper message: what actually happened is what the detector showed, period

Yes, the detector is all we have to hold on to...

Ken G said:
Gone are the days when we could imagine our measuring apparatus was a "fly on the wall" to what is really happening-- instead, we find that our measuring apparatus is a defining element of what is really happening-- regardless of when in time those defining choices are made. Thus we should not talk so much about what actually happened, or changing what actually happened, we should simply talk about what we can say about what happened, and when we can say it!

To us the apparatus must be real (even if theory eventually says otherwise on a more fundamental level). The value the apparatus shows must be real, and we must be able to agree on this value.

If it shows 120 photons, then it is 120 photons to everyone. Not with respect to this and that. This is what we got, period.

If we give this up, we have nothing, absolutely nothing, and science becomes 'philosophical gibberish' – "Please define 120!"


(I’m not saying every measurement is 'perfect', but I think we have to assume that they are 'reasonable' and as good as it gets, in case of real science.)
 
  • #48
DevilsAvocado said:
I think I’m almost there, but I’m slightly 'blinded' by years of discussing EPR-Bell, rotating polarizers, and stuff, so I must ask you this, before going any further:

OK I'll take a punt, and treat it as a test until someone with more knowledge can come back with the proper answers :wink:

  • Has the 8ns delay, mentioned in the paper, anything to do with anything?
I think this is just the bit which puts the "Delayed" in DCQE. So we send our photons through the double slit, but then can choose to "erase" the which path information 8ns after the actual detection.

  • What kind of pattern would one get at D0 if we completely remove everything on the idler side (from the PS and on)?
I would imagine the pattern would be completely unchanged as there is still the possibility, in principle, to obtain which path information.

  • Is the entanglement just a 'tool' to get everything "in phase" like your "two identical sine waves"?
Again, this just enables us to retrieve or "erase" which path information *after* the signal photons have been detected, which wouldn't be possible just using single photons.
 
  • #49
DevilsAvocado said:
No, that’s not what I’m saying. I see the interference pattern, clearly. However, I don’t understand how this works and how it got there. (Until Cthugha is about to save my soul ;)
That's what I have been talking about, the answer to the "how it got there." The answer is, it was always there, it just wasn't discernible-- it looks like noise when the experiment is not able to separate it.

It's a bit like a code-- if you see a coded message, it might look like complete gibberish, no rhyme or reason there and certainly not a message. But if you have the decoder, the message pops right out. You don't ask "where did this message come from, it was complete gibberish a moment ago-- have I done something that propagated a signal into the past and turned gibberish into a message being sent?" The time later that you decode the message is completely irrelevant, it could be 100 years later, because the message was always there. You didn't change it, you decoded it.

That's what correlating the entanglements does. But you can "decode" it in several ways, based on what you choose to do with the entanglements. Do one thing, and the message is still gibberish-- you haven't extracted that information (you extracted some other information instead, perhaps some other message that now makes sense to you). Do something different with the entanglements, and it is like using a different cypher. The message pops right out, any amount of time later.
DevilsAvocado said:
Well, here we disagree. To me there’s a HUGE difference between historical human events and physics, and that is repeatable empirical data that fits the theory, again and again and again and again... until someone comes up with a brighter idea.
It's just an analogy, but I think it is a good one. The point is, if we stick to a "just the facts ma'am" approach, we have a bunch of events that led up to WWI, and we have a bunch of photons hitting detectors at various times. That's it, nothing more. But we are not satisfied, we want to seek reasons for why these events transpired, what was the "cause" of the presence or absence of patterns. Right away we are telling a story-- we've left the dry narrative of photons hitting detectors and people knocking off Archdukes, and we are saying "this led to that." It isn't history any more, it isn't physics any more-- yet we still call it history, and physics, because in fact this is what we want to know about history and physics. But our means of analysis has entered the picture-- we no longer are dealing in irrefutable empirical data, we have invoked a process of description, and it need not be unique.

That's the key point: the different things we do with the entangled pairs, long afterward, are like choosing different processes for describing what happened in the original data. No matter which process we choose, we still have to explain the same initial data, but the way we explain it can be very different. That's quantum erasure, and it's also historical analysis-- at least, those are the features they share. There are of course also differences!
DevilsAvocado said:
But the data never change, and Newton’s apple does not suspend itself in mid-air just because of Einstein, it will continue to fall the same way it always has.
Ah, but the first half of your sentence has nothing to do with the second! The data never change, true, but whether or not the apple continues to fall is a description of what happened to the apple, it isn't data! Newton says the apple fell, Einstein says the apple ceased to be accelerated by the branch. A totally different story about what happened to the apple, both consistent with the data. So the data did not change when Einstein came along, but what "happened to the apple" certainly did change with Einstein! Because what happened is a construct, and changes in information, centuries later, can change that construct dramatically.
DevilsAvocado said:
Philosophy, psychology, economy, history, metaphysics, etc, don’t have this luxury and this makes it very different (to me).
But physics is actually not so different-- it doesn't have that luxury either. All that is different is the precision that is possible, and the scale where we encounter just where that "luxury" breaks down.

DevilsAvocado said:
To us the apparatus must be real (even if theory eventually says otherwise on a more fundamental level). The value the apparatus shows must be real, and we must be able to agree on this value.
Yes, the value the apparatus shows-- but not why it shows it. Not whether or not interference occurred, not which slit the particle went through. Those are not part of the data until we make the choice to make them part of the data-- at which point our description of what happened also changes, even long after the original experiment in which the happening happened.
DevilsAvocado said:
If it shows 120 photons, then it is 120 photons to everyone.
Certainly, but that's not "what happened". We don't say "120 photons hit a detector in this here pattern", we say "no two-slit interference". The latter is not 120 photons, it is a kind of judgement about what happened, and that's what quantum erasure shows is not a unique thing, and can change a century later without actually changing anything at all but our mode of analysis of the original happening. This is no minor point-- quantum mechanics is extending to physics the much more general rule that our descriptions of what happened are dependent on our means of establishing what happened.
 
  • #50
Joncon said:
OK I'll take a punt, and treat it as a test until someone with more knowledge can come back with the proper answers :wink:

Thanks, much appreciated!

Joncon said:
I think this is just the bit which puts the "Delayed" in DCQE. So we send our photons through the double slit, but then can choose to "erase" the which path information 8ns after the actual detection.

Okay... interesting... but if I got this right, the entanglement does not affect the outcome at D0 one bit, right? So what in fact is 'delayed' is the choice to measure "which path", or not, in the "cloned twin beam", right?

If I understand this right, non-locality is not the crucial thing here, but a "clone copy" of the signal beam, right?

Joncon said:
I would imagine the pattern would be completely unchanged as there is still the possibility, in principle, to obtain which path information.

This is, I think, the 'Gordian Knot' for me... Why do we get a mixture of interference/non-interference patterns at D0? What causes it? There’s no "flip-flopping gate" at the double slit, is there? I don’t get it. In a normal experiment we would get an interference pattern or no interference pattern, not both, right??

Joncon said:
Again, this just enables us to retrieve or "erase" which path information *after* the signal photons have been detected, which wouldn't be possible just using single photons.

This I get, but as you see in Cthugha's answer we could also do it with another mechanism of two "cloned waves"... I think I’m 'over-interpreting' the role of entanglement in this experiment... I don’t know...


(Ken G, get back later...)
 