Call for experiment: Delayed Choice and Quantum Zeno Effect

In summary: historical experiments have observed the Quantum Zeno Effect, in which a quantum system remains in a particular state if measurements are taken in quick succession. The Delayed Choice Quantum Eraser experiment confirms that the choice to measure can be delayed. A proposal for an experiment combining both QZE and DCQE involves using ultracold beryllium gas in a Penning trap, with the hypothesis that erasing the information in the measurement pulse will show a decline or absence of the QZE. These results would test whether the physical act of measurement is the causative factor in the suppression of state transitions, or whether it is the leaking of information into the larger environment. Specific experiments, such as the 1990 Wineland/Itano Penning trap experiment, are referenced in the discussion below.
  • #1
hyksos
TL;DR Summary
Delayed Choice Quantum Eraser and the Quantum Zeno effect are usually tested in isolation. Here we propose an experiment that combines both of them. This experiment could yield testable results.
In several historical experiments, measurement back-action has been shown to suppress a system's transitions to other states, especially when measurements are taken at high frequency in time. This phenomenon has become known as the Quantum Zeno Effect. In short, a quantum system will remain locked into a particular state if successive measurements of that state are performed quickly. In practice this means cold gases do not transition (or "decay") out of their excited states.

The Delayed Choice Quantum Eraser (hereafter DCQE) is another experiment. It confirms that a choice to measure can be delayed. In particular, entangling the results of the measurement with a third system can "erase" the information that the signal was holding about the original system's measurement. If the originally-measured system was undergoing unitary evolution, it will continue to do so, provided the information-storing system is "erased" downstream. DCQE is a contentious topic, since the original system was physically measured.

This is a proposal for an experiment which combines both DCQE and QZE in the same apparatus.

For the QZE portion, begin with ultracold beryllium gas in a Penning trap, following the Wineland/Itano et al. experiment of 1990. In that apparatus, an ultraviolet pulse constituted a "measurement" of the gas. The quantity of interest is the transition probability between discrete internal states of Be+, read out via fluorescence. The results showed a smooth (quadratic?) falloff in the transition probabilities as a function of measurement frequency. In other words, increasing the frequency of measurement clearly suppressed the state transitions of the Be+.
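To make the expected scaling concrete, here is a minimal sketch (my own toy model, not the actual Be+ level scheme) assuming simple Rabi oscillation between two levels, interrupted by n equally spaced projective measurements:

```python
# Toy model of the Quantum Zeno Effect: a two-level system driven at Rabi
# frequency omega for total time T, projectively measured n times. The
# survival probability per interval is cos^2(omega*T/(2n)); surviving all
# n measurements suppresses the net transition probability roughly as 1/n.
import numpy as np

omega = 2 * np.pi   # Rabi frequency, arbitrary units (made-up value)
T = 0.5             # total drive time (made-up value)

for n in [1, 2, 4, 8, 16, 32]:
    p_survive = np.cos(omega * T / (2 * n)) ** (2 * n)
    print(f"n={n:2d}  P(transition) = {1 - p_survive:.4f}")
```

Increasing n pins the system in its initial state, which is the falloff described above.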

For the DCQE portion, we note that the measurement pulse in the Wineland apparatus scatters a small number of photons, roughly on the order of 100 (Wineland reported 72). Those photons would need to be "erased" somehow. In principle, the Kim/Yu et al. apparatus could be used, with idler photons, signal photons, and a coincidence counter, although there could be problems involving the coherence of the light's direction, and this may require two identical Penning traps rather than one. These details would be hashed out by experimentalists and other experts in optics.

Hypothesis: This is a controlled experiment. The control is whether or not the ultraviolet pulse information is erased downstream. We expect that erasing the information in the ultraviolet pulse will show a decline in the QZE, or better yet its complete absence. This is a testable prediction. We could gather data from this and report on findings. (The possibility of publishing is real, and this experiment could be carried out with help from grants.)

Import and discussion: The consequence of these experimental results is to test whether the physical act of measurement is the causative factor in the suppression of the state transitions. We presume it is not. We propose that the leaking of information about the Be+ atoms' states into the larger environment is the true causative factor for this suppression. If this information is erased before leaking out, the Quantum Zeno Effect will disappear. By means of rigorous hypothesis testing, we could show that whether or not the Be+ gas was measured has no effect on the transition probabilities; only when the information stored in the ultraviolet photons is somehow recorded will the state transitions show suppression.

Your thoughts?
 
  • #2
First: A DCQE does not teach us what you are implying. The quantum expectation in any Delayed Choice experiment - such as in either Delayed Choice Entanglement Swapping or your DCQE - is that the outcome does NOT depend on whether the choice is delayed or not. In other words, theory and experiment are in sync. The only independent element you can test is whether or not some quantum state can be restored to a prior point in time (quantum erasure of an apparent measurement). For a Zeno experiment, I'm not sure what that buys you.

Second: Normally, I would expect to see references to specific experiments to start a thread like this. I found your 1990 reference, but it was behind a paywall. This 2006 article by Itano is not, and it is a pretty good summary of the legacy of the 1990 experiment.

https://arxiv.org/abs/quant-ph/0612187
 
  • Like
Likes mattt, PeterDonis and PeroK
  • #4
hyksos said:
[..] whether the physical act of measurement is the causative factor in the suppression of the state transitions. [..] We propose that the leaking of the information [..] into the larger environment is the true causative factor for this suppression.

But measurement *is* the leaking of information into the larger environment...? You're saying it's not caused by measurement, it's caused by measurement.

Anyway, I would consider this experiment to be equivalent to a straightforward application of the deferred measurement principle and its generalizations. In particular, when a measurement's result is not used (which is the case for the measurements used for the Zeno effect), that measurement can be replaced by a CNOT onto a fresh ancilla qubit in the |0> state without changing the output statistics of the quantum circuit. You can prove this as a theorem. Your experiment is just checking this theorem in a specific case that happens to involve the Zeno effect and delayed choice.
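Here is a minimal NumPy sketch of that replacement (my own toy example, with made-up amplitudes): a projective measurement whose result is discarded leaves the same reduced state as a CNOT onto a fresh |0> ancilla that is then ignored.

```python
# Toy check of the deferred-measurement replacement described above.
# Assumption: arbitrary single-qubit state a|0> + b|1>; amplitudes made up.
import numpy as np

a, b = 0.6, 0.8j
psi = np.array([a, b])
rho = np.outer(psi, psi.conj())

# (1) Measure in the computational basis and throw the result away:
#     off-diagonal terms vanish, leaving a classical mixture.
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
rho_measured = P0 @ rho @ P0 + P1 @ rho @ P1

# (2) CNOT onto a fresh ancilla in |0>, then ignore (trace out) the ancilla.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # control = system qubit
joint = np.kron(rho, np.diag([1.0, 0.0]))       # system (x) |0><0|
joint = CNOT @ joint @ CNOT.conj().T
rho_deferred = np.trace(joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.allclose(rho_measured, rho_deferred))  # True: identical reduced states
```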
 
  • Like
Likes hyksos
  • #5
Strilanc said:
But measurement *is* the leaking of information into the larger environment...? You're saying it's not caused by measurement, it's caused by measurement.
(This is kind of a sidebar here, but) how far down can we isolate the exact phenomenon of "measurement"?

Let me toss out two plausible answers off the top of my head just to spur discussion. Feel free to reject, correct, and criticize as you see fit.

(1) 'measurement' occurs when an exact duplicate copy of a quantum state is made. e.g. Feynman's assertion that measurement is copying.

(2) 'measurement' occurs when information from a formerly isolated system reacts causally with a large system undergoing a thermodynamically IRreversible process.

Or perhaps (1) and (2) are equivalent in some strange way. Your thoughts?
 
  • #6
hyksos said:
(1) 'measurement' occurs when an exact duplicate copy of a quantum state is made.
This is impossible by the no cloning theorem.
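For reference, the standard linearity argument behind this (a textbook sketch, not specific to this thread): suppose a single unitary ##U## could clone arbitrary states, so that ##U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle## for every ##|\psi\rangle##. Then for any two states,

$$\langle\phi|\psi\rangle = \langle\phi|\langle 0|U^\dagger U|\psi\rangle|0\rangle = \left(\langle\phi|\psi\rangle\right)^2,$$

which forces ##\langle\phi|\psi\rangle \in \{0,1\}##: only identical or orthogonal states can be copied, so no device clones an unknown state.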

hyksos said:
e.g. Feynman's assertion that measurement is copying.
Reference, please?

hyksos said:
(2) 'measurement' occurs when information from a formerly isolated system reacts causally with a large system undergoing a thermodynamically IRreversible process.
This is more or less the modern decoherence view, yes.
 
  • Like
Likes vanhees71
  • #7
I come from quantum computing, where measurement is *extremely* well delineated. In your circuit diagrams, it's the box that has an M in it :biggrin:. It's sometimes defined as specifically the operation that reports whether the qubit was ON or OFF (instead of something more general). More general types of measurement, like Pauli product measurements and weak measurements and POVMs and whatnot, are then derived from this.

From this starting point, you can make a whole bunch of identities about how to rewrite circuits containing measurements while preserving the circuit's observable statistics. Things like "Measure(qubit) == Reset(ancilla) then ControlledNot(qubit, ancilla) then Measure(ancilla)" and the deferred measurement principle. You then gradually internalize that measurement is indistinguishable from controlled operations targeting low-entropy qubits. Which makes the many-worlds interpretation more and more compelling, because it basically amounts to asserting that equivalence.
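For concreteness, a toy check of that first identity (my own sketch, made-up amplitudes): measuring the qubit directly, or CNOT-ing it onto a reset ancilla and measuring the ancilla, produces the same outcome distribution.

```python
# Toy check: Measure(qubit) == Reset(ancilla), CNOT(qubit, ancilla),
# Measure(ancilla), as far as outcome statistics are concerned.
import numpy as np

psi = np.array([0.6, 0.8])            # made-up system state 0.6|0> + 0.8|1>

p_direct = np.abs(psi) ** 2           # Born rule: direct measurement stats

ancilla = np.array([1.0, 0.0])        # Reset: ancilla in |0>
joint = np.kron(psi, ancilla)         # |system> (x) |ancilla>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ joint
amps = joint.reshape(2, 2)            # axes: (system, ancilla)
p_ancilla = np.sum(np.abs(amps) ** 2, axis=0)

print(p_direct, p_ancilla)            # both [0.36, 0.64]
```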
 
  • Like
Likes protonsarecool
  • #8
PeterDonis said:
Reference, please?

{{ This correlating of many copies of the same thing is really called "measurement". Because when we measure something we are able to make records and write it down somewhere else. So if we wanted to get the answer we could look at the writing. So making a copy is analogous to making a record. I could play around with these boxes and throw them away, and I can still find out what this one was by measuring any one of the boxes that is still left. They will all agree.

So this copying process -- in this particular case a copying process which sustains the correlation in button number one -- is called "measuring" what the result of one is. And the measurement of one precludes a perfect correlation in (box) two. It insists, if you've made such a thing, that number 2 is three quarters of the time green and one quarter of the time red. It says the same thing: instead of making the same button measurement on the same box all the time, just make many copies.

I just want to compare making a measurement with these copies. You say, "What's that got to do with it? What's the matter with pushing this button and looking at the light?" Pushing the button and looking at the light really was making copies. Because a light -- a real physical large bulb -- puts photons out in all directions. You, looking at the bulb, can see that it's red, and I'm standing over here and I also see that it's red. He can see it's red. He can see it's red. Everybody can see it's red. That's the example of the correlation. It doesn't matter which box you look at, which photons, over there or over there. They're always the same. That's this kind of correlation.

What we used to talk about, crudely, as "pushing a button and looking at a bulb" was really a mechanism for making this type of correlated copy. }}
( -- Richard Feynman. Esalen Institute Workshop. 1985. )

With the blessing of the moderators, I will post a link to the video of this Feynman lecture where he says the above. It is possible this Esalen video already exists in this forum's archives.
 
  • #9
addendum:

I misspoke earlier in my claim that we are making an exact copy of an entire quantum state. I should have been more precise in my wording. I meant to communicate that an attempt to make exact copies will give you a set of correlated copies in nature, which will be decorrelated to a degree consistent with the Uncertainty Principle. The act of attempting to do this (and failing on exactness, due to the U.P.) is what is called "measurement".

I cannot edit posts after a timer on this forum, but I would be happy to perform that edit if given the chance. Thank you.
 
  • #10
hyksos said:
With the blessing of the moderators, I will post a link to the video of this Feynman lecture where he says the above.
Don't. This is not a peer-reviewed paper or textbook, so, as entertaining as Feynman's videos are, it is not a valid reference here. If you want to back up your claims about what "measurement" is (which to me are doubtful: while some measurements might be usefully viewed as attempting to make correlated copies, I doubt that all measurements can be so viewed), you will need to find a valid reference (textbook or peer-reviewed paper) that supports your claim.
 
  • #11
PeterDonis said:
I doubt that all measurements can be so viewed ... you will need to find a valid reference (textbook or peer-reviewed paper) that supports your claim.
Funny that you should mention textbooks and peer review: if we stick strictly to textbooks and peer-reviewed sources, we can't really talk about "measurement" in any watercooler conversational sense.

Is it mostly the case that textbooks are not going to mention measurement? I mean, if we restrict our attention exclusively to the raw mathematical formalism, there is nothing in there about a discontinuous collapse of the wave function.
 
  • #12
hyksos said:
Is it mostly the case that textbooks are not going to mention measurement? I mean, if we restrict our attention exclusively to the raw mathematical formalism, there is nothing in there about a discontinuous collapse of the wave function.
I have to disagree with that - just about every textbook goes beyond the formalism here. Any discussion that involves "collapse", "wave function reduction", "state preparation", the Born rule, or even eigenstates of the observable is going to either (a) include a discontinuous collapse or (b) explicitly select a collapse-free interpretation such as MWI. And of course there is no shortage of peer-reviewed papers fishing in these waters.
 
  • Like
Likes vanhees71
  • #13
hyksos said:
Is it mostly the case that textbooks are not going to mention measurement?
How many QM textbooks have you actually read? Every one I have read talks about measurement. And as @Nugatory has said, there are also plenty of peer-reviewed papers that discuss measurement.
 
  • Like
Likes vanhees71
  • #14
Strilanc said:
I come from quantum computing, where measurement is *extremely* well delineated.
Which is the same starting point that led Mermin to write Copenhagen Computation: How I Learned to Stop Worrying and Love Bohr.

Strilanc said:
From this starting point, you can make a whole bunch of identities about how to rewrite circuits containing measurements while preserving the circuit's observable statistics. Things like "Measure(qubit) == Reset(ancilla) then ControlledNot(qubit, ancilla) then Measure(ancilla)" and the deferred measurement principle.
And recently computer scientists showed "that subject to a few caveats, anything calculable with intermediate measurements can be calculated without them. Their proof offers a memory-efficient way to defer intermediate measurements ..."

Strilanc said:
You then gradually internalize that measurement is indistinguishable from controlled operations targeting low-entropy qubits. Which makes the many-worlds interpretation more and more compelling, because it basically amounts to asserting that equivalence.
But then you gradually stop publishing your improved understanding in easily understandable images. First you stop publishing on your blog, and later you might even stop formulating it in words understandable by educated listeners. Of course, this is an unfair game to a certain extent. You know that your understanding is nothing special, but is mostly shared by "your community" with a similar background. The popularizers somehow seem to have a free license to propagate misunderstandings. If, for example, Sabine Hossenfelder publishes something like The Trouble with Many Worlds, her misunderstanding of the many-worlds interpretation will have a much bigger impact than some blog post by a young researcher making clear, with easily understandable examples, why that way of understanding MWI must be wrong.

In my opinion, a huge part of the funding for quantum computing research was public money, and part of the hoped-for return on investment was an improved understanding of quantum mechanics. And this includes perspectives of young researchers, like your approach of distinguishing "before-hand experience" descriptions from "in-the-moment experience" descriptions:
Although I think of my approach as "beforehand instead of in-the-moment" thinking, as opposed to "density matrices".

Consider a Markov process (a simple state machine with probabilistic transitions). The in-the-moment experience of running a Markov process is different from the before-hand experience of describing expected states. In-the-moment, nothing ever settles down. You're forever bouncing from state to state. Before-hand, the description *does* settle down. Every Markov process approaches some equilibrium distribution. Before-hand your description of the process settles, but in the moment the process never settles. Part of understanding Markov processes is understanding the power of these two views, and when to use one or the other. It also involves realizing you're using two subtly different meanings of "settle".

Quantum systems have the same settle-vs-no-settle dichotomy, but even more so due to entanglement. And I think you see glimmers of which view people lean on when they write about things and name things. "Quantum steering" has a very in-the-moment view of entanglement. Whereas the no-communication theorem comes from a before-hand view.

Pop science articles are basically always written from the in-the-moment view, which is unfortunate because that makes it very difficult to explain or integrate before-hand concepts such as the no communication theorem.
Of course, this is basically just the minimal statistical interpretation, in a certain sense. But it is formulated in images and words that are understandable by a much larger audience. And it might provide another perspective on the minimal statistical interpretation, one that could allow us to resolve certain disagreements, or at least allow a clearer analysis of contentious points.
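To make the quoted Markov dichotomy concrete, here is a minimal sketch (my own, with a made-up two-state chain): the before-hand description converges to the equilibrium distribution, while an in-the-moment sampled trajectory keeps hopping forever.

```python
# Minimal sketch of the settle-vs-no-settle dichotomy quoted above,
# using a made-up two-state Markov chain.
import numpy as np

rng = np.random.default_rng(42)
# Column-stochastic transition matrix: T[j, i] = P(i -> j).
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Before-hand view: the *description* (a probability vector) settles.
p = np.array([1.0, 0.0])
for _ in range(100):
    p = T @ p
print(p)                               # ~[2/3, 1/3], the equilibrium

# In-the-moment view: the *process* never settles; it hops forever.
state, counts = 0, np.zeros(2)
for _ in range(10_000):
    state = rng.choice(2, p=T[:, state])
    counts[state] += 1
print(counts / counts.sum())           # only the time-average settles
```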
 
  • Like
Likes vanhees71
  • #15
Strilanc said:
I come from quantum computing, where measurement is *extremely* well delineated. In your circuit diagrams, it's the box that has an M in it :biggrin:. It's sometimes defined as specifically the operation that reports whether the qubit was ON or OFF (instead of something more general). More general types of measurement, like Pauli product measurements and weak measurements and POVMs and whatnot, are then derived from this.
This is wildly interesting. We need a whole new thread on this alone.
 
  • #16
gentzen said:
If, for example, Sabine Hossenfelder publishes something like The Trouble with Many Worlds, her misunderstanding of the many-worlds interpretation
What misunderstanding of the MWI do you think Hossenfelder makes in this article? As far as I can tell, her main point is that the MWI does not solve the measurement problem. I think she is correct.

gentzen said:
some blog post by a young researcher making clear, with easily understandable examples, why that way of understanding MWI must be wrong.
I don't see how that post shows how anything Hossenfelder said is wrong. Can you be more specific?
 
  • Like
Likes vanhees71
  • #17
PeterDonis said:
I don't see how that post shows how anything Hossenfelder said is wrong. Can you be more specific?
Sabine Hossenfelder said:
many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome
Hugh Everett didn't say that. Bryce DeWitt said such things. Apparently Everett already tried to explain to DeWitt why this picture is "misleading" (source: David Deutsch). Everett's and Deutsch's "explanation" revolves around the worlds' potential for interference, and why that is important. (I'm not sure whether this is related to ideas like the Elitzur-Vaidman bomb tester.) That post by a young researcher comes from the other side: it assumes that this branching would be literally true, and shows how this would go wrong.
(One way I personally interpret this "going wrong" is that there would be "too much" randomness if this branching were literally true. I mention this here because Sabine Hossenfelder's proposed experimental tests for violations of QM are about comparing the amount of observed randomness against that predicted by QM.)

The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don’t mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time.
The measurement postulate says: Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch.
What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on.
Those are more claims by Sabine Hossenfelder about what the "many worlds people" would say or try to do. Let me address just the last one: what rational choice theory is actually used for is to provide an interpretation of the meaning of probabilities in the context of a deterministic evolution of a universal wavefunction. This is important independent of whether the Born rule can be derived in MWI or must be postulated.
 
  • #18
gentzen said:
Hugh Everett didn't say that. Bryce DeWitt said such things.
Ah, I see. Yes, I agree the "splitting" language is misleading; I argue against it myself fairly often in these forums.

gentzen said:
That post by a young researcher comes from the other side: it assumes that this branching would be literally true, and shows how this would go wrong.
This I'm not so sure about. I don't see any link to the actual math of QM in that post. As I have pointed out in a number of threads in these forums, there is no "splitting" in the actual math of QM at all; the time evolution is unitary. As for interference, the examples given in that post don't involve measurement, so there is no interpretation of QM that would say interference can't occur between what the post is calling "branches" (for example, different possible distributions of trajectories of particles in a gas).
 
  • Like
Likes vanhees71 and gentzen
  • #19
PeterDonis said:
This I'm not so sure about.
In fact, that same blog post says at the end that the only kind of "branching" relevant to the MWI vs. other interpretations, namely "branching" when a permanent, irreversible record is made, does lead to permanent branch splits: those branches don't interfere. The post does avoid the "multiple worlds" or "parallel worlds" language that Hossenfelder uses, so it is an improvement in that respect.
 
  • Like
Likes vanhees71

1. What is the delayed choice experiment?

The delayed choice experiment, originally proposed by John Wheeler as a thought experiment and since realized in the laboratory, explores the concept of wave-particle duality. A photon is sent through a series of mirrors and detectors, and the observer can choose whether to measure the photon's wave-like or particle-like behavior after it has already passed through the mirrors.

2. What is the quantum Zeno effect?

The quantum Zeno effect is a phenomenon in quantum mechanics in which a system's evolution is prevented or slowed down by frequent measurements. The effect rests on the fact that the act of measuring a system affects its state, so sufficiently frequent measurements can prevent or delay its evolution.

3. How are the delayed choice experiment and quantum Zeno effect related?

The delayed choice experiment and quantum Zeno effect are related because both involve the role of observation and measurement in the behavior of quantum systems. In the delayed choice experiment, the choice of what to measure can be deferred until after the photon has passed through the apparatus, while in the quantum Zeno effect, frequent measurements can prevent or slow down a system's evolution.

4. What is the significance of studying the delayed choice experiment and quantum Zeno effect?

Studying the delayed choice experiment and quantum Zeno effect can provide insight into the fundamental principles of quantum mechanics, such as wave-particle duality and the role of observation in quantum systems. It can also have practical applications in fields such as quantum computing and quantum information processing.

5. How are the delayed choice experiment and quantum Zeno effect being studied in modern science?

The delayed choice experiment and quantum Zeno effect are being studied through various experiments in quantum physics using photons, atoms, and other quantum systems. Researchers also use advanced technologies, such as quantum computers and sensitive detectors, to further explore these phenomena and their implications for quantum mechanics.
