Single photon interference time threshold (double slit experiment)

In summary: The thread discusses a paper by Mardari arguing that single-photon self-interference is not directly observable and that its testable implications, in particular the prediction that the interval between consecutive photons should be irrelevant, are contradicted by experiment. Respondents object that interference from two mutually coherent sources is already a classical effect, and that two-photon interference is distinct from self-interference and requires indistinguishable detection amplitudes rather than indistinguishable photons. The discussion then turns to how energy conservation should be understood when waves destructively interfere.
  • #1
my_wan
In the paper (AIP Conf. Proc. 810 (2006), 360):
"What is a quantum really like?" (http://arxiv.org/abs/quant-ph/0312026) said:
Abstract. The hypothesis of quantum self-interference is not directly observable, but has at least three necessary implications. First, a quantum entity must have no less than two open paths. Second, the size of the interval between any two consecutive quanta must be irrelevant. Third, which-path information must not be available to any observer. All of these predictions have been tested and found to be false. A similar demonstration is provided for the hypothesis of quantum erasure. In contrast, if quanta are treated as real particles, acting as sources of real waves, then all types of interference can be explained with a single causal mechanism, without logical or experimental inconsistencies.

Mardari outlines three predictions of quantum self-interference and presents empirical evidence that appears to refute all three. The implications of the paper are interesting, but of these I am particularly interested in the experiments falsifying the second prediction in the paper: "The size of the interval between any two consecutive quanta must be irrelevant".

What objections could be formulated against this time threshold between individual photon emissions, after which the photons can no longer be seen to 'apparently' self-interact? And is this convincing evidence that self-interaction does not occur and that the apparent self-interaction is the result of overlapping pulse widths of separate photons?
 
  • #2
The paper is interesting, but I am quite skeptical of its conclusions. I haven't looked at it in any detail, but one thing struck me right away: it is pretty obvious why you would get an interference pattern if you shine two different lasers at two different slits, as it is a completely classical result. One should not expect self-interference to be involved in that situation, so there should never be a rule that says all interference is self-interference; Maxwell's equations already violate that claim. It is particularly clear in the laser case, because the classical fields of those lasers are coherent, and if the lasers have the same frequency (a crucial requirement, which I'll bet that experiment obeyed), then the classical E&M fields in the two slits maintain a single (if arbitrary) phase relationship throughout the detection of a photon. Such a single phase relationship is all you need to get interference of classical E&M fields, and by the correspondence principle in reverse, that requires an interference pattern in repeated quantum experiments. A prediction from this analysis is that if they used lasers of two different colors, shining them on different slits would not produce an interference pattern.
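
As a sketch of that classical argument (my own addition, assuming equal-amplitude scalar fields from the two slits; this is standard optics, not anything taken from the paper):

$$E_{1,2}(x,t) = E_0\cos\!\big(\omega_{1,2}\,t - \phi_{1,2}(x)\big), \qquad I(x) \propto \big\langle\,[E_1(x,t)+E_2(x,t)]^2\,\big\rangle_t .$$

For $\omega_1 = \omega_2$ the time average leaves a stationary cross term,

$$I(x) \propto E_0^2\,\big[\,1 + \cos\!\big(\phi_1(x)-\phi_2(x)\big)\big],$$

so the fringes are set entirely by the path-difference phase $\phi_1 - \phi_2$. For $\omega_1 \neq \omega_2$ the cross term oscillates at the beat frequency $\omega_1 - \omega_2$ and averages to zero over any detection time long compared with $2\pi/|\omega_1 - \omega_2|$, which is the content of the "two different colors" prediction above.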
 
  • #3
The problem in QM when you start talking about interference between separable bosons is that it is not supposed to happen. This is why you lose interference patterns when you have path information. Without self interaction it also raises the question of how energy is conserved. If separable photons interact and mix such that two photons cancel, what happened to the energy associated with them?

Here is a paper referenced in the above paper:
http://arxiv.org/abs/quant-ph/0304086
Also in: JOSA B, Vol. 22, Issue 2, pp. 493-498 (2005)
"Quantum interference with distinguishable photons through indistinguishable pathways" said:
Here, we report an interference experiment in which the detected photons retain their distinguishing information. The photons approach the beamsplitter at different times, with different polarizations, and may even have different wavelengths. They propagate directly to the detectors without passing through compensating/masking elements and retain their distinguishing properties until being absorbed by the detectors. Nonetheless, interference is observed in the coincidence rate. We explain this counterintuitive result as the interference between two-photon wavepackets.

This is the kind of stuff I am trying to track down. I am not sure where to stand on these issues either. Here is another that I am even more ambivalent about:
http://arxiv.org/abs/physics/0504166
Also discussed here:
http://arxiv.org/abs/0710.4873
Abstract of "The shadow of light: evidences of photon behaviour contradicting known electrodynamics":
We report the results of a double-slit-like experiment in the infrared range, which evidence an anomalous behaviour of photon systems under particular (energy and space) constraints. The statistical analysis of these outcomes (independently confirmed by crossing photon beam experiments in both the optical and the microwave range) shows a significant departure from the predictions of both classical and quantum electrodynamics.

I have my own provisional attitude about these things, but I am more interested in the issues as they relate to the standard model.
 
  • #4
my_wan said:
The problem in QM when you start talking about interference between separable bosons is that it is not supposed to happen. This is why you lose interference patterns when you have path information. Without self interaction it also raises the question of how energy is conserved. If separable photons interact and mix such that two photons cancel, what happened to the energy associated with them?

One should note that two-photon interference is something completely different from self-interference. You also never have the situation of two photons cancelling each other out. You get a dip in the joint detection rate at one position and a peak at a different one. It is also not necessary that the two photons be indistinguishable to see two-photon interference. The important question is whether the probability amplitudes leading to a joint two-photon detection event are indistinguishable.

This has been demonstrated experimentally for example in:

Pittman et al., "Can Two-Photon Interference be Considered the Interference of Two Photons?", Phys. Rev. Lett. 77, 1917–1920 (1996)
Bennett et al., "Interference of dissimilar photon sources", Nature Physics 5, 715–717 (2009), and
Flagg et al., "Interference of Single Photons from Two Separate Semiconductor Quantum Dots", Phys. Rev. Lett. 104, 137401 (2010).
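
For reference, a standard textbook form of that coincidence dip (my own addition, not taken from the papers above; it assumes Gaussian single-photon wavepackets of coherence time $\tau_c$ arriving at a 50:50 beamsplitter with relative delay $\Delta\tau$):

$$P_{\text{coinc}}(\Delta\tau) = \tfrac{1}{2}\Big(1 - \mathcal{V}\,e^{-\Delta\tau^2/\tau_c^2}\Big),$$

where $\mathcal{V} \le 1$ measures how indistinguishable the two amplitudes leading to a joint detection are. The dip reaches zero only for $\mathcal{V} = 1$ at $\Delta\tau = 0$; classical fields are limited to a dip visibility of 1/2, which is why a deeper dip is taken as a signature of two-photon quantum interference.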

my_wan said:
Mardari outlines three predictions of quantum self-interference and presents empirical evidence that appears to refute all three. The implications of the paper are interesting, but of these I am particularly interested in the experiments falsifying the second prediction in the paper: "The size of the interval between any two consecutive quanta must be irrelevant".

I did not read the whole paper. I stopped when he starts discussing reference number 3, the experiment by Charles Santori, and turns the experimental results around, claiming the opposite of what actually happened. In the original experiment the two-photon interference was used as an argument that the two consecutive photons are indistinguishable. Now he just claims that they are distinguishable without giving a good argument and takes that as proof that distinguishable photons interfere in general. This way of reasoning is ridiculous.
 
  • #5
Thanks. It will take me a bit to work through those papers, but that is what I was looking for, as well as any criticisms of the above papers. I am a bit bleary-eyed from working on a program I think I finally figured out how to get working. I will read through more this evening.
 
  • #6
my_wan said:
The problem in QM when you start talking about interference between separable bosons is that it is not supposed to happen. This is why you lose interference patterns when you have path information.
You make a good point, but I think I see the answer now. I'll note that in radio astronomy, for example, it is routine to measure phase-correlated information from two radio dishes that are more widely separated than the photon coherence length. It's a classical interference effect; it wouldn't be there one photon at a time. I expect the same is true of the laser experiment: if they turned their intensities down so that the two lasers, aimed at different slits, were sending photons through one at a time, there would be no interference pattern. I guess this does expose an interesting subtlety in the correspondence principle. I suspect the answer lies in the indistinguishability of the photons: the analysis that suggests you would have to have self-interference treats the photons as if they had their own individual wave functions, but if the two lasers have the same frequency (a requirement to get the interference pattern), the photons they emit are indistinguishable. That I suspect is also why which-path information destroys the pattern: that information is tantamount to tagging the photon and breaking the indistinguishability.
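
To make the "one photon at a time" question concrete, here is a small Monte Carlo toy (my own sketch with assumed numbers, not a model of any experiment cited in this thread). Each detection is drawn from the single-photon probability on the screen; with a fixed relative phase between the two sources the fringes build up event by event, while re-randomizing the phase from photon to photon (one way of modeling sources with no stable phase relationship) washes the pattern out.

```python
import numpy as np

# Toy two-source interference, small-angle / far-field approximation.
wavelength = 650e-9        # m, assumed wavelength
slit_sep = 100e-6          # m, assumed source (slit) separation
screen_dist = 1.0          # m, distance to the screen
x = np.linspace(-0.02, 0.02, 2001)   # screen positions in metres

def detection_pdf(phi):
    """Single-photon detection probability across the screen
    for a given relative phase phi between the two sources."""
    delta = 2 * np.pi * slit_sep * x / (wavelength * screen_dist)
    p = 1.0 + np.cos(delta + phi)
    return p / p.sum()

rng = np.random.default_rng(0)
n_photons = 100_000

# Case 1: fixed relative phase -> fringes survive even though
# the photons arrive one at a time.
hits_fixed = rng.choice(x, size=n_photons, p=detection_pdf(0.0))

# Case 2: relative phase re-randomized from photon to photon.
# For the accumulated histogram this is equivalent to drawing from
# the phase-averaged distribution, which is flat, so fringes wash out.
pdf_avg = np.mean(
    [detection_pdf(rng.uniform(0, 2 * np.pi)) for _ in range(500)], axis=0
)
hits_random = rng.choice(x, size=n_photons, p=pdf_avg / pdf_avg.sum())

def visibility(hits):
    """Crude fringe visibility (max - min)/(max + min) from a histogram."""
    counts, _ = np.histogram(hits, bins=80, range=(x[0], x[-1]))
    return (counts.max() - counts.min()) / (counts.max() + counts.min())

print("visibility, fixed relative phase:", round(visibility(hits_fixed), 2))
print("visibility, randomized phase    :", round(visibility(hits_random), 2))
```

Whether two real attenuated lasers behave like case 1 or case 2 at the single-photon level is exactly the kind of question the experiments discussed in this thread are probing.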

Without self interaction it also raises the question of how energy is conserved. If separable photons interact and mix such that two photons cancel, what happened to the energy associated with them?
You know there's no energy associated with the interference pattern, because that comes out classically-- the classical equations show the interference pattern, and conserve energy.
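
A one-line check of that statement (my addition, using the same equal-amplitude classical fields as in post #2): the cross term oscillates across the screen, so integrating the fringe pattern over many fringes gives

$$\int \cos\!\big(\phi_1(x)-\phi_2(x)\big)\,dx \approx 0 \quad\Rightarrow\quad \int I_{\text{tot}}(x)\,dx \approx \int I_1(x)\,dx + \int I_2(x)\,dx ,$$

i.e. interference only redistributes the energy between bright and dark fringes; the total power delivered to the screen is the sum of what the two sources supply.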

I have my own provisional attitude about these things, but I am more interested in the issues as they relate to the standard model.
I think it's just a question of looking deep enough at the hidden assumptions that are unwarranted.

ETA: I now see that Cthugha mentions indistinguishability as well!
 
  • #7
Ken G said:
You know there's no energy associated with the interference pattern, because that comes out classically-- the classical equations show the interference pattern, and conserve energy.
But in the destructive interference I am talking about here, there is no interference pattern. Photons are their own antiparticles, hence an identical photon 180 degrees out of phase should leave no photon (or interference pattern) at all if their locations are matched and they interact. Something more akin to an anti-laser, where energy is conserved and the photons drained off as heat.

The problem comes up in the context of the wavefunction not being defined in ontological terms. In classical fields with an ontologically real particulate substructure of the waves, conserving energy with canceled waves is trivial. When the substructure is only a mathematical device lacking ontologically real substance, the disappearance of a wave leaves no substance with which to define what happened to the energy associated with it.
Ken G said:
I think it's just a question of looking deep enough at the hidden assumptions that are unwarranted.
That is why I am so interested in these phenomena. It looks like there is, at least in principle, enough data there to clean out some otherwise possible assumptions. Yet the vagaries of the data leave me grasping at uncertainties. I do not even know how much to trust some of the empirical data in some cases, leaving the interpretations of various authors even more suspect.
Ken G said:
ETA: I now see that Cthugha mentions indistinguishability as well!
Yes, this is a core issue, which relates to your first paragraph that I did not quote, but I have a few things to say about it here. You make the reasonable judgment that two lasers must have the same frequency to get an interference pattern. Classically this is not required, and directional sound transmission has been done using the nonlinear interaction of ultrasound, which cannot be heard. Yet, when pointed at somebody, the nonlinear interactions produce a sound that can be heard. It sounds as if the air around your head is talking to you. It is also reported in the link quoted above that different frequencies can interfere or modulate each other. Specifically:
"Quantum interference with distinguishable photons through indistinguishable pathways" (PDF: http://arxiv.org/abs/quant-ph/0304086) said:
Here, we report an interference experiment in which the detected photons retain their distinguishing information. The photons approach the beamsplitter at different times, with different polarizations, and may even have different wavelengths. They propagate directly to the detectors without passing through compensating/masking elements and retain their distinguishing properties until being absorbed by the detectors. Nonetheless, interference is observed in the coincidence rate. We explain this counterintuitive result as the interference between two-photon wavepackets.

This is why I am trying to narrow down the empirical data as tightly as I can. The assumptions presented in many of these papers tend to cross over a range of interpretations from different experimental settings.

I like Neumaier's interpretation for a broad range of reasons that are also relevant here. Yet it was constructed in such a way that it merely retains validity alongside the standard model. It raises some questions that I would like to see answered, and that can only be answered empirically. It looks to me like some of the above experiments, or variations of them, can at least potentially do that.
 
  • #8
my_wan said:
But in the destructive interference I am talking about here, there is no interference pattern. Photons are their own antiparticles, hence an identical photon 180 degrees out of phase should leave no photon (or interference pattern) at all if their locations are matched and they interact. Something more akin to an anti-laser, where energy is conserved and the photons drained off as heat.
This reminds me of a question I once had about beam splitters-- can't you set them up so the split beams completely cancel, and then how do the photons know not to come out of the laser? (I don't think there's any heat, destructive interference just means the photons don't come out at all.)

But I believe the answer to this is that no matter what you do, mirrors can't make an interference pattern that is zero everywhere. Mirrors can only make the kind of interference that changes where the photon can go, not whether it can go or not. Absorbers make the kind of interference that can give the photons nowhere to go, but then it's clear where the energy is going.

If you created a laser and an anti-laser, and put them in the same place, that would be the case where the photons just don't come out at all. But that requires you maintain the 180 degree difference-- if the difference wandered, over time it would be just like two independent energy sources, the constructive and destructive interference would cancel out when time averaged, rather than when space averaged.
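
One way to make the "mirrors can only redirect" point quantitative (my addition, using the standard relations for a lossless 50:50 beamsplitter, not specific to any experiment above): with input fields $E_1, E_2$ and output fields

$$E_3 = \tfrac{1}{\sqrt{2}}\,(E_1 + iE_2), \qquad E_4 = \tfrac{1}{\sqrt{2}}\,(iE_1 + E_2),$$

one finds $|E_3|^2 + |E_4|^2 = |E_1|^2 + |E_2|^2$ for any input phases. Arranging complete destructive interference in one output port forces complete constructive interference in the other, so the energy is rerouted rather than destroyed.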

When the substructure is only a mathematical device lacking ontologically real substance, the disappearance of a wave leaves no substance with which to define what happened to the energy associated with it.
That's why I like to think of the wavefunction as "instructions for where the particle goes." When the wavefunction cancels out, the instructions say "don't go." That's what is happening on the other side of the mirror from the side you are looking at it from.

I do not even know how much to trust some of the empirical data in some cases, leaving the interpretations of various authors even more suspect.
That does make it tricky, though generally I see cases where the data is right, but the conclusions being drawn have overlooked the correct answer so are making all kinds of strange claims. That's certainly easy to do.
Yet, when pointed at somebody, the nonlinear interactions produce a sound that can be heard. It sounds as if the air around your head is talking to you. It is also reported in the link quoted above that different frequencies can interfere or modulate each other.
Subtle things can happen, but they generally have a classical explanation that, by the correspondence principle, also shows up quantum mechanically; it can just be hard to find. You mention nonlinear interactions; that sounds like a situation where the medium is doing something that changes the propagation, so the superposition principle breaks down. Quantum mechanically, that sounds like loops in the Feynman diagram that represent interactions with virtual modes. If we overlook the virtual modes that can appear, we might not see the quantum mechanical explanation of the classical behavior. I'm just guessing, not knowing the specifics. Also, wave packets can do surprising things, like solitons in nonlinear media, or all this "slow light" business, or even making features in the wave packet travel faster than c. These things just get really subtle, but they often have a classical analog, so it is not as strange as it first seems.
 
  • #9
I have absolutely no doubt energy is conserved no matter what ontological status some fundamental aspect has. My thinking is not geared around whether it is conserved or not. Rather, since it is conserved, how much trouble can this make for the correspondence principle? It depends non-trivially on what is measurable and testable and on the degree to which the role of indistinguishability can be maintained.

The destructive interference issue is not so much about whether you can cancel an entire waveform or not; even if only some part of it cancels without defining what happened to the energy associated with that part, it is just as big an issue as a complete cancellation. You cannot have the 'total' energy just fluctuating willy-nilly with whatever constructive/destructive interference may be in progress at some point in time. At least not beyond what the Uncertainty Principle allows. Perhaps it is related to the vacuum catastrophe?

Another area where this may play a role is the Casimir effect, which also has a good classical explanation. When you suppress the fluctuations between two plates, what exactly becomes of the energy associated with the fluctuations? Obviously it goes into producing the force pulling the plates together. But is it not a problem for virtual fluctuations, mere mathematical devices, to carry measurable forces?
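
For reference, the textbook result for ideal parallel plates (my addition, standard QED, not from the papers in this thread): the zero-point energy removed by the boundary conditions appears as a finite energy per unit area and an attractive pressure,

$$\frac{E}{A} = -\frac{\pi^2\hbar c}{720\,d^3}, \qquad \frac{F}{A} = -\frac{\partial}{\partial d}\!\left(\frac{E}{A}\right) = -\frac{\pi^2\hbar c}{240\,d^4},$$

so the bookkeeping is the usual one: the work done as the plates move equals the change in this (regularized) vacuum energy.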

Could this also be related to energy pseudo-tensors in GR, where for any given point in space a coordinate choice (differential) can make the energy associated with that point zero? What strikes me is that there is an analogous situation with the kinetic energy associated with a macroscopic inertial mass, which can be made to go away simply by choosing a coordinate system in which the inertial mass is motionless. But note that the inertial masses that were motionless prior to the coordinate change must now have a definable kinetic energy. This entails a nonlocal reassignment of the conserved energy, without anything going FTL.

This leads me to the notion that if a wavefunction (minus the ensemble or wavefunction collapses) has a definite associated energy, and you can choose coordinates that make the energy at some points in the field go away, then this by definition entails that same energy appearing somewhere else in the wavefunction "nonlocally". Yet physically it would be no different from increasing the kinetic energy of the moon by flying toward it: a purely relativistic (kinematic) effect. The present formulation assumes that the energy can go away simply because the waveform was never real to begin with. Perhaps all that really went away was the equivalent of kinetic energy?
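
The kinematic analogy can be stated in one line (standard special relativity, added here for concreteness): the kinetic energy of a mass $m$ seen from a frame moving at speed $v$ relative to it is

$$K = (\gamma - 1)\,mc^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$

which vanishes in the object's rest frame and is nonzero in every other frame. Changing coordinates instantly reassigns this energy to every distant object, yet nothing physical propagates anywhere.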

Ken G said:
That's why I like to think of the wavefunction as "instructions for where the particle goes." When the wavefunction cancels out, the instructions say "don't go." That's what is happening on the other side of the mirror from the side you are looking at it from.
I take the opposite tack, something more akin to Neumaier's thermal interpretation. In saying this there are some caveats Neumaier spoke of concerning ensembles. The wavefunction as defined would not only include the wavefunction associated with a localized wave, but also contain an ensemble of every possible location that localized wave can occupy under the constraints provided, just like a Gibbs ensemble describing a die as many dice with 6 possible states. So the formalism would consist of a real wave which is distributed over some region of space, with no 'particle' at the center, superimposed over all the possible states like the die. Hence the waves really do superimpose and distribute across spaces (states), but in a much more limited fashion than a Gibbs ensemble of such waves appears to describe. Hence the wavefunction collapse only collapses the Gibbs ensemble of the already well-defined wave and not the wave itself, just like the die. Yet the wave description is just as valid.

In this picture there is nothing there to tell it to go here or go there. A localized wave, like a phonon, already has a 'reasonably' well-defined location, and that is all the particle is. Yet it can still constructively and destructively interfere like any wave. And when we choose a coordinate system in which the energy in some part of the waveform disappears, it does not mean the wave must have been an imaginary mathematical device; rather it just means the equivalent of the kinetic energy went to zero. Just like you can make the relativistic kinetic energy of a moving car disappear simply by pacing next to it at the same speed and direction in your car.


Those are my gut feelings about the situation. What can and cannot be demonstrated or logically maintained empirically is another question altogether. Neumaier's interpretation adds a larger domain of consistency than what I could provide on my own, but I still want more direct empirical constraints with a more detailed model at the sub-Planck level. Nature is not obligated to oblige me or anybody else though. I want to know if it can be empirically broken anywhere. This is why I am so interested in the empirical data.
 
  • #10
my_wan said:
The destructive interference issue is not so much about whether you can cancel an entire waveform or not; even if only some part of it cancels without defining what happened to the energy associated with that part, it is just as big an issue as a complete cancellation. You cannot have the 'total' energy just fluctuating willy-nilly with whatever constructive/destructive interference may be in progress at some point in time.
Well, one thing you could do is, similarly to how the "particle" concept is not taken literally, the "energy" concept need not be taken literally either. What ontological status does energy have anyway? Just say energy is a kind of expectation value; that's very much in the spirit of the virtual particle concept and the HUP.

Could this also be related to energy pseudo-tensors in GR, where for any given point in space a coordinate choice (differential) can make the energy associated with that point zero? What strikes me is that there is an analogous situation with the kinetic energy associated with a macroscopic inertial mass, which can be made to go away simply by choosing a coordinate system in which the inertial mass is motionless. But note that the inertial masses that were motionless prior to the coordinate change must now have a definable kinetic energy. This entails a nonlocal reassignment of the conserved energy, without anything going FTL.
Yes, the energy concept is a bit nebulous in relativity, so that's another reason to give it no ontological status of its own.
The present formulation assumes that the energy can go away simply because the waveform was never real to begin with. Perhaps all that really went away was the equivalent of kinetic energy?
Maybe so.
I take the opposite tack, something more akin to Neumaier's thermal interpretation. In saying this there are some caveats Neumaier spoke of concerning ensembles. The wavefunction as defined would not only include the wavefunction associated with a localized wave, but also contain an ensemble of every possible location that localized wave can occupy under the constraints provided.
That is certainly a subtle and nuanced approach. I personally see no problems with it, though it is some tough sledding. It may well have descriptive and interpretive power.

In this picture there is nothing there to tell it to go here or go there. A localized wave, like a phonon, already has a 'reasonably' well-defined location, and that is all the particle is. Yet it can still constructively and destructively interfere like any wave. And when we choose a coordinate system in which the energy in some part of the waveform disappears, it does not mean the wave must have been an imaginary mathematical device; rather it just means the equivalent of the kinetic energy went to zero. Just like you can make the relativistic kinetic energy of a moving car disappear simply by pacing next to it at the same speed and direction in your car.
Yes, I would not worry too much about objections that focus on the ontological status of energy, it really doesn't have one.
I want to know if it can be empirically broken anywhere. This is why I am so interested in the empirical data.
And there is no better science than that!
 
  • #11
I see a problem with removing the ontological status of energy, even if it is just a direct proxy for something else, such as the equivalence between energy and momentum conservation in SR (the energy-momentum 4-vector). How do you have conservation laws for something that lacks an ontological status, or that is a proxy for something that does not actually exist? Wavefunctions are not conserved quantities; they merely describe in some way how conserved quantities evolve.

There has been some issue about the role of conservation laws in wavefunction collapse models:
Philip Pearle, Found. Phys. 30, 1145–1160 (2000)
http://arxiv.org/abs/quant-ph/0004067


If we take the thermal interpretation, where the wave contains ontological expectation values, then the nonexistence of energy where the waves destructively interfere is no different from the nonexistence of kinetic energy of a stationary mass in classical physics. Yet the energy-momentum 4-vector would be fully conserved at all times in the thermal interpretation. It is only when you start removing the ontological status of this or that that conservation laws get messy, even in situations where it appears, through a frame shift, as if a mass (energy-momentum 4-vector) disappeared over here and nonlocally reappeared over there. It would physically be no different from the (nonlocal) shift in the kinetic energy of distant objects when you accelerate your car.
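
To keep that claim concrete (my addition, standard special relativity for free particles): each particle carries a four-momentum

$$p^\mu = (E/c,\ \vec{p}\,), \qquad E^2 - |\vec{p}\,|^2 c^2 = m^2 c^4,$$

and the total $\sum_i p_i^\mu$ is conserved in every inertial frame even though the individual components $E$ and $\vec{p}$ are frame-dependent. What a boost changes is how the conserved total is split between rest energy, kinetic energy, and momentum, not whether it is conserved.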

It appears to me that this is less nuanced, not more, and hinges on standard physics only. I also got to thinking about the indistinguishability issue in quantum measurements. When two classical waves overlap, to what extent can we maintain that the waves are separable? We certainly cannot say which classical molecules belong to which wave. Hence, even in a purely classical regime, the notion of wave interactions without inseparability of some variables is not possible: inseparability defines interactions even in purely classical waves. So it appears that inseparability must in some way always be empirically maintained, even if the conditions under which inseparability can be achieved have to be relaxed. Which is apparently what all these various two-photon experiments are doing, with others describing how the inseparability was involved.

That then fully justifies this paper reference provided by Cthugha (Online PDF link added):
Pittman et al., "Can Two-Photon Interference be Considered the Interference of Two Photons?", Phys. Rev. Lett. 77, 1917–1920 (1996), PDF: http://physics.nist.gov/Divisions/Div844/publications/migdall/psm96_twophoton_interference.pdf


It is only when particles are assigned to distinct points, and the ontological status of the wave (which is the particle in the thermal interpretation) is removed, that the nuances get weird. The one experiment, then, that I am most ambivalent about is the Cardone et al. experiment: http://arxiv.org/abs/physics/0504166 . Yet it does not appear to be intrinsically different from the phase-correlated information from two radio dishes mentioned in the radio astronomy example. The Cardone et al. experiment is something I would like to repeat myself, along with a whole slew of variations. It seems to me this could make the clearest case where the wavefunction itself must transport energy between photons (induce inseparability) even in cases where there was otherwise no direct interaction to induce inseparability.
 
  • #12
It should be noted for clarity that the "self-interaction" constraint in QM does not mean two photons cannot interact. It means that two photons that interact must be correlated in such a way that they are not separate quantum entities (inseparability). Hence the two photons become the same particle in some specific ways.

I added this for clarity because I suspected "self-interaction" may be misinterpreted strictly in terms of single quantum particles taking multiple paths, such as in the double slit experiment.
 

FAQ: Single photon interference time threshold (double slit experiment)

1. What is a single photon interference time threshold?

In the context of this thread, the "time threshold" refers to the claim in Mardari's paper that interference in a double-slit experiment should disappear once the interval between consecutive photon emissions becomes large enough that the individual photon wavepackets no longer overlap. Standard quantum mechanics predicts no such threshold: the interference pattern builds up photon by photon regardless of how widely the emissions are spaced in time.

2. How is the single photon interference time threshold determined?

Standard quantum mechanics does not predict a threshold of this kind, so there is no accepted formula for it. The natural timescale for asking whether consecutive photon wavepackets overlap is the coherence time of the source, roughly τ_c ≈ λ²/(cΔλ), which is set by the wavelength λ and the spectral width Δλ of the light, not by the slit separation.
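
As an illustrative number (assumed values, not taken from the thread): for $\lambda = 650$ nm and a spectral width $\Delta\lambda = 1$ nm,

$$\tau_c \approx \frac{\lambda^2}{c\,\Delta\lambda} = \frac{(650\times10^{-9}\ \text{m})^2}{(3\times10^{8}\ \text{m/s})(10^{-9}\ \text{m})} \approx 1.4\ \text{ps},$$

so "consecutive photons overlap" is a statement about picosecond-scale timing for a source like this, not about the slit geometry.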

3. What happens if the time between photon emissions is less than the interference time threshold?

In the standard picture nothing changes: single-photon interference is observed whether the photons arrive in rapid succession or one at a time with long gaps in between, as long as no which-path information is available. Whether closely spaced (overlapping) photons behave differently from widely spaced ones is precisely the empirical question raised by the papers discussed in this thread.

4. Can the single photon interference time threshold be changed?

Since standard quantum mechanics predicts no threshold at all, the question only arises within models like the one Mardari advocates. In such models the relevant parameter would be the temporal overlap of successive wavepackets, which depends on the source's coherence time and emission rate rather than on the slit geometry, so it could in principle be varied by changing the source.

5. Why is the single photon interference time threshold important in the double slit experiment?

It matters because it separates two pictures of the double-slit experiment. If the interference pattern persists no matter how widely the photons are spaced in time, each photon must in some sense interfere with itself, which is the standard statement of wave-particle duality. If the pattern disappeared beyond some interval, that would instead suggest interference between overlapping photons from the same source, which is the alternative explanation debated in this thread.
