# Single Photon/Indistinguishability principle experiment

• dmoney123

#### dmoney123

If a single-photon light source shoots a photon through the attached path, what percentage of the photons will make it to each detector? The light source is the box, the photon trajectory is yellow, the beamsplitter is blue, the mirror is black, the detectors are purple, and the obstacle is red. I think that because of the indistinguishability principle the photon will go 50% to each detector, because there is a physical disturbance in the trajectory (the obstacle). Any thoughts?

#### Attachments

• experiment.jpg

In the case of perfect beamsplitters: 25% of the photons would go to each detector and 50% would be lost.

Not because of the "indistinguishability principle", but just because there is no interference in such a setup, and each BS just divides the stream of photons into two with equal probabilities.
There is nothing more to this experiment than that the first BS reflects half of the photons toward the obstacle, and the second sends half of the remaining photons to one detector and the rest to the other.
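The 25/25/50 split can be checked with a quick probability-amplitude sketch (hypothetical toy code, using the common convention that a 50/50 beamsplitter transmits with amplitude 1/√2 and reflects with amplitude i/√2):

```python
# Toy amplitude bookkeeping for the blocked interferometer arm.
# Convention (one common choice, not from the thread): a 50/50 beamsplitter
# transmits with amplitude 1/sqrt(2) and reflects with amplitude i/sqrt(2).
t = 1 / 2**0.5    # transmission amplitude
r = 1j / 2**0.5   # reflection amplitude

# Reflected at BS1: the photon hits the obstacle and is absorbed.
p_lost = abs(r)**2          # -> 0.5

# Transmitted at BS1; the single surviving path then splits at BS2.
p_det_1 = abs(t * t)**2     # transmitted at BS2 -> 0.25
p_det_2 = abs(t * r)**2     # reflected at BS2  -> 0.25
```

With only one path to each detector there is nothing to interfere, so the squared amplitudes reproduce the classical 25/25/50 split.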

Thank you! So if the obstacle were taken away, then 100% of the photons would end up being reflected-transmitted/transmitted-reflected and thus all end up hitting the detector on the right. This is due to interference. Right?
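That claim can be sketched in amplitude language (a toy model with the common 1/√2 transmission and i/√2 reflection convention; the mirrors add an equal phase to both paths, so it is omitted): with the obstacle removed, two indistinguishable paths reach each detector and their amplitudes add before squaring.

```python
t = 1 / 2**0.5    # 50/50 beamsplitter transmission amplitude
r = 1j / 2**0.5   # reflection amplitude (conventional factor of i)

# Two indistinguishable paths to each output port; amplitudes add first.
amp_right = t * r + r * t   # transmitted-reflected + reflected-transmitted
amp_left  = t * t + r * r   # transmitted-transmitted + reflected-reflected

p_right = abs(amp_right)**2   # constructive interference -> 1.0
p_left  = abs(amp_left)**2    # destructive interference  -> 0.0
```

All the probability ends up at one output port, which is the standard balanced Mach-Zehnder result.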

Yeah, I was thinking the same thing, 25% in each, but wasn't sure!

Also, would it be the same if you had multiple photons?

Yes, if you analyze what is happening at the level of probability amplitudes which can interfere, then it doesn't matter if those same amplitudes are referring to more photons in the same states, they'd just be larger amplitudes doing all the same things. But photons in different states could mess up the coherences and destroy the interference pattern, which is why this sort of experiment is done with lasers (the photons are then in the same state).

Thank you! So if the obstacle were taken away, then 100% of the photons would end up being reflected-transmitted/transmitted-reflected and thus all end up hitting the detector on the right. This is due to interference. Right?
This is certainly a common interpretation, that interference determines the photon path at the beamsplitter.

But just as certainly you would run into serious problems if you try to interpret this experiment the same way:
http://physics.nist.gov/Divisions/Div844/publications/migdall/psm96_twophoton_interference.pdf

The alternative would be to say that 50% of the photons end up in one detector and 50% end up in the other, and interference just changes how many photons are detected.
In a sense it's just the same particle-wave duality: if you try to find out which path the photon is taking after the beamsplitter, the interference will be lost. ;)
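The remark about losing interference can be made concrete in the same toy amplitude picture (hypothetical sketch, with the common 1/√2 and i/√2 beamsplitter amplitudes): once the path is known, you add probabilities instead of amplitudes.

```python
t = 1 / 2**0.5    # beamsplitter transmission amplitude
r = 1j / 2**0.5   # beamsplitter reflection amplitude

# Paths indistinguishable: add amplitudes, then square -> interference.
p_coherent = abs(t * t + r * r)**2            # dark port: -> 0.0

# Path known: add the probabilities of each path -> interference gone.
p_incoherent = abs(t * t)**2 + abs(r * r)**2  # -> 0.5
```

The same port that is perfectly dark with indistinguishable paths receives half the photons once which-path information exists.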

This is certainly a common interpretation, that interference determines the photon path at the beamsplitter.
Yes, the way I would put it instead is, if the experiment does not determine the "photon path at the beamsplitter", then there simply is no such thing.

Yes, the way I would put it instead is, if the experiment does not determine the "photon path at the beamsplitter", then there simply is no such thing.
How would you analyze experiments without such a concept as "photon path"?
For example, if you block one arm of the interferometer, as in the OP's picture, you get a different result than in the non-blocked case. In the non-blocked case you assign two different descriptions to the two paths and then determine a relative parameter (the phase difference) between the two descriptions. I can't see how you can do that without resorting to "path".

Hmm, or maybe you would abandon "photon"?
Then let's take a look at this paper that covers other interesting experiments:
http://physics.nist.gov/Divisions/Div844/publications/migdall/apopts41.pdf
I believe you can think of some other mechanism to describe the correlations in detections between two downconverted light beams. But if you agree that it is meaningful to extrapolate the results to perfect-efficiency detectors, then your mechanism will be empirically equivalent to the particle ("photon") description.

How would you analyze experiments without such a concept as "photon path"?
For example, if you block one arm of the interferometer, as in the OP's picture, you get a different result than in the non-blocked case.
If you block one arm, then the experiment establishes the photon path: the other arm.
In the non-blocked case you assign two different descriptions to the two paths and then determine a relative parameter (the phase difference) between the two descriptions. I can't see how you can do that without resorting to "path".
If you have two paths, then the experiment does not establish which path, and so the (unique) path taken simply does not exist.
I believe you can think of some other mechanism to describe the correlations in detections between two downconverted light beams. But if you agree that it is meaningful to extrapolate the results to perfect-efficiency detectors, then your mechanism will be empirically equivalent to the particle ("photon") description.
I have no problem with the particle description, the particle is established in the experiment. Quantum mechanics always seems to support pretty strongly that what we can talk about meaningfully in any experiment is precisely what is established by that experiment, and nothing else.

Yes, the way I would put it instead is, if the experiment does not determine the "photon path at the beamsplitter", then there simply is no such thing.
I reread your reply and just noticed that you were speaking about the experiment and what it "tells" us, while I was speaking about interference and how it affects photon behavior at the beamsplitter.

If you have two paths, then the experiment does not establish which path, and so the (unique) path taken simply does not exist.
If a person tosses two coins and tells me that both were showing the same side, but doesn't tell me which side it was, should I conclude that those coins didn't show a (unique) side at all?

How would you analyze experiments without such a concept as "photon path"? [...] Hmm, or maybe you would abandon "photon"?
I like A. Zeilinger's non-realism: never talk about photons except at the very acts of emission and detection. All that happens between those is wave propagation. Thus "photon path" should never be used. If you describe the experiment in terms of "wave paths", you'll never fall into paradoxes.

If a person tosses two coins and tells me that both were showing the same side, but doesn't tell me which side it was, should I conclude that those coins didn't show a (unique) side at all?
The difference between coins and photons is that coins do not interfere or exhibit other quantum behaviour, so a realistic approach to them is pretty well justified.

I like A. Zeilinger's non-realism: never talk about photons except at the very acts of emission and detection. All that happens between those is wave propagation. Thus "photon path" should never be used. If you describe the experiment in terms of "wave paths", you'll never fall into paradoxes.
Yes, you can do that, at least as long as you do not insist that you can detect all emitted photons. If you do insist on that, then you either have to have a wave model that is empirically indistinguishable from the particle model, or your detected photons will not correspond to the emitted photons (you should have imperfect coincidence for downconverted photons).

The difference between coins and photons is that coins do not interfere or exhibit other quantum behaviour, so a realistic approach to them is pretty well justified.
Ah, but in my analogy coins do not have to show interference. It is observer plus coins system that shows interference.

Yes, you can do that, at least as long as you do not insist that you can detect all emitted photons. If you do insist on that, then you either have to have a wave model that is empirically indistinguishable from the particle model, or your detected photons will not correspond to the emitted photons (you should have imperfect coincidence for downconverted photons).

You are examining only the limited case of a rather incoherent light field, where the photon number per coherence volume does not get much larger than unity, as is the case for thermal light or PDC light. In this case the wave model indeed is almost indistinguishable from a particle model, because the coherence volumes of interest are so small that you can assume the corresponding quanta of the light field to be rather localized. This does not work for laser beams or similar, where the photon number per coherence volume is usually much larger than one. Here the fact that all photons inside the coherence volume are indistinguishable renders the concept of an identifiable and unique photon path pretty pointless.

You are examining only the limited case of a rather incoherent light field, where the photon number per coherence volume does not get much larger than unity, as is the case for thermal light or PDC light. In this case the wave model indeed is almost indistinguishable from a particle model, because the coherence volumes of interest are so small that you can assume the corresponding quanta of the light field to be rather localized. This does not work for laser beams or similar, where the photon number per coherence volume is usually much larger than one. Here the fact that all photons inside the coherence volume are indistinguishable renders the concept of an identifiable and unique photon path pretty pointless.
Not sure I understand this.
Are you saying that a very low intensity light beam cannot be coherent?
Or rather that a light beam can be coherent only if there is more than one photon within the coherence length?

Oh, no. That was not what I intended to say. Maybe that post was a bit short.

I was just intending to contrast the different types of light fields in their most common form:

a) rather incoherent light sources like thermal light or PDC light:

Here the coherence volume is usually rather small, and typically the photon numbers per coherence volume and single mode are rather small. The coherence volume basically means two things. First, the probability of a detection event for an ideally efficient detector is basically 1 inside the coherence volume and 0 outside. It is therefore something like an upper bound on the spatial extent of the region where a photon can be detected, and could cautiously be referred to as an upper bound on the photon volume, though strictly it should not be referred to as such. Second, all photons inside the coherence volume are indistinguishable. One could therefore think of the concept of a photon path as rather intuitive in the case of having no more than one photon per coherence volume and a rather small coherence volume: the possible volume where a detection can happen is rather localized. Of course this leads to problems when using beam splitters, but there are plenty of interpretations out there covering this issue, I suppose.

In this regime wave-like and particle-like theories are pretty similar.

b) laser light, having a coherence volume as large as meters or kilometers and lots of photons per coherence volume, is extremely difficult to describe if you want to keep a naive, intuitive photon-path concept. You have millions of indistinguishable detection events which indeed cannot be tracked back to some single well-defined emission event; you have a total photon number which is not well defined; and, for example, if you decrease the mean photon number so that you get one photon per coherence volume, you will notice that in interference experiments this one photon cannot be localized better than some dozens of meters, depending on the type of laser, of course. It is pretty complicated to find an interpretation which keeps the concept of a realistic photon path and is consistent with experiments in this regime. Pretty much all of the ones I know end up having a photon path which is there, but not well defined or impossible to determine in principle, which is, in my opinion, not of much help.

If you instead examine thermal light, which has a large photon number per coherence volume, you may find traces of two-photon interference if you perform the right measurements. These are also rather difficult to describe using just single photons, as these photons are not statistically independent of each other.

(Zeilinger's non-realism) Yes, you can do that, at least as long as you do not insist that you can detect all emitted photons. If you do insist on that, then you either have to have a wave model that is empirically indistinguishable from the particle model, or your detected photons will not correspond to the emitted photons (you should have imperfect coincidence for downconverted photons).
If you speak about "entangled" experiments, I can't use Huygens' wave optics; I must consider the joint wavefunction of both particles. That's a different case from the one considered in this thread.

As long as you consider one-photon experiments (like the OP's, in both variations: Mach-Zehnder and blocked path), you may describe them perfectly with simple wave optics, just interpreting the square of the amplitude as a probability density for a photon hit. It works perfectly also for setups where all photons are detected (e.g. an idealised Mach-Zehnder).

Of course, you must always be aware of the imperfection of detectors, opacity, some absorption at mirrors, etc., so I never insist on registering all photons; quite the contrary, I often insist on considering experimental limitations.
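As an illustration of that wave-optics recipe, here is a sketch of an ideal Mach-Zehnder with a phase difference phi between the arms, interpreting |amplitude|² at each output port as the detection probability (the 1/√2 and i/√2 beamsplitter amplitudes are one common convention, assumed here rather than taken from the thread):

```python
import numpy as np

t, r = 1 / np.sqrt(2), 1j / np.sqrt(2)  # 50/50 beamsplitter amplitudes

phi = np.linspace(0.0, 2 * np.pi, 101)  # path-length phase difference

# One arm picks up the extra phase exp(i*phi) before the second beamsplitter.
amp_port1 = t * t * np.exp(1j * phi) + r * r   # "dark" port at phi = 0
amp_port2 = t * r * np.exp(1j * phi) + r * t   # "bright" port at phi = 0

p1 = np.abs(amp_port1)**2   # sin^2(phi/2)
p2 = np.abs(amp_port2)**2   # cos^2(phi/2)

# The probabilities sum to 1 for every phi: in the idealised lossless
# setup, every photon is detected at one port or the other.
```

The complementary sin²/cos² fringes are exactly the "square of the amplitude as probability density" rule at work.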

If a person tosses two coins and tells me that both were showing the same side, but doesn't tell me which side it was, should I conclude that those coins didn't show a (unique) side at all?
The way I would put this is, any experiment involving coins does establish which side shows (even if the information is not made available to us). There is no need to look carefully at the apparatus to tell if "which side" language is appropriate or not, because there are so many interactions that the outcomes are decohered and our description of the reality will be that an outcome has occurred. But that isn't the case with individual particles, there our description of the reality will be that no such outcome has occurred, if the coherences are preserved. If it sounds odd that we are making choices about how we will describe reality, rather than something that is the "real" reality, I think the former is appropriate and the latter is a fantasy. Reality has always been a kind of choice we make.

The way I would put this is, any experiment involving coins does establish which side shows (even if the information is not made available to us). There is no need to look carefully at the apparatus to tell if "which side" language is appropriate or not, because there are so many interactions that the outcomes are decohered and our description of the reality will be that an outcome has occurred. But that isn't the case with individual particles, there our description of the reality will be that no such outcome has occurred, if the coherences are preserved.
Basically you are saying that if a person tells you that two coins show the same side, you are inclined to trust him because you have personal experience with coins. But if a person tells you that two properties of "something" are the same, you will not trust him if you don't have personal experience with that "something".

Hmm, that might be a problem, because if we approach this question from an empirical stance, our ability to talk about relative properties in a consistent way implies that they fairly represent some "absolute" properties.

If it sounds odd that we are making choices about how we will describe reality, rather than something that is the "real" reality, I think the former is appropriate and the latter is a fantasy. Reality has always been a kind of choice we make.
Our choice of which descriptions of reality to pick is ruled by the principle that they should fit together in a consistent way. If that happens, we say that they fairly represent reality. That is empiricism.

In other words, "the map is not the territory" doesn't mean that the territory isn't real. On the contrary: exactly because the territory is real, we can meaningfully speak about good maps and bad maps.

Oh, no. That was not what I intended to say. Maybe that post was a bit short.

I was just intending to contrast the different types of light fields in their most common form:

a) rather incoherent light sources like thermal light or PDC light:
...
b) laser light, having a coherence volume as large as meters or kilometers and lots of photons per coherence volume ...
If we can describe wave-like behavior using a particle-like model in case a), why would case b) make any difference?

I suppose you have the impression that a particle-like model can't describe wave-like behavior in the case of inefficient detection (when the ensemble of photons is reduced on the way from emission to detection). Or don't you?

If you speak about "entangled" experiments, I can't use Huygens' wave optics; I must consider the joint wavefunction of both particles. That's a different case from the one considered in this thread.

As long as you consider one-photon experiments (like the OP's, in both variations: Mach-Zehnder and blocked path), you may describe them perfectly with simple wave optics, just interpreting the square of the amplitude as a probability density for a photon hit. It works perfectly also for setups where all photons are detected (e.g. an idealised Mach-Zehnder).
This is an untested prediction of CI. And you don't need an ideal setup to test this prediction.

What I am saying is that if we assume this prediction is true, then there are other experimentally testable consequences that are not addressed in CI.

Of course, you must always be aware of the imperfection of detectors, opacity, some absorption at mirrors, etc., so I never insist on registering all photons; quite the contrary, I often insist on considering experimental limitations.
And yet in the experiments covered by the paper I mentioned in post #10, these issues were successfully addressed. So this is not a fundamental limitation.

If we can describe wave-like behavior using particle-like model in case a) why would case b) make any difference?

Well, there is not too much wave-like behavior in a) (OK, you can get interference using the right experiment, but mostly the behavior is particle-like). My point was rather that it is easier to get particle-like behavior as a limit of a wave-like theory than the other way round.

I suppose you have the impression that a particle-like model can't describe wave-like behavior in the case of inefficient detection (when the ensemble of photons is reduced on the way from emission to detection). Or don't you?

I am not saying that this is impossible, but dealing with uncertainty in the photon numbers, single-photon interference in an interferometer where the path distance is on the order of hundreds of meters, and multi-photon interference in a pure particle picture is not trivial. Most of the theories that allow this end up being "conspiracy" theories which are more or less superdeterministic. While this is certainly OK from the philosophical point of view, there is not much physical insight gained from such theories and not much inspiration for new experiments.

Well, there is not too much wave-like behavior in a) (OK, you can get interference using the right experiment, but mostly the behavior is particle-like). My point was rather that it is easier to get particle-like behavior as a limit of a wave-like theory than the other way round.

I am not saying that this is impossible, but dealing with uncertainty in the photon numbers, single-photon interference in an interferometer where the path distance is on the order of hundreds of meters, and multi-photon interference in a pure particle picture is not trivial. Most of the theories that allow this end up being "conspiracy" theories which are more or less superdeterministic. While this is certainly OK from the philosophical point of view, there is not much physical insight gained from such theories and not much inspiration for new experiments.
Hmm, you are talking about inspiration for new experiments. Let me try.

I will describe two variants of an experiment for case b) that can test the interference-as-unfair-sampling hypothesis.

Common to both variants is a Mach-Zehnder interferometer. After the second beamsplitter the beam of light goes directly to a detector without any intermediate equipment (to prevent reduction of the photon ensemble). The size of the detector should be big enough that practically all the light hits it (again to prevent reduction of the photon ensemble).

First variant: at the two outputs of the second beamsplitter there are two (ordinary, not avalanche) photon detectors with different quantum efficiency, one around 10% QE and the other over 50% QE. In the case of fair sampling, the visibility of the interference should be the same for both detectors. But in the case of unfair sampling (interference does not affect the photon path at the beamsplitter but biases detection instead), the interference visibility would be reduced for the high-QE detector.
I have to say that I don't know much about ordinary photodiodes, so I don't know whether their QE is well known (how much of the photon energy generates photocurrent and how much is absorbed as heat).

Second variant: at one output of the beamsplitter we place a detector that measures two things at the same time: photocurrent and the heating of the detector. In the case of fair sampling, both measurements should vary in the same way as we change the optical path difference. But in the case of unfair sampling, the variations in the photocurrent and thermal measurements should vary in opposite phase, i.e. a constructive-interference peak in the photocurrent is observed together with a destructive-interference trough in the thermal measurement.
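The fair-sampling baseline for the first variant can be sketched numerically (a toy model under assumed ideal fringes, not a simulation of the unfair-sampling alternative): if each detector simply registers a fixed fraction QE of the photons arriving at its port, the QE scales the count rate but leaves the fringe visibility untouched.

```python
import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 201)   # optical path difference (as phase)
p_port = 0.5 * (1.0 + np.cos(phi))       # ideal output-port probability

visibilities = []
for qe in (0.10, 0.50):                  # the two detectors' assumed QEs
    rate = qe * p_port                   # fair sampling: QE is phase-independent
    vis = (rate.max() - rate.min()) / (rate.max() + rate.min())
    visibilities.append(vis)

# Under fair sampling both visibilities are 1; the unfair-sampling
# hypothesis above predicts a reduced visibility for the high-QE detector.
```

Any statistically significant QE-dependence of the measured visibility would thus discriminate between the two hypotheses.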

First variant: at the two outputs of the second beamsplitter there are two (ordinary, not avalanche) photon detectors with different quantum efficiency, one around 10% QE and the other over 50% QE. In the case of fair sampling, the visibility of the interference should be the same for both detectors. But in the case of unfair sampling (interference does not affect the photon path at the beamsplitter but biases detection instead), the interference visibility would be reduced for the high-QE detector.
I have to say that I don't know much about ordinary photodiodes, so I don't know whether their QE is well known (how much of the photon energy generates photocurrent and how much is absorbed as heat).

Hmm, I see your point. All I can say is that the QE of common non-avalanche photodiodes can be pretty high, routinely up to and above 80% for commercially available standard diodes at the right wavelength. However, their sensitivity is pretty bad, but that is not much of a problem if you work at high enough intensities.