Is the Hong-Ou-Mandel effect a violation of the fair sampling assumption?

  • Thread starter: zonde
  • Tags: sampling
In summary, the Hong–Ou–Mandel effect is a manifestation of "more is different": two identical photons entering a 50:50 beam splitter will always exit in the same, but random, output mode. It is due to the indistinguishability of two-photon amplitudes, not to a bunching effect of individual photon wavepackets. The distinction between photon bunching and the HOM effect has evolved over time, and it does not necessarily affect the concept of fair sampling. The HOM effect is also not commonly observed in experiments, as it requires indistinguishable two-photon probability amplitudes.
  • #1
zonde
Gold Member
I am considering the Hong–Ou–Mandel effect: http://en.wikipedia.org/wiki/Hong%E2%80%93Ou%E2%80%93Mandel_effect

Wikipedia explanation says that "Therefore, when two identical photons enter a 50:50 beam splitter, they will always exit the beam splitter in the same (but random) output mode."
And yet before this it says: "The Hong–Ou–Mandel effect is indeed due to indistinguishability of two-photon amplitudes but not due to the photon bunching effect of individual photon wavepackets."
It seems a bit awkward, i.e. the two photons always exit in the same output mode and yet they are said not to be bunching.
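The "same output mode" statement can be checked with a short two-photon amplitude calculation. A sketch using one common 50:50 beam-splitter phase convention (my own illustration, not taken from the wikipedia article):

```python
import numpy as np
from math import factorial

# One common 50:50 beam-splitter convention: input modes (a, b) map to
# output modes (c, d) with a 90-degree phase picked up on reflection.
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

# Input state a†b†|0>: one photon in each input port. Expand
# (B[0,0] c† + B[1,0] d†) (B[0,1] c† + B[1,1] d†) |0>.
amp = {(2, 0): 0, (1, 1): 0, (0, 2): 0}
for i in (0, 1):        # output mode taken by the photon from input a
    for j in (0, 1):    # output mode taken by the photon from input b
        n_c = (i == 0) + (j == 0)
        n_d = (i == 1) + (j == 1)
        amp[(n_c, n_d)] += B[i, 0] * B[j, 1]

# Probabilities with bosonic normalization: (c†)^n |0> = sqrt(n!) |n>.
prob = {k: abs(v) ** 2 * factorial(k[0]) * factorial(k[1])
        for k, v in amp.items()}
# The two coincidence amplitudes cancel exactly: prob[(1, 1)] = 0,
# while prob[(2, 0)] = prob[(0, 2)] = 1/2 (both photons exit together).
```

So "always the same output mode" falls out of the amplitude cancellation, without any picture of the photons attracting each other.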

If we look at this experiment:
http://arxiv.org/abs/0809.3991
it becomes a bit clearer. The paper says:
"A two-fold coincidence detection event between either [tex]D_{Q1H}[/tex] and [tex]D_{Q2V}[/tex] or [tex]D_{Q1V}[/tex] and [tex]D_{Q2H}[/tex] indicates a projection on [tex]\psi^-[/tex]. On the other hand, a coincidence detection event between either [tex]D_{Q1H}[/tex] and [tex]D_{Q1V}[/tex] or [tex]D_{Q2H}[/tex] and [tex]D_{Q2V}[/tex] indicates a projection on [tex]\psi^+[/tex]."
Here [tex]D_{Q1H}[/tex] and [tex]D_{Q1V}[/tex] are the detectors after the PBS behind one output port of the beam splitter, while [tex]D_{Q2H}[/tex] and [tex]D_{Q2V}[/tex] are behind the other port.
So we get coincidences equally often behind the same port of the beam splitter and behind different ports.
So we can say that the photons really do appear in different output ports of the beam splitter, and yet if we measure by placing detectors directly after the beam splitter outputs (without the PBSes) we detect no coincidences, as demonstrated in Fig. 2 of the same paper.

Does it seem right so far?

And now for the question about the fair sampling assumption. It seems that these properties of the HOM dip utterly contradict the fair sampling assumption as applied to photon detection.
Or is there an alternative viewpoint that justifies the fair sampling assumption?
 
  • #2
zonde said:
I am considering the Hong–Ou–Mandel effect: http://en.wikipedia.org/wiki/Hong%E2%80%93Ou%E2%80%93Mandel_effect

Wikipedia explanation says that "Therefore, when two identical photons enter a 50:50 beam splitter, they will always exit the beam splitter in the same (but random) output mode."
And yet before this it says: "The Hong–Ou–Mandel effect is indeed due to indistinguishability of two-photon amplitudes but not due to the photon bunching effect of individual photon wavepackets."
It seems a bit awkward, i.e. the two photons always exit in the same output mode and yet they are said not to be bunching.

This distinction between photon bunching and the HOM effect results from the "historical" development of the meaning of photon bunching. The first photon bunching experiments, including the Hanbury Brown–Twiss experiment, evidenced this effect as a tendency for photons to arrive in pairs, seen as coincidence count rates larger than the expectation values for statistically independent photons. While this can in principle be explained classically, a quantum explanation can be found which describes this behavior in terms of interfering probability amplitudes for indistinguishable two- (or more) photon processes; see for example: Fano, U. (1961). "Quantum theory of interference effects in the mixing of light from phase independent sources". American Journal of Physics 29: 539.
At that time some people considered the explanations in terms of "two photons interfering" and "two probability amplitudes interfering" as identical.

Now the paper cited in the wikipedia article and the one cited at the bottom of it (T. B. Pittman et al., "Can Two-Photon Interference be Considered the Interference of Two Photons?", PRL 77, 1917 (1996)) show that there is indeed a difference: for this interference to occur, it is sufficient that the two whole processes of photon emission, some process at a beam splitter, and photon detection are indistinguishable. It is not necessary that the photons arrive at the beam splitter simultaneously for the interference to happen. In that sense, the HOM effect occurs without bunching, as the photons do not arrive simultaneously at the beam splitter. However, since then the meaning of photon bunching has somewhat changed, and many people use it in the sense of having an increased (or decreased, depending on your setup) coincidence count rate of some two-photon detection event, whether simultaneous or involving some time delay, compared to the case of statistical independence. At least that is how I think about the term when writing publications, and most of the people I have talked with share a similar view.
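For reference, in the original HOM configuration the coincidence rate versus the relative delay between the two wavepackets takes the well-known dip shape. A sketch assuming Gaussian wavepackets, a lossless 50:50 splitter and ideal detectors (this Gaussian form is the standard textbook model, not something taken from the papers above):

```python
import numpy as np

def hom_coincidence(tau, sigma):
    """Coincidence probability vs relative delay tau between the two
    wavepackets, for Gaussian wavepackets of bandwidth sigma.
    Assumes a lossless 50:50 splitter and unit-efficiency detectors."""
    return 0.5 * (1.0 - np.exp(-(sigma * tau) ** 2))

delays = np.linspace(-5, 5, 11)            # delay in units of 1/sigma
dip = hom_coincidence(delays, sigma=1.0)
# dip is 0 at tau = 0 (full HOM suppression) and approaches the
# classical value 1/2 once |tau| exceeds the coherence time ~ 1/sigma.
```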

However, I do not see how this would affect fair sampling. The HOM effect is just another manifestation of "more is different", showing that a state of N indistinguishable particles is usually not just the product of N individual single-particle states. This is the same physics that determines the Pauli exclusion principle, just with the opposite sign for bosons. Accordingly, you would not even expect these photons to be detected statistically independently of each other.

Also, one would not expect to see this effect in many other experiments. It is rather difficult to get two-photon-probability amplitudes to be indistinguishable from an experimental point of view. For pretty much any experimental situation and photon detection which is not explicitly designed to see this effect, it will not occur, especially as this effect gets weaker and does not occur for large numbers of indistinguishable photons emitted from a laser.
 
  • #3
Cthugha said:
However, I do not see how this would affect fair sampling. The HOM effect is just another manifestation of "more is different", showing that a state of N indistinguishable particles is usually not just the product of N individual single-particle states. This is the same physics that determines the Pauli exclusion principle, just with the opposite sign for bosons. Accordingly, you would not even expect these photons to be detected statistically independently of each other.
Thanks for your comment Cthugha.

Now about fair sampling. It's not just the HOM effect but two things together.
The first thing is the HOM effect: when we attach detectors right after the beam splitter, there are no coincidences.
The second is that coincidences show up when we attach polarization beam splitters to the outputs of the beam splitter. And I want to emphasize that coincidences appear not only in the outputs of a single polarization beam splitter (corresponding to a single channel of the beam splitter) but in the outputs of two different polarization beam splitters (corresponding to different channels of the beam splitter). That is demonstrated in this entanglement swapping experiment.

Maybe it can be understood more easily if we consider polarizers instead of polarization beam splitters. Coincidences appear when we insert two polarizers with opposite settings (meaning H/V) before the detectors. So it appears that polarizers change the "detectability" of photons, and that is what contradicts the fair sampling assumption (even if we consider weaker alternative formulations).
 
  • #4
zonde said:
Maybe it can be understood more easily if we consider polarizers instead of polarization beam splitters. Coincidences appear when we insert two polarizers with opposite settings (meaning H/V) before the detectors. So it appears that polarizers change the "detectability" of photons, and that is what contradicts the fair sampling assumption (even if we consider weaker alternative formulations).

I must admit I do not get your point. In which manner does introducing polarizing beam splitters change the "detectability" of photons? The total detected number is the same with or without the beam splitters inserted. They just determine whether interference phenomena take place or not. It is not really different from inserting polarizers with opposite settings at the two slits of a double slit: the number of detected photons stays the same, but the position of their detection gets redistributed. Most people would not consider this to contradict fair sampling.
 
  • #5
Cthugha said:
I must admit I do not get your point. In which manner does introducing polarizing beam splitters change the "detectability" of photons? The total detected number is the same with or without the beam splitters inserted. They just determine whether interference phenomena take place or not. It is not really different from inserting polarizers with opposite settings at the two slits of a double slit: the number of detected photons stays the same, but the position of their detection gets redistributed. Most people would not consider this to contradict fair sampling.
We place detectors directly behind the two outputs of the beam splitter. We observe no coincident detections in the two detectors within the coincidence time window (a few ns).
We make the fair sampling assumption (the sample of detected photons fairly represents the sample of all photons arriving at the detector) and conclude that if two photons arrive at the beam splitter within a short time interval, then the second photon always takes the same path as the first photon.

Now we insert polarizers between the beam splitter outputs and the detectors. And now we observe coincident detections between the two detectors.
So we can say that our conclusion from the first case
"if two photons arrive at beam splitter within short time interval then second photon always takes the same path as first photon"
is wrong.

The two setups are identical up to the place where the polarizers are.
So we have to say that either:
1) our conclusion is wrong in the first case as well (the fair sampling assumption does not hold)
2) or some photons have jumped from one path to the other path when they arrive at the polarizer
3) or some photons have been delayed in the polarizer so that they appear in the coincidence window with some other photon from the other path
4) or photons arriving at the beam splitter are "informed" about the polarizer further down the path and behave differently at the beam splitter in the two cases.

Do you see some other option?
 
  • #6
zonde said:
We place detectors directly behind the two outputs of the beam splitter. We observe no coincident detections in the two detectors within the coincidence time window (a few ns).
We make the fair sampling assumption (the sample of detected photons fairly represents the sample of all photons arriving at the detector) and conclude that if two photons arrive at the beam splitter within a short time interval, then the second photon always takes the same path as the first photon.
[...]
Two setups are identical up to the place where there are polarizers.
So we have to say that either:
1) our conclusion is wrong in the first case as well (fair sampling assumption does not hold)

This conclusion is wrong, but the experimental data you get in your scenario does not allow one to draw that conclusion. If you repeat the same experiment using two light sources which are not synchronized, or of different wavelength or polarization or whatever, you will also see coincidences. The conclusion you can draw is that if two IDENTICAL photons arrive at the beam splitter within a short time interval, then the second photon always takes the same path as the first photon. This conclusion is not invalidated by your second scenario involving polarizers.
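The "IDENTICAL" condition can be put into one formula. A sketch, parametrizing distinguishability by a relative polarization angle (my own parametrization; the P_c expression is the standard two-photon beam-splitter result):

```python
import numpy as np

def coincidence_prob(theta):
    """Coincidence probability behind a 50:50 beam splitter for two
    photons whose polarizations differ by angle theta (radians).
    Indistinguishability |<psi1|psi2>|^2 = cos(theta)^2, and the
    standard two-photon result is P_c = (1 - |<psi1|psi2>|^2) / 2."""
    return 0.5 * (1.0 - np.cos(theta) ** 2)

# theta = 0     : identical photons, P_c = 0 (full HOM suppression)
# theta = pi/2  : orthogonal, fully distinguishable, P_c = 1/2
```

Anything that spoils the overlap (polarization, wavelength, timing) moves the coincidence probability from 0 back toward the classical value 1/2.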
 
  • #7
Cthugha said:
This conclusion is wrong, but the experimental data you get in your scenario does not allow one to draw that conclusion. If you repeat the same experiment using two light sources which are not synchronized, or of different wavelength or polarization or whatever, you will also see coincidences. The conclusion you can draw is that if two IDENTICAL photons arrive at the beam splitter within a short time interval, then the second photon always takes the same path as the first photon.
Of course you are correct in pointing out that there are special conditions for observing the HOM effect. I just implied that these conditions are met, but it would be more correct to state this explicitly in the description of the experiment.

Cthugha said:
This conclusion is not invalidated by your second scenario involving polarizers.
Can we go into more detail about this statement?
Do you agree that the explanation is limited to the cases I stated? Or do you see another option?
If you agree that the explanation is limited to my list, which option do you consider physically most reasonable?
 
  • #8
Maybe it is easier to analyze this situation if we approach it from a different side.
We can ask whether interference occurs at the beam splitter or at the detector.

And I can propose these considerations:
If interference occurs at the beam splitter, then nothing happening after the beam splitter should affect the result of this interference.
However, if interference occurs at the detector, then the result of the interference is still undetermined after the beam splitter, and the presence of a polarizer can still change its result.

On the other hand, if interference occurs at the beam splitter, we can consider only the two photons arriving at the beam splitter in a short time interval.
But if interference occurs at the detector, we should also consider other, earlier photons, since the two photons under consideration can appear on different paths after the beam splitter.
 
  • #9
Oh, this point is discussed in the two papers you and I mentioned, which are linked in the wikipedia article. Interference does not happen at the beam splitter. If the two possible paths from the emitters via the beam splitter to the detectors are indistinguishable, interference will occur even if the photons do not arrive simultaneously at the beam splitter. Note that one should take the notion of a "photon arriving somewhere" with a grain of salt. It can be very misleading to think of photons as similar to classical point particles as soon as coherence matters are involved.
 
  • #10
Now fair sampling says that if we replaced our real detectors with hypothetical, ideal 100% efficient detectors, we would observe exactly the same visibility of the HOM effect.

So do you agree with that assumption without doubt or would you like to see some experimental results where HOM effect is measured with different detection efficiencies before agreeing with this assumption?
 
  • #11
Well, in this case (assuming background noise is not an issue) I do not see any reason to doubt it. The absence of coincidences is usually rather robust when comparing detectors of different efficiency. An enhancement of coincidences is more complicated, as avalanche photodiodes have a certain time resolution and a certain dead time, which makes it impossible to distinguish between the detection of one photon and of several photons arriving at one detector, so the quantum efficiency of the detectors can have large effects under some conditions. Of course you could just use more beam splitters and more detectors as a workaround, but that gets pretty cumbersome for more than 3-4 detectors.
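The dead-time limitation can be illustrated with the standard click-probability formula for a non-photon-number-resolving detector and Poissonian light (the efficiency and flux values below are illustrative):

```python
import numpy as np

eta = 0.65                                # detector quantum efficiency
mu = np.array([0.01, 0.1, 1.0, 5.0])      # mean photons per window

# A SPAD reports at most one click per dead-time window: it clicks
# whenever it detects >= 1 photon. For Poissonian light the click
# probability per window is therefore
p_click = 1.0 - np.exp(-eta * mu)
# At low flux p_click ~ eta * mu, so counting clicks is faithful; at
# high flux p_click saturates toward 1 and photon-number information
# (variance, higher moments) is lost.
```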

Of course it never hurts to perform that experiment, but I would not expect a different outcome with more efficient detectors.
 
  • #12
Cthugha said:
The absence of coincidences is usually rather robust when comparing detectors of different efficiency.
I find this statement very interesting, because my point is exactly the opposite.
I have tried to find something in experiments that would help to estimate the likelihood of variations in correlation visibilities. I have found nothing very solid so far, but there are things that suggest it might be so (that the absence of coincidences is not so robust at high efficiencies).

So can you give more details about what experience or findings are behind your statement?


And there are some considerations about what it means to consider a detector as 100% efficient. I will try to outline them in my next post.
 
  • #13
So what does it mean to consider 100% efficient detectors?
Modern detector efficiency calibration procedures use entangled photons. They use an unbalanced PDC source, meaning that in one channel photons are collected from wider angles, while in the other channel photons are collected over angles such that they definitely fall within the corresponding area of the first channel. The efficiency is then found as the ratio of the coincidence count rate to the singles count rate in that other channel (covering the smaller collection area).
So if we have a 100% efficient detector, the coincidence count rate should be rather close to the singles count rate. For this to happen, a 100% efficient detector should detect really all photons, regardless of whether they undergo constructive or destructive interference.
Or, if we do not allow detection of photons undergoing destructive interference, then we should conclude that 50% efficiency is the theoretical maximum for detectors.
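The calibration scheme described above (essentially the Klyshko method) can be sketched numerically; the detector efficiencies and pair rate below are hypothetical:

```python
from math import exp, factorial

eta1, eta2 = 0.80, 0.65   # hypothetical true detector efficiencies
mu = 0.001                # mean pair number per coincidence window

def poisson(n, mu):
    return exp(-mu) * mu ** n / factorial(n)

# Click probabilities for non-number-resolving detectors, with the
# pair number n per window Poisson-distributed. Each pair sends one
# photon to each detector.
P1 = sum(poisson(n, mu) * (1 - (1 - eta1) ** n) for n in range(25))
Pc = sum(poisson(n, mu) * (1 - (1 - eta1) ** n) * (1 - (1 - eta2) ** n)
         for n in range(25))

eta2_est = Pc / P1   # Klyshko estimate: coincidences / singles at D1
# eta2_est equals eta2 up to O(mu) multi-pair corrections.
```

In this model the coincidences-to-singles ratio returns the other detector's efficiency directly at low pair rate; no 50% ceiling appears anywhere.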

P.S. I have seen a claim from some detector manufacturer that they have detectors with efficiency over 50%.
 
  • #14
zonde said:
I find this statement very interesting, because my point is exactly the opposite.
I have tried to find something in experiments that would help to estimate the likelihood of variations in correlation visibilities. I have found nothing very solid so far, but there are things that suggest it might be so (that the absence of coincidences is not so robust at high efficiencies).

So can you give more details about what experience or findings are behind your statement?

If one wants detectors with rather high quantum efficiency, one will usually use single-photon avalanche photodiodes (SPADs). Their main problem is that they distort the measured photon number distribution, because after every detection there is a dead time. So SPADs can only distinguish between 0 photons detected and one or more photons detected inside a timespan of interest. Usually that "more" is interpreted as one photon, and it is ensured that the mean photon count rate per unit timespan is low so that multi-photon events are rare, but for light showing large photon number noise, resulting in a large degree of correlation, the results can be distorted drastically. You can use more detectors and measure the correlations between their detections to gain some more insight, but that does not help much for thermal, superradiant or squeezed states with larger photon number noise and short coherence times. See for example G. Li et al., "Photon statistics of light fields based on single-photon-counting modules", Phys. Rev. A 71, 023807 (2005) for a very detailed discussion.

The main part of my PhD was to find a workaround for such noisy states of the light field by using a detector with better time resolution, which is basically one of the most inefficient detectors ever used. It works quite well for such states. However, for states showing suppressed correlations, like Fock states, the results are already pretty good using SPADs and no improvement is seen.
Therefore I would define an ideal detector as one which measures the actual photon number, and accordingly also the photon number variance and higher-order moments, correctly.

zonde said:
Modern detector efficiency calibration procedures use entangled photons. They use an unbalanced PDC source, meaning that in one channel photons are collected from wider angles, while in the other channel photons are collected over angles such that they definitely fall within the corresponding area of the first channel. The efficiency is then found as the ratio of the coincidence count rate to the singles count rate in that other channel (covering the smaller collection area).

You can do so, but that method is not used very widely. In most cases you want to get the wavelength-dependence of your detector quantum efficiency. Entangled photon sources do not help in that respect. In most cases you will use a precalibrated broadband light source of known intensity to determine your detector quantum efficiency.

zonde said:
So if we have a 100% efficient detector, the coincidence count rate should be rather close to the singles count rate. For this to happen, a 100% efficient detector should detect really all photons, regardless of whether they undergo constructive or destructive interference.
Or, if we do not allow detection of photons undergoing destructive interference, then we should conclude that 50% efficiency is the theoretical maximum for detectors.

I do not see the point. The calibration using entangled photons does not introduce any interference at all. You have your PDC crystal and two detectors. Unless you have one very small detector and place it in the Fourier plane of the emission in one arm, you should not get any interference. Even if you did, I do not get your point about photons undergoing destructive interference. I mean, the photons do not vanish if interference happens. There are more detections in zones of constructive interference and fewer in zones of destructive interference. The total number is constant.

zonde said:
P.S. I have seen the claim of some detector manufacturer they have detectors with efficiency over 50%.

Yes, there are many companies offering SPADs with more than 50% QE in some wavelength regions. However, SPADs with good QE usually have bad temporal resolution and vice versa.
 
  • #15
Cthugha said:
If one wants detectors with rather high quantum efficiency, one will usually use single-photon avalanche photodiodes (SPADs). Their main problem is that they distort the measured photon number distribution, because after every detection there is a dead time. So SPADs can only distinguish between 0 photons detected and one or more photons detected inside a timespan of interest. Usually that "more" is interpreted as one photon, and it is ensured that the mean photon count rate per unit timespan is low so that multi-photon events are rare, but for light showing large photon number noise, resulting in a large degree of correlation, the results can be distorted drastically. You can use more detectors and measure the correlations between their detections to gain some more insight, but that does not help much for thermal, superradiant or squeezed states with larger photon number noise and short coherence times. See for example G. Li et al., "Photon statistics of light fields based on single-photon-counting modules", Phys. Rev. A 71, 023807 (2005) for a very detailed discussion.

The main part of my PhD was to find a workaround for such noisy states of the light field by using a detector with better time resolution, which is basically one of the most inefficient detectors ever used. It works quite well for such states. However, for states showing suppressed correlations, like Fock states, the results are already pretty good using SPADs and no improvement is seen.
Therefore I would define an ideal detector as one which measures the actual photon number, and accordingly also the photon number variance and higher-order moments, correctly.
So your experience is mainly related to low-efficiency detection with a focus on time resolution. So if we talk about high-efficiency detectors, you are mainly guessing, would that be right?

Cthugha said:
I do not see the point. The calibration using entangled photons does not introduce any interference at all. You have your PDC crystal and two detectors. Unless you have one very small detector and place it in the Fourier plane of the emission in one arm, you should not get any interference. Even if you did, I do not get your point about photons undergoing destructive interference. I mean, the photons do not vanish if interference happens. There are more detections in zones of constructive interference and fewer in zones of destructive interference. The total number is constant.
Yes, photons do not vanish. But detections show a lower rate in zones of destructive interference.
Just the same, in zones of constructive interference photons do not pop up, yet there are more detections.

So my point is that constructive or destructive interference for photons results from higher or lower detection efficiency, not from more photons appearing in zones of constructive interference and fewer photons appearing in zones of destructive interference.
So for those changes to be observable we should detect less than 100% of the photons.

Did I explain my point satisfactorily?
 
  • #16
zonde said:
So your experience is mainly related to low-efficiency detection with a focus on time resolution. So if we talk about high-efficiency detectors, you are mainly guessing, would that be right?

Well, that depends on what is high for you. We checked conventional single photon modules also. In particular we have Perkin-Elmer single photon detection modules with a quantum efficiency of roughly 65% at 650 nm and fast modules from Id-quantique with not-so-good QE of 10-25%. So I have experience up to roughly 65%. Up to that point, there are no strange phenomena when having a look at decreased correlations (Fock states).

zonde said:
So my point is that constructive or destructive interference for photons results from higher or lower detection efficiency, not from more photons appearing in zones of constructive interference and fewer photons appearing in zones of destructive interference.
So for those changes to be observable we should detect less than 100% of the photons.

Well, it seems strange to me to assume that the performance of the detector depends on the way I build my setup and on whether I have indistinguishable or distinguishable probability amplitudes. Also, nobody has observed such phenomena for SPADs having QE beyond 50%. If interference were a matter of altering detector efficiencies instead of photon distributions, you would expect to see something strange when you switch from no interference to total destructive/constructive interference using detectors with a QE of over 50%, as the QE at the constructive-interference position cannot exceed 100%. As HOM and similar experiments are often done using detectors with QE over 50% and such results do not occur, explanations of interference experiments in terms of varying QE are pretty much ruled out.
Also, QE is mostly governed by parameters like the absorption of some element or semiconductor in some wavelength range. It seems strange to think of such parameters as depending on the way I build my setup.
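The >50% argument reduces to one line of arithmetic. A sketch with illustrative numbers (q and V here are assumptions, not measured values):

```python
q = 0.65    # baseline quantum efficiency (illustrative)
V = 0.98    # interference fringe visibility (illustrative)

# If fringes were produced by modulating the detection probability
# rather than by redistributing photons, the peak detection
# probability at constructive-interference positions would have to be
q_max = q * (1 + V)
# q_max = 1.287 > 1, impossible for a probability. The "varying QE"
# reading therefore fails for any baseline q > 1 / (1 + V) ~ 0.51.
```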
 
  • #17
Cthugha said:
Well, that depends on what is high for you. We checked conventional single photon modules also. In particular we have Perkin-Elmer single photon detection modules with a quantum efficiency of roughly 65% at 650 nm and fast modules from Id-quantique with not-so-good QE of 10-25%. So I have experience up to roughly 65%. Up to that point, there are no strange phenomena when having a look at decreased correlations (Fock states).

I am guessing that QE can be estimated by simply taking the two-fold coincidences (within some time window) divided by the hits on one detector alone?
 
  • #18
Yes, you can use a reference detector to do the coincidence counting together with the detector under test, if you use a state of the light field that is immune to loss.

That means you could, for example, take an entangled photon pair and guide one photon to each detector, or you could use a coherent beam of light and a beam splitter. However, that will only give a rough (but sufficient for most situations) estimate.

It does not work if the detection of a photon in one arm would also give you information about the photon number distribution in the other arm. That would be the case for photon number states (here the detection of one photon lowers the total photon number and therefore decreases the probability to detect a photon at the other detector) or, under certain experimental conditions, when using blackbody radiation/thermal light such as sunlight (here the photon number fluctuates strongly, and the detection of a photon at one detector makes it likely that the momentary photon number is very high, making it more probable to also detect a photon at the other detector).
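The thermal-light caveat can be made quantitative with a toy model: geometric photon-number statistics for thermal light, Poissonian for coherent light, a 50:50 split, and non-number-resolving detectors (all values illustrative). The normalized coincidence rate comes out near 1 for coherent light and near 2 for thermal light, which is exactly the bunching excess that would inflate a coincidence-based efficiency estimate:

```python
from math import exp

eta = 0.6    # detector quantum efficiency (illustrative)
mu = 0.01    # mean photon number per window, before the splitter

def E_pow_poisson(x, mu):
    """E[x^n] for Poissonian photon number n of mean mu (coherent)."""
    return exp(-mu * (1.0 - x))

def E_pow_thermal(x, mu):
    """E[x^n] for geometric photon number n of mean mu (thermal)."""
    return 1.0 / (1.0 + mu * (1.0 - x))

def normalized_coincidences(E_pow):
    # A given photon misses detector 1 with probability 1 - eta/2
    # (wrong arm, or right arm but undetected) and misses both
    # detectors with probability 1 - eta. Inclusion-exclusion gives:
    p1 = 1.0 - E_pow(1.0 - eta / 2.0, mu)        # click at detector 1
    p_none = E_pow(1.0 - eta, mu)                # no click at all
    pc = 1.0 - 2.0 * E_pow(1.0 - eta / 2.0, mu) + p_none  # both click
    return pc / (p1 * p1)

ratio_coherent = normalized_coincidences(E_pow_poisson)  # -> 1
ratio_thermal = normalized_coincidences(E_pow_thermal)   # -> ~2
```

With photon-number (Fock) input the same ratio would drop below 1, matching the anticorrelation case above.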
 
  • #19
Cthugha said:
Well, that depends on what is high for you. We checked conventional single photon modules also. In particular we have Perkin-Elmer single photon detection modules with a quantum efficiency of roughly 65% at 650 nm and fast modules from Id-quantique with not-so-good QE of 10-25%.
65% is definitely very high. 25% is quite high, but it might not be high enough if you don't look for the variation specifically.

Cthugha said:
So I have experience up to roughly 65%. Up to that point, there are no strange phenomena when having a look at decreased correlations (Fock states).
65% detection efficiency should show a decrease of visibility for negative correlations. However, for that to be the case you should have no equipment between the beam splitter and the detector that can introduce losses.
But as I understand it, it's quite common to insert bandwidth interference filters right before the detectors.
Actually, as I have had no opportunity to discuss this with someone who has real hands-on experience, I have tried to find something about the effect of bandwidth filters in descriptions of experiments.
The only thing that I have found is this single phrase:
"The visibilities for the polarization correlations are about 98.1% for |H>/|V> basis and 92.6% for |+45>/|−45> basis, without the help of narrow bandwidth interference filters."
from the paper http://arxiv.org/abs/1005.0802.
You have to take into account that the |H>/|V> basis is a polarization measurement of photons with uncorrelated interference, but measurement in the |+45>/|−45> basis is a pure interference correlation measurement (as I see it). So it means that the visibility of the interference correlation is considerably lower than that of the polarization correlation.
In this experiment the coincidence count rate for the source was indeed quite high - around 25% with respect to the singles count rate.

So I would appreciate it very much if you can tell from your own experience how common it is to place bandwidth filters right before the detectors.

Cthugha said:
Well, it seems strange to me to assume that the performance of the detector depends on the way I build my setup and on whether I have indistinguishable or distinguishable probability amplitudes. Also, nobody observed such phenomena for SPADs having QE beyond 50%. If interference was a matter of altering detector efficiencies instead of photon distributions, you would expect to see something strange when you switch from no interference to total destructive/constructive interference using detectors with QE of over 50% as the QE at the constructive interference position cannot exceed 100%. As HOM and similar experiments are often done using detectors with QE over 50% and such results do not occur, explanations of interference experiments in terms of varying QE are pretty much ruled out.
Also, QE is mostly governed by parameters like the absorption of some element or semiconductor in some wavelength range. It seems strange to think about such parameters as depending on the way I build my setup.
Well, maybe it would be easier to talk about this if we used the phrase "detection probability of an individual photon" instead of "detection efficiency", as the latter implies something more like a statistical average over detected photons.
At least in the case of the HOM effect it is a correlation in the detection probabilities of individual photons rather than some average detection efficiency.

Speaking about your doubt over interpreting interference as unbalanced photon distributions versus correlations in the detection probabilities of individual photons: as our discussion has gone through the analysis of the HOM effect, I would think it should be rather clear that it cannot be unbalanced photon distributions.
Let's review on what we have agreed and what it implies (from my perspective).

1) Interference does not take place in the beam splitter. That is one thing we have agreed upon.
2) From my perspective this implies that, after the beam splitter, the two photons take the same path half of the time and different paths half of the time.
3) But we do not observe coincidences in two detectors placed in different paths.
4) My statement in 2) agrees with the observation that we recover coincidences when we place polarizers with specific settings before the detectors. This also requires the assumption that the polarizers do not provide any feedback to the beam splitter, but that can be discussed separately.
5) From the above it follows that the detection probabilities of two photons appearing in different paths are anti-correlated.
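For reference, point 3 is usually derived from the cancellation of the two indistinguishable two-photon amplitudes. A minimal numerical sketch, assuming the common symmetric beam-splitter phase convention (transmission amplitude 1/√2, reflection amplitude i/√2):

```python
import numpy as np

# 50:50 beam splitter, symmetric phase convention:
t = 1 / np.sqrt(2)    # transmission amplitude
r = 1j / np.sqrt(2)   # reflection amplitude (picks up a 90-degree phase)

# One photon enters each input port. A coincidence (one photon in each
# output port) can happen in two indistinguishable ways:
#   both photons transmitted: amplitude t * t = +1/2
#   both photons reflected:   amplitude r * r = -1/2
amp_coincidence = t * t + r * r
p_coincidence = abs(amp_coincidence) ** 2
print(p_coincidence)  # 0.0 -- the two amplitudes cancel exactly
```

With distinguishable photons the amplitudes no longer interfere and the probabilities add instead: |t·t|² + |r·r|² = 1/2, the classical coincidence rate.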

And I want to emphasize that I view this as a correspondence between the theoretical explanation using interference and the physical situation of correlated detection probabilities of individual photons - not as an alternative explanation.

Another matter that I have not yet touched on is how the detectors can acquire the correlated state that gives rise to correlated detection probabilities of individual photons. As I understand it, part of your concern is about this particular aspect.
 
  • #20
zonde said:
A 65% detection efficiency should show a decrease in visibility for negative correlations. However, for that to be the case you should have no equipment between the beam splitter and the detector that can introduce losses.
But as I understand it, it's quite common to insert bandwidth interference filters right before the detectors.
[...]
So I would appreciate it very much if you could tell me from your own experience how common it is to place bandwidth filters right before the detectors.

It is rather common. Most SPADs are very sensitive over a broad spectral range, and even when they are placed inside dark boxes with just a small hole to let the signal in and with all the cable feedthroughs taped, the dark count rate due to the background light present even in a dark room is significant without narrow bandwidth filters inside the boxes. One might also try to use very tiny pinholes at the entrance to these boxes and focus the signals there, but I suppose that would be a nightmare to align and I am not sure whether it would work well.

How much loss these filters introduce cannot be answered in general, as there are too many different kinds of filters in use. Narrow-band thin-film interference filters can give you a loss of up to 20%, while spectrally broader glass-based filters can have losses below 1%.
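Assuming the filter loss and the detector QE combine multiplicatively (independent losses), the effective efficiency relevant to the fair-sampling argument is easy to estimate; the 65% QE and the 20%/1% filter-loss figures below just reuse the numbers mentioned in this thread:

```python
# Effective detection efficiency with a lossy filter in front of the detector,
# assuming independent (hence multiplicative) losses.
def effective_efficiency(qe: float, filter_transmission: float) -> float:
    return qe * filter_transmission

# 65% QE SPAD behind a narrow-band thin-film filter with 20% loss:
print(f"{effective_efficiency(0.65, 0.80):.2f}")   # 0.52
# ...and behind a broadband glass-based filter with 1% loss:
print(f"{effective_efficiency(0.65, 0.99):.2f}")   # 0.64
```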

zonde said:
Well, maybe it would be easier to talk about this if we used the phrase "detection probability of an individual photon" instead of "detection efficiency", as the latter phrase implies something more like a statistical average over detected photons.
At least in the case of the HOM effect it is a correlation in the detection probabilities of individual photons rather than some average detection efficiency.

Ah, ok. I see a bit clearer now. QE is indeed usually used to describe the mean percentage of incoming photons that will be detected by a detector. I see you mean something different.

zonde said:
Let's review on what we have agreed and what it implies (from my perspective).

1) Interference does not take place in the beam splitter. That is one thing we have agreed upon.

Ok.

zonde said:
2) From my perspective this implies that, after the beam splitter, the two photons take the same path half of the time and different paths half of the time.
3) But we do not observe coincidences in two detectors placed in different paths.
4) My statement in 2) agrees with the observation that we recover coincidences when we place polarizers with specific settings before the detectors. This also requires the assumption that the polarizers do not provide any feedback to the beam splitter, but that can be discussed separately.
5) From the above it follows that the detection probabilities of two photons appearing in different paths are anti-correlated.

I disagree with point 2. For me, interference not happening strictly at the BS does not necessarily require photons to take paths the way you describe. From my point of view interference does not really happen localized at some spot; it is a consequence of the wave nature of light. At every position where you have a superposition of two fields with some fixed phase relationship established, you will find some kind of interference. What one usually does in HOM experiments is to establish (or destroy) such fixed phase relationships and then see the absence (or presence) of coincidence counts. Alternatively, one picks partial components of these fields using polarizers and chooses either those which have a fixed phase relationship (showing up as excess or absent coincidence counts) or those which do not (showing up as a statistically independent number of coincidence counts).

Of course one can now argue whether this wavelike thing is the photon (resulting in the photon being rather nonlocal), whether this wavelike thing governs the probability to detect a photon (resulting in a rather nonrealistic view of photons), or whether the wavelike stuff and the photon both exist and one governs the other. The answer to this question will of course also alter the answer to your question of whether photons leave the BS in a well-defined manner.

However, I do not care too much about this kind of question. The math is the same in all three cases.
 
  • #21
Cthugha said:
I disagree with point 2. For me, interference not happening strictly at the BS does not necessarily require photons to take paths the way you describe. From my point of view interference does not really happen localized at some spot; it is a consequence of the wave nature of light. At every position where you have a superposition of two fields with some fixed phase relationship established, you will find some kind of interference. What one usually does in HOM experiments is to establish (or destroy) such fixed phase relationships and then see the absence (or presence) of coincidence counts. Alternatively, one picks partial components of these fields using polarizers and chooses either those which have a fixed phase relationship (showing up as excess or absent coincidence counts) or those which do not (showing up as a statistically independent number of coincidence counts).
There was one experiment (in the references of that Wikipedia article) where this phase relationship was established only some time after the beam splitter (I think it was a PBS in that case, and a somewhat different kind of HOM effect).
But I think that does not affect your reasoning. What I don't like about that type of reasoning is that it looks exactly like photons popping up and vanishing. And in that case you cannot meaningfully talk about fair sampling, as you don't really have a static full sample.
But of course we can agree to disagree.

Cthugha said:
Of course one can now argue whether this wavelike thing is the photon (resulting in the photon being rather nonlocal), whether this wavelike thing governs the probability to detect a photon (resulting in a rather nonrealistic view of photons), or whether the wavelike stuff and the photon both exist and one governs the other. The answer to this question will of course also alter the answer to your question of whether photons leave the BS in a well-defined manner.
Umm, the second option does not have to be nonrealistic, because as long as you have absorbed but undetected photons, they can provide the required bias for detection (or filtering) in whatever piece of equipment it happens. This is a somewhat more physical view of the ensemble interpretation.
And for the third option to be viable it would have to invoke some biased detection as well, because the photons would still have to maintain their strict trajectories.

Cthugha said:
However, I do not care too much about this kind of question. The math is the same in all three cases.
Well, it would be wise to end this discussion before we start running in circles. If that is where you stand, then thanks for your valuable comments and information.
 

1. What is the HOM dip?

The HOM dip, or Hong-Ou-Mandel dip, is a phenomenon observed in quantum optics experiments where two identical photons sent into a beam splitter interfere with each other, resulting in a dip in the coincidence detection rate between the two output ports when the photons arrive simultaneously.
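For Gaussian single-photon wavepackets the dip has a simple textbook form: the coincidence probability as a function of the relative delay τ is P(τ) = ½(1 − e^(−σ²τ²)), where σ is the photon bandwidth. A small sketch (σ = 1 and the delay values are arbitrary illustration choices):

```python
import numpy as np

def coincidence_probability(tau: float, sigma: float = 1.0) -> float:
    """HOM coincidence probability for Gaussian wavepackets with bandwidth
    sigma and relative arrival delay tau (textbook result)."""
    return 0.5 * (1.0 - np.exp(-(sigma * tau) ** 2))

for tau in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"tau = {tau:+.1f}  P_coincidence = {coincidence_probability(tau):.3f}")
# At tau = 0 the coincidence probability drops to zero (the dip);
# for |tau| >> 1/sigma it approaches 0.5, the distinguishable-photon value.
```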

2. How is the HOM dip used in fair sampling?

The HOM dip is used in fair sampling to check that all possible input states are equally represented in the output, allowing for unbiased sampling and accurate measurement of quantum states.

3. What is fair sampling and why is it important?

Fair sampling is a technique used in quantum optics experiments to ensure that all possible input states are equally represented in the output, thus avoiding any bias in measurements and allowing for accurate characterization of quantum states.

4. How are the HOM dip and fair sampling related to quantum computing?

The HOM dip and fair sampling are important techniques in the field of quantum computing, as they allow for accurate measurement and characterization of quantum states, which is crucial for the development and advancement of quantum computers.

5. Can the HOM dip and fair sampling be applied to other fields besides quantum optics and computing?

While the HOM dip and fair sampling are primarily used in quantum optics and computing, the principles behind them can be applied to other fields that involve the measurement and characterization of quantum states, such as quantum cryptography and quantum metrology.
