# Time period measured in double slit experiment

## Main Question or Discussion Point

In the standard double slit experiment, what time period is measured between the emission ($t=0$) and detection ($t=T$) of a single photon?

I ask because clearly the photon does not take a single, well-defined path, as it would if it were a classical particle. Suppose that I set up an experiment to measure the time $T$ between the emission of a single photon at an emitter and its subsequent absorption at a (thin) detector. The detector is placed at some arbitrary point on the screen such that there is a non-negligible difference in the lengths of the two possible paths which the photon may take in travelling from the emitter to this detector. I have labelled these paths $l_1$ and $l_2$ on the attached diagram.

I tweak the variables of the experiment ($a$, $\lambda$, the position of the detector, etc.) so as to make the path difference $\Delta l = |l_2 - l_1|$ reasonably large. The difference in time that a photon would take to traverse one path rather than the other should then be large enough to be easily measurable by a clock of finite accuracy. I then run this experiment many times: I pass photons one at a time through the double slits and, when one chances to be absorbed by the detector, I measure the time $T$ between emission and absorption to the greatest accuracy possible.

Now I should be clear: I realise that the photon is not a bullet, or a tiny ball. I accept (without deep understanding) the uncertainty principle. No matter what tweaks I make, the results of the experiment must turn out inconclusive. The photon did not take a single, classical straight-line path; it propagated as a probability wave, all paths of which must (somehow, I don't know how) be taken into account and superposed to enable the calculation of the probability of the photon arriving at the detector's particular location. (Or something like this?)

I now plot all my experimentally measured values of $T$ to obtain a probability distribution (presumably ~Gaussian). What I am looking for of course is evidence of there being two distinct probability distributions, slightly overlapping, with mean values centred around:

$$\bar T_1 = \frac{l_1}{c}$$

$$\bar T_2 = \frac{l_2}{c}$$

And what I would actually find is that the two probability distributions overlap so completely that they cannot be resolved as two separate distributions, right? (There would be an uncertainty in when the photon was emitted, an uncertainty in when it was detected, an uncertainty in the clock's ability to keep time, etc.)
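The resolvability question above can be put into numbers with a quick sketch. All the values here (path lengths, timing jitter) are hypothetical, chosen only to illustrate the comparison between the arrival-time separation $|\bar T_2 - \bar T_1|$ and the clock's spread:

```python
c = 299_792_458.0        # speed of light, m/s
l1, l2 = 1.000, 1.003    # hypothetical path lengths, m
sigma = 50e-12           # hypothetical combined timing jitter (emission, detection, clock), s

T1, T2 = l1 / c, l2 / c
dT = abs(T2 - T1)
# A common rule of thumb: two equal-width Gaussian peaks are resolvable
# only when their separation exceeds roughly twice their width.
resolvable = dT > 2 * sigma
print(f"dT = {dT*1e12:.1f} ps, 2*sigma = {2*sigma*1e12:.1f} ps, resolvable: {resolvable}")
```

With these numbers the 10 ps path-time difference is buried under 100 ps of jitter, which is the "complete overlap" scenario described above.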

My question is: this probability distribution of $T$ must be centred around some value, $\bar T$. Is it as simple as $\bar T = \frac{1}{2}\left(\bar T_1 + \bar T_2\right)$?

What about when more slits are added, not necessarily symmetrically on either side of the two initial slits? Suppose that many new slits are added on one side only (say, on the right-hand side), so that the light source now points at the far-left end of a long diffraction grating. Adding the extra slits creates many new possible paths for a photon, but all of them are longer than the initial two, $l_1$ and $l_2$. If I repeated the experiment now, would I measure a longer $\bar T$? If so, does it make any difference whether the light source points at the centre of the diffraction grating or at one end?

My not very developed instincts say that yes, a longer $\bar T$ ought to be measured, for there are now more paths available to the photon, and all of these paths are longer than the original two (longer $l \Rightarrow$ longer $T$). Since the photon has some probability of taking any of these new, long paths, must not my measured value of $\bar T$ be skewed somewhat towards larger $t$, in other words, will it not be larger? My instincts say this makes sense, and yet it also doesn't: as the number of slits goes to infinity, the number of possible paths goes to infinity, and it is as if there were no wall between the slits at all, just empty space. My reasoning can't be correct, because it would imply that a photon faced with a single slit travels faster than a photon faced with empty space. Where have I made mistakes, and what would the results of these experiments be in real life?

EDIT:

Whoops, a couple of corrections: the paths $l_1$ and $l_2$ extend all the way from the emitter (not shown on the diagram), through the slits, and terminate at the screen. From my diagram it looks as though $l_1$ and $l_2$ extend only from the slits to the screen.

Also, on second thoughts I doubt that $\bar T = \frac{1}{2}\left(\bar T_1 + \bar T_2\right)$ is really true. The greater the angular distance between a certain slit and the point at which the light source is aimed, the lower the probability ("amplitude"?) of the photon taking that path, right? My guessed equation does not take angle into account. For example, it would be ridiculous to think that for, say, 100 slits one should sum the time values $\left(\frac{l_1}{c}, \frac{l_2}{c}, \ldots, \frac{l_{100}}{c}\right)$ and then divide by 100 to give $\bar T$. As more and more distant slits were added, $\bar T$ would tend towards infinity, which is definitely wrong. I would think instead that, due to the decreasing probability of the photon passing through ever more distant slits (which sit at larger and larger angular distance), $\bar T$ would tend towards some finite value. Is this correct: does $\bar T$ tend towards some value? If so, what value, and how do you calculate it? If not, why not?
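The weighted-average idea in this edit can be sketched numerically. This is only a toy model of the guess, not a proper quantum calculation: it treats the photon as classically taking one path via each slit with probability $p_i$, and (as a stand-in for the angular fall-off of the amplitude) takes $p_i$ from a single-slit $\mathrm{sinc}^2$ envelope. All the numbers (wavelength, slit width, spacing, geometry) are hypothetical:

```python
import numpy as np

c = 299_792_458.0   # m/s
lam = 500e-9        # hypothetical wavelength, m
w = 5e-6            # hypothetical slit width, m (sets the sinc^2 envelope)
a = 1e-3            # hypothetical slit spacing, m
D = 1.0             # emitter-to-grating and grating-to-detector distance, m
n = 100             # number of slits, all on one side of the source axis

x = np.arange(n) * a                 # transverse slit positions
l = 2 * np.sqrt(D**2 + x**2)         # emitter -> slit -> detector path length via each slit
theta = np.arctan(x / D)             # angle of each slit as seen from the axis
p = np.sinc(w * np.sin(theta) / lam) ** 2   # np.sinc(t) = sin(pi*t)/(pi*t)
p /= p.sum()                         # normalise to a probability distribution

T_equal = np.mean(l) / c             # the "divide by 100" average rejected above
T_weighted = np.sum(p * l) / c       # envelope-weighted average
print(T_equal, T_weighted)
```

The weighted mean comes out smaller than the equal-weight mean, because the envelope suppresses the distant (longer) paths, which is exactly the finite-limit behaviour conjectured above.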

EDIT 2:

I believe I was making an over-simplification when I said:

> ...as the number of slits goes to infinity...it is as if there were no wall between the slits at all—empty space.

(What I meant to say was: as the number of slits goes to infinity *and* the distance between slits goes to zero, then this is the same as empty space. That's what I meant to say; I don't know whether it is actually a true statement.)

As the number of slits $N$ is increased, the diffraction pattern becomes sharper, and the regions of the screen in which there is ≈zero probability of detecting the photon expand. It therefore becomes more important to place the detector at a "bright spot", so let's assume that we always do so.

However, as the distance $a$ between neighbouring slits is decreased, eventually there will come a point when $a<\lambda$. Is it true to say that for $a<\lambda$ there can be only one bright spot, at $\theta=0$? If so, is it also true that as $N \to \infty$ and $a \to 0$ the single bright spot becomes less and less "blurred" and approaches perfect sharpness? (I am sure I remember this or a similar phenomenon being explained in class.) The only sensible place to position the detector would then be at $\theta=0$ (see diagram 2). If this is correct, then it seems to disprove my earlier guess that an infinite diffraction grating of infinitely thin slits is the same (to a photon) as empty space: if there were no diffraction grating, just the single slit on the left of the picture, we would see the blurry diffraction pattern characteristic of a single slit, wouldn't we?
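The claim about $a<\lambda$ follows directly from the grating equation $a\sin\theta = m\lambda$: since $|\sin\theta| \le 1$, only orders with $|m| \le a/\lambda$ can reach the screen. A minimal sketch, using a hypothetical wavelength:

```python
import math

def principal_maxima(a, lam):
    """Diffraction orders m allowed by the grating equation
    a*sin(theta) = m*lam; since |sin(theta)| <= 1, |m| <= a/lam."""
    m_max = math.floor(a / lam)
    return list(range(-m_max, m_max + 1))

lam = 500e-9                           # hypothetical wavelength, m
print(principal_maxima(2.2e-6, lam))   # a = 4.4*lam -> orders -4..4
print(principal_maxima(400e-9, lam))   # a < lam -> only the m = 0 maximum survives
```

So for any spacing $a<\lambda$ the list collapses to the single order $m=0$, i.e. the lone bright spot at $\theta=0$ described above.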

This brings me to the question I was going to ask after a few replies had been posted. I am aware that the atoms in a crystalline material can behave as a giant three-dimensional diffraction grating. Earlier I wondered whether more diffraction slits $\Rightarrow$ longer $\bar T$. Do any of my questions above about an increasing $\bar T$ have any relevance to refractive index and the speed of light in different materials?

#### Attachments

• Diagram 1 (emitter, double slit, paths $l_1$ and $l_2$)
• Diagram 2 (diffraction grating, detector at $\theta=0$)