Time period measured in double slit experiment

In summary, the conversation discusses the standard double slit experiment and the measurement of the time [itex]T[/itex] between the emission and detection of a single photon. The photon does not take a single, well-defined path; its propagation is described as a probability wave. The distribution of measured [itex]T[/itex] values might be expected to show two distinct distributions centred on [itex]l_1/c[/itex] and [itex]l_2/c[/itex], but because of timing uncertainties they overlap and cannot be resolved separately. When more slits are added, the paths available to the photon increase, all of them longer than the original two, suggesting a longer mean [itex]T[/itex]. The guess [itex]\bar T = {\textstyle{1 \over 2}}\left( {{{\bar T}_1} + {{\bar T}_2}} \right)[/itex] is proposed but later doubted, since it ignores the decreasing amplitude of paths through more distant slits.
  • #1
m.e.t.a.
In the standard double slit experiment, what time period is measured between the emission ([itex]t=0[/itex]) and detection ([itex]t=T[/itex]) of a single photon?

I ask because clearly the photon does not take a single, well-defined path, as it would if it were a classical particle. Suppose that I set up an experiment to measure the time [itex]T[/itex] between the emission of a single photon at an emitter and its subsequent absorption at a (thin) detector. The detector is placed at some arbitrary point on the screen such that there is a non-negligible difference in the lengths of the two possible paths which the photon may take in traveling from the emitter to this detector. I have labelled these paths [itex]l_1[/itex] and [itex]l_2[/itex] on the attached diagram.

I tweak the variables of the experiment ([itex]a[/itex], [itex]\lambda[/itex], the position of the detector etc.) so as to make the path difference, [itex]\Delta l[/itex] reasonably large. The difference in time that a photon would take in traversing one path vs. the other should then be so large as to be easily measurable by a clock of finite accuracy. I then run this experiment many times. I pass photons one at a time through the double slits and, when one chances to be absorbed by the detector, I measure the time [itex]T[/itex] between emission and absorption to the greatest accuracy possible.

Now I should be clear: I realize that the photon is not a bullet, or a tiny ball. I accept (without deep understanding) the law of uncertainty. No matter what tweaks I make, the results of the experiment must turn out inconclusive. The photon did not take a simple classical straight-line path; it propagated as a probability wave, all paths of which must be taken into account (somehow, I don't know) and superposed to enable the calculation of the probability of the photon arriving at the detector's particular location. (Or something like this?)

I now plot all my experimentally measured values of [itex]T[/itex] to obtain a probability distribution (presumably ~Gaussian). What I am looking for of course is evidence of there being two distinct probability distributions, slightly overlapping, with mean values centred around:

[tex]{\bar T_1} = \frac{{{l_1}}}{c}[/tex]

[tex]{\bar T_2} = \frac{{{l_2}}}{c}[/tex]

And what I would actually find is that the two probability distributions overlap so completely that they are unable to be resolved as two separate distributions—right? (There would be an uncertainty in when the photon was emitted, an uncertainty in when it was detected, an uncertainty in the clock's ability to keep time, etc.)
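To visualise what those overlapping statistics would look like, here is a toy simulation. To be clear, this is a classical mixture model for the timing data only, not a quantum treatment, and the path lengths and timing jitter are made-up illustrative numbers: each detected photon's arrival time is drawn from a Gaussian centred on [itex]\bar T_1[/itex] or [itex]\bar T_2[/itex], with an uncertainty much larger than the path-difference delay.

```python
import numpy as np

# Toy statistics sketch (NOT a quantum model): each detection time is
# drawn from one of two Gaussians centred on T1 = l1/c and T2 = l2/c,
# with a combined timing jitter sigma from emitter, detector and clock.
# Path lengths l1, l2 and sigma are hypothetical illustrative numbers.
c = 3.0e8                      # speed of light, m/s
l1, l2 = 1.000, 1.003          # hypothetical path lengths, m
T1, T2 = l1 / c, l2 / c        # ideal traversal times, s
dT = T2 - T1                   # path-difference delay (~10 ps here)

rng = np.random.default_rng(0)
sigma = 5 * dT                 # timing jitter much larger than dT
n = 100_000
which = rng.integers(0, 2, n)  # which centre each sample is drawn from
T = np.where(which == 0, T1, T2) + rng.normal(0.0, sigma, n)

# With sigma >> dT the two distributions merge into one unimodal blob
# whose mean sits at (T1 + T2) / 2.
print(T.mean(), T.std())
```

Plotting a histogram of `T` shows a single broad peak rather than two resolvable ones, which is exactly the "completely overlapping" outcome described above.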

My question is: this probability distribution of [itex]T[/itex] must be centred around some value, [itex]{\bar T}[/itex]. Is it as simple as [itex]\bar T = {\textstyle{1 \over 2}}\left( {{{\bar T}_1} + {{\bar T}_2}} \right)[/itex]? (The "bar" doesn't seem to be showing up for me but these are supposed to be Tbar, T1bar, T2bar.)

What about when more slits are added? Not necessarily symmetrically on either side of the two initial slits: suppose that many new slits are added on one side only (say on the right-hand side), so that the light source is now pointing at the far left end of a long diffraction grating. The addition of the extra slits adds many new possible paths for a photon, but all of them are longer than the initial two, [itex]l_1[/itex] and [itex]l_2[/itex]. If I repeated the experiment now, would I measure a longer Tbar? If so, does it make any difference whether the light source is pointing at the centre of the diffraction grating or to one side?

My not very developed instincts say that yes, a longer Tbar ought to be measured, for there are now more paths available to the photon, and all of these paths are longer than the original two (longer [itex]l \Rightarrow [/itex] longer [itex]T[/itex]). Since the photon has a probability to take any of these new long paths, must not my measured value of Tbar be skewed somewhat towards [itex]+t[/itex]—in other words, will it not be larger? My instincts say this makes sense, and yet also it doesn't make sense: as the number of slits goes to infinity, the number of possible paths goes to infinity and it is as if there were no wall between the slits at all—empty space. My reasoning can't be correct because it would imply that a photon faced with a single slit travels faster than a photon faced with empty space. Where have I made mistakes, and what would the results of these experiments be in real life?



EDIT:

Whoops, a couple of corrections: the paths [itex]l_1[/itex] and [itex]l_2[/itex] extend all the way from the emitter (not shown on diagram), through the slits and terminate at the screen. From my diagram it looks like [itex]l_1[/itex] and [itex]l_2[/itex] extend just from the slits to the screen.

Also, on second thoughts I doubt that [itex] \bar T = {\textstyle{1 \over 2}}\left( {{{\bar T}_1} + {{\bar T}_2}} \right) [/itex] is really true. The greater the angular distance between a certain slit and the point at which the light source is aimed, the lower the probability ("amplitude"?) of the photon taking that path, right? My guess equation does not take angle into account. For example, it would be ridiculous to think that for, say, 100 slits one should sum the time values [itex]\left( {\frac{{{l_1}}}{c},\frac{{{l_2}}}{c}...\frac{{{l_{100}}}}{c}} \right)[/itex] and then divide by 100 to give Tbar. As more and more distant slits were added, Tbar would tend towards infinity, which is definitely wrong. I would think instead that, due to the decreasing probability of the photon passing through ever more distant slits (which are at larger and larger angular distance), Tbar would tend towards some finite value. Is this correct—does Tbar tend towards some value? If so, what value, and how do you calculate it? If not, why not?
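Here is a rough numerical sketch of that weighted-average idea. Everything in it is invented for illustration: the geometry (slit spacing, emitter and screen distances) is made up, the cos⁴ angular weight is an arbitrary stand-in for a real single-slit amplitude envelope, and interference cross-terms are ignored entirely. The point is only to show that if the weight falls off fast enough with angle, Tbar tends to a finite value instead of diverging as slits are added.

```python
import numpy as np

# Hedged toy estimate (ignores interference cross-terms): weight each
# slit's traversal time l_k / c by a squared amplitude that falls off
# with the slit's angular distance from where the source points.
# All geometry values below are hypothetical.
c = 3.0e8          # speed of light, m/s
a = 1.0e-4         # slit spacing, m (made up)
d_src = 0.5        # emitter-to-slits distance, m (made up)
d_scr = 1.0        # slits-to-screen distance, m (made up)

def T_bar(n_slits):
    """Weighted mean traversal time; slits k = 0..n_slits-1 extend to one side,
    with the emitter and detector directly opposite slit 0."""
    k = np.arange(n_slits)
    x = k * a                                     # slit positions along the grating
    l = np.hypot(d_src, x) + np.hypot(d_scr, x)   # emitter -> slit k -> detector
    theta = np.arctan(x / d_src)
    w = np.cos(theta) ** 4                        # arbitrary angular weight
    return np.sum(w * l / c) / np.sum(w)

# Each added slit is farther away (longer l), so T_bar grows with n,
# but the shrinking weights make it approach a finite limit.
print(T_bar(10), T_bar(1000), T_bar(100000))
```

With these numbers, adding ever more distant slits nudges Tbar upward by less and less, consistent with the guess that it converges rather than running off to infinity.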


EDIT 2:

I believe I was making an over-simplification when I said:
...as the number of slits goes to infinity...it is as if there were no wall between the slits at all—empty space.

(What I meant to say was: as the number of slits goes to infinity, and the distance between slits goes to zero, then this is the same as empty space. That's what I meant to say; I don't know if it is actually a true statement.)

As the number of slits [itex]N[/itex] is increased, the diffraction pattern becomes sharper and the regions of the screen in which there is ≈zero probability of detecting the photon expand. It therefore becomes more important to place the detector at a "bright spot", so let's assume that we always do. However, as the distance [itex]a[/itex] between neighbouring slits is decreased, eventually there will come a point when [itex]a<\lambda[/itex]. Is it true to say that for [itex]a<\lambda[/itex] there can be only one bright spot, at [itex]\theta=0[/itex]? If so, then is it also true that as [itex]N \to \infty [/itex] and [itex]a \to 0[/itex] the single bright spot becomes less and less "blurred" and approaches a high sharpness? (I am sure I remember this or a similar phenomenon being explained in class.) The only sensible place to position the detector would then be at [itex]\theta=0[/itex] (see diagram 2). If this is correct, then it seems to disprove my earlier guess that an infinite diffraction grating of infinitely thin slits is the same (to a photon) as empty space, because if there were no diffraction grating, and just the single slit on the left of the picture, we would see the blurry diffraction pattern characteristic of a single slit, wouldn't we?
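The [itex]a<\lambda[/itex] claim can be checked directly from the grating equation: principal maxima occur where [itex]a\sin\theta = m\lambda[/itex], so the allowed orders are those with [itex]|m| \le a/\lambda[/itex]. A quick sketch (the wavelength and spacings below are arbitrary illustrative values):

```python
import math

# Grating principal maxima occur where a * sin(theta) = m * lambda,
# so only orders with |m| <= a / lambda exist.
def principal_maxima(a, lam):
    """Return the list of diffraction orders m that satisfy |m| <= a / lam."""
    m_max = math.floor(a / lam)
    return list(range(-m_max, m_max + 1))

lam = 500e-9                          # wavelength, m (illustrative)
print(principal_maxima(2.1e-6, lam))  # a > lambda: several bright spots
print(principal_maxima(400e-9, lam))  # a < lambda: only the m = 0 spot
```

So yes: once [itex]a<\lambda[/itex], the only surviving principal maximum is the central one at [itex]\theta=0[/itex].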

This brings me to the question I was going to ask after a few replies had been posted. I am aware that the atoms in a crystalline material can be seen to behave as giant arrays of diffraction gratings. Earlier I wondered if more diffraction slits [itex]\Rightarrow[/itex] longer Tbar. Do any of my above questions about increasing Tbar etc. have relevance to refractive index and the speed of light in different materials?
 

Attachments

  • diagram1.png
  • diagram2.png
  • #2
To get interference, the coherence length must be long enough. In that case the emission time is not well-defined enough to extract which-way information from the arrival time. Conversely, if you can determine the emission time precisely enough, then you won't get interference when there is a large path-length difference between the two paths - even if you never measure the arrival time.
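For a rough feel for the numbers involved, the coherence length can be estimated as [itex]L \approx \lambda^2/\Delta\lambda[/itex]. The bandwidth figures below are order-of-magnitude typical values, not properties of any specific source:

```python
# Rough coherence-length estimate: L ~ lambda^2 / delta_lambda.
# Bandwidths below are order-of-magnitude illustrative values only.
def coherence_length(lam, dlam):
    """Estimate coherence length (m) from centre wavelength and bandwidth."""
    return lam ** 2 / dlam

sunlight = coherence_length(550e-9, 300e-9)  # broadband: ~1 micrometre
laser = coherence_length(633e-9, 1e-12)      # narrow laser line: tens of cm
print(sunlight, laser)
```

A micrometre-scale coherence length makes any appreciable path difference fatal to interference with sunlight, while a narrow laser line tolerates path differences of many centimetres.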

This makes double-slit experiments with sunlight challenging, for example, because it has a short coherence length. A laser has a longer coherence length.
 

What is the purpose of measuring the time period in a double slit experiment?

In this context the "time period" refers to the period of the light wave, which is directly tied to its wavelength ([itex]T = \lambda/c[/itex] for light in vacuum). Determining it characterises the light and underlies its behaviour in diffraction and interference.

How is the time period measured in a double slit experiment?

The time period is not measured directly; it is inferred from the interference pattern. One measures the fringe spacing Δy on a screen a distance L behind slits separated by d, recovers the wavelength from λ = dΔy/L, and then obtains the period from T = λ/c.
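A short worked example of that inference, using the standard double-slit fringe-spacing relation [itex]\Delta y = \lambda L/d[/itex] (the measurement values below are hypothetical):

```python
# Recover the wavelength, and hence the wave period, from double-slit
# fringe spacing via delta_y = lambda * L / d.
# All measurement values below are hypothetical.
d = 0.25e-3       # slit separation, m
L = 1.0           # slits-to-screen distance, m
delta_y = 2.2e-3  # measured fringe spacing, m
c = 3.0e8         # speed of light, m/s

lam = d * delta_y / L   # wavelength: 550 nm with these numbers
T = lam / c             # period of the light wave
print(lam, T)
```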

What factors can affect the time period measured in a double slit experiment?

The inferred value depends on how accurately the slit separation, the slit-to-screen distance, and the fringe spacing are measured. Obstructions or disturbances in the path of the light can blur or shift the fringe pattern and so degrade the accuracy of the inference.

Why is it important to measure the time period in a double slit experiment?

Relating the interference pattern to the period (and wavelength) of the light is crucial in understanding its properties. It also helps to validate the wave nature of light and provides evidence for the principles of diffraction and interference.

Can the time period be measured for other types of waves besides light in a double slit experiment?

Yes, the time period can be measured for other types of waves such as sound waves or water waves in a double slit experiment. However, the equations used to calculate the time period may differ depending on the type of wave being studied.
