For this question I am considering a slit diffraction experiment set up as follows:
{Monochromatic source} ------> {Single slit} ------> {Diffraction grating with N slits} ------> {Screen with small movable detector}
The monochromatic light source emits photons one at a time. The principal interference maximum occurs at position x = 0 on the screen. The detector is placed at some point x on the screen where the probability of detecting a photon is non-zero (and x \ne 0). The detector registers every photon that arrives between positions x and x + \Delta x.
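(For concreteness, I take the detection probability at x to be governed by the usual Fraunhofer N-slit intensity pattern. Here d is the slit spacing, a the slit width, \lambda the wavelength, L the grating-screen distance and \theta \approx x/L the angle to the detector; these are just my labels for the setup, nothing special is assumed about their values:

I_N(\theta) \propto \left[ \frac{\sin(N\phi/2)}{\sin(\phi/2)} \right]^2 \left[ \frac{\sin(\beta/2)}{\beta/2} \right]^2 , \qquad \phi = \frac{2\pi d \sin\theta}{\lambda}, \quad \beta = \frac{2\pi a \sin\theta}{\lambda},

so for small \Delta x the probability that a given photon is logged at all is roughly proportional to I_N(\theta)\,\Delta x.)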
Photons are emitted one by one at a slow rate. Every time a photon is emitted, a stopwatch is started. If the photon is detected at the detector, the stopwatch is stopped and that time measurement, T, is logged. If the photon is not detected, no measurement is recorded and the experiment is run again with a new photon.
The experiment is repeated many times. Finally, a probability distribution is plotted: {T} vs. {probability of T}. (I presume that this probability distribution will be approximately Gaussian in shape, although its exact shape is not important here.) This probability distribution will be centred around some mean value of T, {T_{mean}}.
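To make the bookkeeping concrete, here is a minimal Monte Carlo sketch of the protocol above. The flight-time model in it (a fixed mean T0 plus Gaussian jitter) and all numerical parameters (d, a, \lambda, L, x, \Delta x, the number of emissions) are placeholders of my own choosing, purely to show how T_{mean} and the fraction of logged photons would be tallied; it is not meant to prejudge whether or how T_{mean} actually depends on N.

import numpy as np

rng = np.random.default_rng(0)

def n_slit_intensity(theta, N, d, a, lam):
    # Standard Fraunhofer pattern for N slits of width a and spacing d (unnormalised).
    phi = 2.0 * np.pi * d * np.sin(theta) / lam
    beta = 2.0 * np.pi * a * np.sin(theta) / lam
    with np.errstate(divide="ignore", invalid="ignore"):
        grating = np.where(np.isclose(np.sin(phi / 2), 0.0),
                           float(N) ** 2,
                           (np.sin(N * phi / 2) / np.sin(phi / 2)) ** 2)
    envelope = np.sinc(beta / (2.0 * np.pi)) ** 2   # np.sinc(u) = sin(pi*u)/(pi*u)
    return grating * envelope

def run_experiment(N, x, dx, L=1.0, d=10e-6, a=2e-6, lam=500e-9,
                   n_photons=200_000, T0=1.0 / 3.0e8, jitter=1e-12):
    # Emit photons one at a time; log a time T only for photons landing in [x, x + dx).
    xs = np.linspace(-0.05, 0.05, 20_001)           # candidate screen positions
    p = n_slit_intensity(np.arctan(xs / L), N, d, a, lam)
    p = p / p.sum()
    hits = rng.choice(xs, size=n_photons, p=p)      # sampled arrival positions
    accepted = (hits >= x) & (hits < x + dx)
    # Placeholder flight-time model: fixed mean T0 plus Gaussian jitter (an assumption).
    T = T0 + jitter * rng.standard_normal(accepted.sum())
    T_mean = T.mean() if accepted.any() else float("nan")
    return T_mean, accepted.mean()                  # (mean logged time, acceptance fraction)

for N in (1, 2, 1000):                              # N -> infinity approximated by N = 1000
    T_mean, frac = run_experiment(N, x=0.004, dx=0.0005)
    print(f"N = {N:4d}: T_mean ~ {T_mean:.3e} s, fraction of photons logged = {frac:.5f}")

With these toy parameters the chosen x lies off any principal maximum, so the acceptance fraction falls sharply as N grows and many emissions are needed per logged T; only the tallying of T_{mean} itself is what I care about here.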
Suppose that the experiment is run three times with different numbers of slits:
(i) N=1
(ii) N=2
(iii) N \to \infty
My question: will {T_{mean}} differ between these three cases? And if so, how?
(This is a stripped-down version of a longer question I posted a few days ago, https://www.physicsforums.com/showthread.php?p=2695689#post2695689.)