# Does Destructive Interference Affect Starlight in a Double Slit Experiment?

Devin-M
In this MIT course video, the lecturer shows on an optical table that in certain cases, when destructive interference occurs, the light goes back to the laser (fast forward to 5:22 in the video below)

My question is when I do a double slit experiment with starlight by putting 2 slits in front of my objective lens as shown below, is the light in the areas showing destructive interference being reflected back to the star to maintain conservation of energy?

*H-Alpha narrowband filter (656nm +/- 3nm was used in front of the sensor)

"Where does the energy go" implies it was somewhere else and somehow moved. That is usually not a good way to think about multiple configurations.

That said, if you want to ask where the energy goes in the dark bands, you should equally ask where it comes from in the bright bands.

That is usually not a good way to think about multiple configurations.
To expand on that, the reason it is not a good idea is that the energy may not be the same between different configurations. Conservation of energy implies that if there is no work/heat exchange with the environment then the energy of a system remains the same over time. It does not imply that different systems have the same energy.

If I let more light in (2 slits instead of 1), some areas are darker than before… So my question is where does the additional energy get deposited?

If we compare 1 and 4, below the primary (central) band in 4, there is a dark band that doesn’t appear in 1.

@Devin-M, the interferometer uses a beam-splitter and mirrors. As a result, some light is reflected back towards the source.

This is a very different setup from a simple 2-slit setup, which contains no beam-splitter or mirrors.

For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
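This redistribution can be checked numerically. Below is a quick Python sketch (with illustrative slit dimensions, not the experiment's actual geometry) that integrates the idealized Fraunhofer patterns for one and two slits:

```python
# Illustrative check that ideal double-slit fringes redistribute energy
# rather than destroy it. Slit width, separation, and angular range are
# assumed values, not the thread's actual geometry.
import numpy as np

wavelength = 656e-9                         # H-alpha, as used in the thread
a = 20e-6                                   # assumed slit width (m)
d = 100e-6                                  # assumed slit separation (m)
theta = np.linspace(-0.2, 0.2, 200001)      # screen angles (rad), uniform grid

beta = np.pi * a * np.sin(theta) / wavelength
gamma = np.pi * d * np.sin(theta) / wavelength

single = np.sinc(beta / np.pi) ** 2         # one-slit Fraunhofer pattern
double = 4 * single * np.cos(gamma) ** 2    # two-slit pattern (same slit width)

# On a uniform grid the sums are proportional to the integrated energy.
ratio = double.sum() / single.sum()
print(round(ratio, 2))                      # ~2: second slit doubles total energy
```

The fringe factor 4cos² averages to 2 across many fringes, which is why the integrated double-slit energy comes out at twice the single-slit energy even though individual dark bands carry almost nothing.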

Steve4Physics said:
For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
If that’s the case, then if I add up the red values of a vertical strip from 1) and then from 4), I would expect the total from 4) to be approximately double the total from 1), since the aperture size was doubled between measurements.

Devin-M said:
If that’s the case, then if I add up the red values of a vertical strip from 1) and then from 4), I would expect the total from 4) to be approximately double the total from 1)
What are these 'red values' of which you speak? For example, are they proportional to intensity?

Steve4Physics said:
What are these 'red values' of which you speak? For example, are they proportional to intensity?

Yes, unless a pixel is saturated (ie reaches the maximum value) the red values should be directly proportional to intensity.

Devin-M said:
Yes, unless a pixel is saturated (ie reaches the maximum value) the red values should be directly proportional to intensity.
That sounds interesting. Presumably the effects of other parameters (e.g. exposure times, slit areas) will be taken into account.

Of course, if you wanted to perform the experiment in a more controlled/reproducible manner, you could use your own light source, rather than Polaris!

Slits were the same size and the exposure length was the same: each exposure was 5 minutes. We just have double the area with 2 slits. So if the photons are spatially redistributed with the double slit, I expect twice the total red value with the double slit.

If destructive interference is preventing detection of some of the photons, on the other hand, I’d expect less than twice as much total red value with the double slit.

On the left is a portion of the single slit interference pattern and on the right is the comparable portion of the double slit interference pattern (no pixels in these comparable portions are saturated)...

Steve4Physics said:
For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
The results are:

Single Slit Red Value Sum: 7764
Double Slit Red Value Sum: 9548

There is nowhere near twice as much red value when adding the second slit. So where did the missing energy go? With the double slit, there was twice as much aperture size on the same target for the same amount of time.

Now when I compare those results to the test I did previously with non-parallel slits (1 horizontal & 1 diagonal slit) and I add up the red values, I see almost no signs of destructive interference:

Red Values Sums:

Single Slit: 7764
Double Slit (Parallel): 9548 (1.22x higher than single slit)
Horizontal + Diagonal Slit: 15017 (1.93x higher than single slit)

In other words the horizontal and diagonal slit deposit significantly more energy in the fringes on the sensor than the parallel double slits.

Dale said:
"Where does the energy go" implies it was somewhere else and somehow moved. That is usually not a good way to think about multiple configurations.

That said, if you want to ask where the energy goes in the dark bands, you should equally ask where it comes from in the bright bands.

Dale said:
To expand on that, the reason it is not a good idea is that the energy may not be the same between different configurations. Conservation of energy implies that if there is no work/heat exchange with the environment then the energy of a system remains the same over time. It does not imply that different systems have the same energy.

Essentially the question boils down to why you get less energy in the fringes with the double parallel slit than the horizontal + diagonal slits. With horizontal & diagonal slits, I get twice as much energy as expected in the fringes (compared to single), but with the double parallel slits, about 39% of the expected energy is missing.

Devin-M said:
Essentially the question boils down to why you get less energy in the fringes with the double parallel slit than the horizontal + diagonal slits. With horizontal & diagonal slits, I get twice as much energy as expected in the fringes (compared to single), but with the double parallel slits, about 39% of the expected energy is missing.
Different systems have different energies. The question isn't why is there missing energy. There isn't any missing energy, there is just different energies for different systems.

The question is why do you personally expect the energy of those two different systems to be the same?

It’s unexpected as it implies that if I do 5 minutes through both parallel slits at the same time, then compare with 5 minutes through one parallel slit only followed by 5 minutes through the other parallel slit, I’ll get ~62% more energy on the sensor in the latter case.

It is not unexpected. They are different setups and the energy with different setups is not expected to be the same.

If it is unexpected to you personally then you personally need to explain your expectation in detail. There is some reasoning step that you are doing wrong in your personal expectation, but if you don't describe your personal reasoning behind your expectation then we cannot help you.

Dale said:
you personally need to explain your expectation in detail.
This. Numbers will help.

I was expecting that if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, then if I measure through both slits simultaneously for X time, the # of measured photons would be 2N. But based on the previous unexpected results, when I do that test I now expect 1.22N, but I can’t explain the discrepancy.

I'm sorry. I meant rationale for the numbers will help. Equations, assumptions, etc., that sort of thing.

I would not add photons to the mix.

Yes, sorry. What I’ve been measuring and calling “Red Values” and “Red Value Sums” are more technically referred to as “ADUs,” or analog-to-digital units, since I am measuring them from the raw sensor data.

@collinsmark can explain better than me, but basically, since I used an H-Alpha narrowband filter in front of the sensor (656nm +/- 3nm), the sensor is only receiving one wavelength of incident light, so the ADUs can be multiplied by a constant to get the photon count.
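As a rough Python sketch of that conversion (the gain and QE values below are placeholders, not measured figures for the D800):

```python
# Sketch of the ADU -> photon conversion described above. Under a
# narrowband filter the sensor sees essentially one wavelength, so a
# single gain/QE pair applies. The numbers below are placeholders,
# not measured values for the Nikon D800.

def adu_to_photons(adu_sum, gain_e_per_adu, qe_e_per_photon):
    # Each ADU represents gain_e_per_adu electrons, and each photon
    # yields qe_e_per_photon electrons on average, so:
    #   photons = ADUs * gain / QE
    return adu_sum * gain_e_per_adu / qe_e_per_photon

# Example with assumed values (gain 0.8 e-/ADU, QE 0.6 at 656 nm),
# applied to the single-slit ADU sum reported in the thread:
print(adu_to_photons(7764, 0.8, 0.6))
```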

collinsmark said:
I'll use JWST my first example, and my own camera as a second example:

JWST:

First, you can find the specs here:
https://jwst-docs.stsci.edu/jwst-ne...detector-overview/nircam-detector-performance

Note the "Gain (e-/ADU)" and the "Quantum efficiency (QE)" in Table 1.

As you can see, JWST uses two different gain settings, one for short wavelengths and another for long.

The Quantum Efficiency (QE) is naturally dependent on the wavelength.

So let's say that we're presently using JWST's F405M filter, which is a long wavelength filter with a passband around 4 microns.

Going back to Table 1, we see that:

Gain: ~1.8 [e-/ADU]
QE: 90% [e-/photon]

We can convert that to photons per ADU by dividing the two:

1.8 [e-/ADU] ÷ 0.90 [e-/photon] ≈ 2 [photons/ADU]

So when JWST is using its F405M filter, it takes on average about 2 photons to trigger an ADU increment.

My Deep Sky Camera: ZWO ASI6200MM-Pro:

You can find the specs here:
https://astronomy-imaging-camera.com/product/asi6200mm-pro-mono

So, for example, if I'm using my Hydrogen-alpha (Hα) filter, which has a wavelength of 656.3 nm, we can see that the QE is about 60%.

But if I set my camera gain to 0, corresponding to around 0.8 [e-/ADU], that gives

0.8 [e-/ADU] ÷ 0.60 [e-/photon] ≈ 1.33 [photons/ADU]

So with this gain setting and when using the Hα filter, it takes on average about 1.33 photons to increment an ADU.

(I like to set my camera gain to something higher for better read noise, but I'll just use a gain setting of 0 here for the example.)

Had I instead set the gain setting to around 25, which corresponds to somewhere close to 0.6 [e-/ADU], that would correspond to roughly 1 photon/ADU on average.
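The arithmetic in these examples fits in a few lines of Python. Note the JWST long-wavelength gain of ~1.8 e-/ADU is an inferred assumption, backed out from the "about 2 photons per ADU" figure at 90% QE:

```python
# Photons-per-ADU for the example settings above. The 1.8 e-/ADU JWST
# long-wavelength gain is inferred from "about 2 photons per ADU" at
# 90% QE; the other figures are the approximate ones quoted in the post.

def photons_per_adu(gain_e_per_adu, qe_e_per_photon):
    return gain_e_per_adu / qe_e_per_photon

print(round(photons_per_adu(1.8, 0.9), 2))   # JWST F405M: ~2 photons/ADU
print(round(photons_per_adu(0.8, 0.6), 2))   # ASI6200 at gain 0: ~1.33
print(round(photons_per_adu(0.6, 0.6), 2))   # ASI6200 at gain ~25: ~1
```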

Devin-M said:
Yes, sorry. What I’ve been measuring and calling “Red Values” and “Red Value Sums” are more technically referred to as “ADUs,” or analog-to-digital units, since I am measuring them from the raw sensor data.

@collinsmark can explain better than me, but basically, since I used an H-Alpha narrowband filter in front of the sensor (656nm +/- 3nm), the sensor is only receiving one wavelength of incident light, so the ADUs can be multiplied by a constant to get the photon count.
Ok, this is a good description of what you are doing, but it tells us nothing about why you personally expect double the photons. We need you to explain your reasoning, not your setup. Your setup is great and it shows that your conclusion is wrong, but gives us no insight into the source of your confusion.

“I was expecting if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, that if I measure through both slits simultaneously for X time, that the # of measured photons would be 2N because _________”

Dale said:

“I was expecting if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, that if I measure through both slits simultaneously for X time, that the # of measured photons would be 2N because _________”

...because:

"The amount of light captured by a lens is proportional to the area of the aperture, equal to: Area = π(f/2N)²"

https://en.wikipedia.org/wiki/Aperture

...and the double slit has double the aperture.

I performed a new test, this time where I measured each slit individually, and then both at the same time:

Each test was targeting the star Polaris with a 5 minute exposure, 600mm, f/9, 6400iso, with a H-Alpha narrowband filter (656nm) in front of the sensor on a Nikon D800 DSLR, and the camera was mounted on an equatorial mount to compensate for the Earth's rotation.

Devin-M said:
I performed a new test, this time where I measured each slit individually, and then both at the same time:

And how did you perform the background correction? Did you take a picture with all slits closed as the background and then subtract this data from your other pictures pixel by pixel, or did you subtract some flat background?

The pictures are so grainy I have a hard time believing you can conclude anything meaningful from them.
We all know that the pattern for the single-slit and the double-slit at the same place cannot be the same, so I would expect a different light intensity there, but not necessarily double the intensity of a single-slit.
Also, why don't you take the start of the measurement from the center (as it seems to me the double-slit measurements have the center much brighter than the single-slit measurements), as that can account for the missing light.

Devin-M said:
"The amount of light captured by a lens is proportional to the area of the aperture
OK, so here is the error. That rule is based on the concepts of ray optics. Ray optics in turn is based on the assumption that diffraction effects are negligible. That is not the case here. So you have to calculate it using different methods. Specifically you have to account for diffraction and interference. The approximations of ray optics don’t hold, so neither do the conclusions of ray optics.

Cthugha said:
And how did you perform the background correction? Did you take a picture with all slits closed as the background and then subtracted this data from your other pictures pixel by pixel or did you subtract some flat background?
Devin-M said:
I've now (at your suggestion) looked at how much noise is in equal area with no signal...

Noise ADUs in area w/ no signal: 2634
Slit 1 ADUs w/ noise subtracted: 4531
Slit 2 ADUs w/ noise subtracted: 4442
Double Slit ADUs w/ noise subtracted: 6705 (1.47x higher than slit 1)

Noise Sample:

@Devin-M are you looking at the whole picture, or only a part? I'm not sure you can only look at the diffraction spikes and get a meaningful result. I'd suggest getting a good average background for each frame and then subtracting it from every pixel in the image before summing the ADUs together. Basically, measure the total light that hit the sensor, not just a part of it. Note that this might mean that you need to take more images and be very careful not to saturate any pixel.
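A minimal sketch of that workflow, using synthetic arrays in place of real frames (the region treated as signal-free is an assumption of the sketch):

```python
# Minimal sketch of the suggested background handling: estimate a
# per-frame background level from a signal-free region, subtract it from
# every pixel, then sum. Synthetic arrays stand in for real frames.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.poisson(lam=10.0, size=(100, 100)).astype(float)  # fake sky glow + noise
frame[40:60, 40:60] += 50.0                                   # fake fringe signal

background = frame[:20, :20].mean()     # assumed signal-free corner of the frame
net_adu = (frame - background).sum()    # subtract from every pixel, then sum

# The injected signal totals 50 * 400 = 20000 ADU; the background-
# subtracted sum should recover roughly that, up to noise.
print(net_adu)
```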

Devin-M said:
@collinsmark can explain better than me but basically, [...]

@Devin-M, Sorry, I haven't been following this thread until now. But yes, I agree that ideally, if things are kept simple, I would expect the total ADU count of the double-slit should be pretty close to twice that of the single slit, all else being the same, if you were to add up all the (red) pixel values across a single line in each pattern. That's of course after dark frame subtraction. It's also assuming that your Nikon D800 isn't trying to modify the results in-camera.

Here are a few tips to help make sure your camera is not inadvertently messing up the results:

• If possible, store the images in RAW (NEF) format. When converting the Raw (NEF) file to some easily readable format, don't apply a white-balance, or if you have to, apply a neutral white balance (e.g., direct sunlight).
• If your workflow won't work with storing the data in RAW (NEF) format, then store the files in TIFF format. But don't store data as JPEGs.
• Do not use "Auto" white balance. Set it to something like "Direct sunlight," but the important thing is to make sure it's not set to auto.
• The Nikon D800 has selectable "Image Enhancements." Make sure your "Set Picture Control" option is set to "Neutral." See page 163 of your Nikon D800 manual for details.
• The Nikon D800 supports something called "Active D-Lighting." Make sure this is turned off. See page 174 of the manual for details.
• The Nikon D800 supports something called "Auto ISO Sensitivity Control." Make sure this is turned off. See page 111 of the manual for details.
• For that matter, try to make sure nothing is "auto," if you can think of anything else not mentioned here. Everything should be set to manual.

There's a lot of variables in what you're trying to do. A lot of stuff can go wrong, causing misleading results. So the above advice might serve as just a start.

collinsmark said:
• If possible, store the images in RAW (NEF) format. When converting the Raw (NEF) file to some easily readable format, don't apply a white-balance, or if you have to, apply a neutral white balance (e.g., direct sunlight).
• If your workflow won't work with storing the data in RAW (NEF) format, then store the files in TIFF format. But don't store data as JPEGs.
• Do not use "Auto" white balance. Set it to something like "Direct sunlight," but the important thing is to make sure it's not set to auto.
• The Nikon D800 has selectable "Image Enhancements." Make sure your "Set Picture Control" option is set to "Neutral." See page 163 of your Nikon D800 manual for details.
• The Nikon D800 supports something called "Active D-Lighting." Make sure this is turned off. See page 174 of the manual for details.
• The Nikon D800 supports something called "Auto ISO Sensitivity Control." Make sure this is turned off. See page 111 of the manual for details.
• For that matter, try to make sure nothing is "auto," if you can think of anything else not mentioned here. Everything should be set to manual.
I shot in RAW NEF format, white balance while shooting was set to sunlight, picture control was on standard instead of neutral (I'll retest), Active D-Lighting was off, auto ISO was off, everything was manual, all in-camera noise reduction was turned off, and the RAWs were converted to 16-bit TIFFs with all sharpening/noise reduction off and without any other modifications.

PS… according to this, picture control doesn’t affect the RAW files, only the JPGs or the raw file conversion if you use the Nikon raw conversion software, which I didn’t do… I used Adobe Lightroom: https://www.dpreview.com/forums/thread/3600835

Devin-M said:
I was expecting that if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, then if I measure through both slits simultaneously for X time, the # of measured photons would be 2N. But based on the previous unexpected results, when I do that test I now expect 1.22N, but I can’t explain the discrepancy.
A classical probability argument would agree with you. Let's assume that with one slit open, the probability that a photon passes through the slit is ##p##. With two slits open the probability must be ##2p##. This probability translates to total intensity on the detector screen.

With that explanation, however, the double-slit experiment with one photon at a time (or one electron at a time) would yield no interference.

The QM explanation for double-slit interference calculates probabilities (hence intensities) as the square of intermediate probability amplitudes. In particular, the classical proposition regarding double the probability/number of photons fails in the case of the double-slit. The number of photons that are transmitted by a double-slit is not necessarily twice the number transmitted by each slit in isolation.

That is, in a way, the essence of QM in a slightly unfamiliar context.

The aperture formula, I suggest, is applicable only where the apertures are large enough and far enough apart for QM effects to be negligible.
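The amplitudes-then-square rule can be illustrated with a toy one-dimensional model (assumed symmetric geometry, arbitrary units, not a model of the actual apparatus):

```python
# Toy contrast between "probabilities add" (classical) and "amplitudes
# add, then square" (QM) for two slits. The symmetric linear phases are
# an assumed toy geometry, not a model of the actual experiment.
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)       # screen position (arbitrary units)
phase1 = 2 * np.pi * 0.8 * x           # assumed path phase from slit 1
phase2 = -2 * np.pi * 0.8 * x          # assumed path phase from slit 2

amp1 = np.exp(1j * phase1) / np.sqrt(2)
amp2 = np.exp(1j * phase2) / np.sqrt(2)

classical = np.abs(amp1) ** 2 + np.abs(amp2) ** 2   # flat: no fringes
quantum = np.abs(amp1 + amp2) ** 2                  # cos^2 fringes, zeros at nodes

# Averaged over many fringes the two agree; pointwise they do not.
print(round(classical.mean(), 2), round(quantum.mean(), 2))
```

In this idealized toy the screen-averaged intensities agree; the point of the posts above is that for real slits the transmitted totals themselves need not follow the classical 2p rule.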

@Devin-M what we are talking about here is the transmission coefficient for the double-slit relative to twice the transmission coefficient for the single slit. Your data might be quite instructive in this respect. You might try researching this online. E.g.

https://www.nature.com/articles/s41598-020-76512-5

The answer to your original question is that in the double-slit experiment almost as much light gets absorbed by the barrier as for a single slit!

The non-parallel slits seem to have a higher transmission coefficient than the parallel slits in the fringes. With the non-parallel slits you get twice the area of fringes, each with the same intensity as a single slit; with the parallel slits you get the same area of fringes as a single slit, and those fringes carry less than twice the energy of a single individual slit.
