B Does Destructive Interference Affect Starlight in a Double Slit Experiment?

Devin-M
In this MIT course video, the lecturer shows on an optical table that in certain cases, when destructive interference occurs, the light goes back to the laser (fast forward to 5:22 in the video below).



My question is: when I do a double-slit experiment with starlight by putting 2 slits in front of my objective lens as shown below, is the light in the areas showing destructive interference being reflected back to the star to maintain conservation of energy?

*An H-Alpha narrowband filter (656 nm +/- 3 nm) was used in front of the sensor.
 
Last edited:
"Where does the energy go" implies it was somewhere else and somehow moved/ That is usually not a good way to think about multiple configurations.

That said, if you want to ask where the energy goes in the dark bands, you should also ask where it comes from in the bright bands.
 
  • Like
Likes Couchyam and vanhees71
Vanadium 50 said:
That is usually not a good way to think about multiple configurations.
To expand on that, the reason it is not a good idea is that the energy may not be the same between different configurations. Conservation of energy implies that if there is no work/heat exchange with the environment then the energy of a system remains the same over time. It does not imply that different systems have the same energy.
 
  • Like
Likes Vanadium 50 and vanhees71
If I let more light in (2 slits instead of 1), some areas are darker than before… So my question is where does the additional energy get deposited?
 
If we compare 1 and 4, below the primary (central) band in 4, there is a dark band that doesn’t appear in 1.
 
@Devin-M, the interferometer uses a beam-splitter and mirrors. As a result, some light is reflected back towards the source.

This is a very different setup from a simple 2-slit setup, which contains no beam-splitter or mirrors.

For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
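As a rough sketch of that balance (idealized two-slit Fraunhofer fringes, ignoring the single-slit envelope over a few fringes), the intensity goes as
$$I(\theta) = 4I_0\cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right),$$
so the maxima reach four times the single-slit intensity, the minima drop to zero, and the average over many fringes is ##\langle I\rangle = 2I_0##: what disappears from the dark fringes reappears in the bright ones.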
 
Steve4Physics said:
For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
If that's the case, then if I add up the red values of a vertical strip from 1) and then from 4), I would expect the total from 4) to be approximately double the total from 1), since the aperture size was doubled between measurements.

RAW Files: https://u.pcloud.link/publink/show?code=kZcjE2VZR1eBfOCgGAYfuGOv9EMgPH3KIR07
 
Devin-M said:
If that's the case, then if I add up the red values of a vertical strip from 1) and then from 4), I would expect the total from 4) to be approximately double the total from 1)
What are these 'red values' of which you speak? For example, are they proportional to intensity?
 
  • #10
Devin-M said:
Yes, unless a pixel is saturated (i.e. reaches the maximum value), the red values should be directly proportional to intensity.
That sounds interesting. Presumably the effects of other parameters (e.g. exposure times, slit areas) will be taken into account.

Of course, if you wanted to perform the experiment in a more controlled/reproducible manner, you could use your own light source, rather than Polaris!
 
  • #11
The slits were the same size and the exposure length was the same: each exposure was 5 minutes. We just have double the open area with 2 slits. So if the photons are spatially redistributed with the double slit, I expect twice the total red value with the double slit.
 
  • #12
If destructive interference is preventing detection of some of the photons, on the other hand, I’d expect less than twice as much total red value with the double slit.
 
  • #13
On the left is a portion of the single slit interference pattern, and on the right is the comparable portion of the double slit interference pattern (no pixels in these comparable portions are saturated).
 
  • #14
Steve4Physics said:
For the simple 2 (or more)-slit setup, energy is simply spatially redistributed. Energy is reduced in regions of destructive interference; this is exactly balanced by increases in energy in regions of constructive interference. No energy is reflected back to the source in the simple 2-slit setup.
The results are:

Single Slit Red Value Sum: 7764
Double Slit Red Value Sum: 9548


There is nowhere near twice as much red value when adding the second slit. So where did the missing energy go? With the double slit, there was twice the aperture area on the same target for the same amount of time.
 
Last edited:
  • #15
Now when I compare those results to the test I did previously with non-parallel slits (1 horizontal & 1 diagonal slit) and I add up the red values, I see almost no signs of destructive interference:

Red Values Sums:

Single Slit: 7764
Double Slit (Parallel): 9548 (1.22x higher than single slit)
Horizontal + Diagonal Slit: 15017 (1.93x higher than single slit)

In other words the horizontal and diagonal slit deposit significantly more energy in the fringes on the sensor than the parallel double slits.

 
Last edited:
  • #16
Vanadium 50 said:
"Where does the energy go" implies it was somewhere else and somehow moved/ That is usually not a good way to think about multiple configurations.

That said, if you want to ask where the energy goes in the dark bands, you should also ask where it comes from in the bright bands.

Dale said:
To expand on that, the reason it is not a good idea is that the energy may not be the same between different configurations. Conservation of energy implies that if there is no work/heat exchange with the environment then the energy of a system remains the same over time. It does not imply that different systems have the same energy.

Essentially the question boils down to why you get less energy in the fringes with the double parallel slit than the horizontal + diagonal slits. With horizontal & diagonal slits, I get twice as much energy as expected in the fringes (compared to single), but with the double parallel slits, about 39% of the expected energy is missing.

 
  • #17
Devin-M said:
Essentially the question boils down to why you get less energy in the fringes with the double parallel slit than the horizontal + diagonal slits. With horizontal & diagonal slits, I get twice as much energy as expected in the fringes (compared to single), but with the double parallel slits, about 39% of the expected energy is missing.
Different systems have different energies. The question isn't why there is missing energy. There isn't any missing energy; there are just different energies for different systems.

The question is why do you personally expect the energy of those two different systems to be the same?
 
Last edited:
  • #18
It's unexpected, as it implies that if I do 5 minutes through both parallel slits at the same time, and then compare with 5 minutes through one parallel slit only followed by 5 minutes through the other parallel slit, I'll get ~62% more energy on the sensor in the latter case.
 
  • #19
It is not unexpected. They are different setups and the energy with different setups is not expected to be the same.

If it is unexpected to you personally then you personally need to explain your expectation in detail. There is some reasoning step that you are doing wrong in your personal expectation, but if you don't describe your personal reasoning behind your expectation then we cannot help you.
 
  • #20
Dale said:
you personally need to explain your expectation in detail.
This. Numbers will help.
 
  • #21
I was expecting that if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, then if I measure through both slits simultaneously for X time, the # of measured photons would be 2N. But based on the previous unexpected results, when I do that test I now expect 1.22N, but I can't explain the discrepancy.
 
  • #22
I'm sorry. I meant that the rationale for the numbers will help. Equations, assumptions, etc., that sort of thing.

I would not add photons to the mix.
 
  • #23
Vanadium 50 said:
I'm sorry. I meant that the rationale for the numbers will help. Equations, assumptions, etc., that sort of thing.

I would not add photons to the mix.

Yes, sorry. What I've been measuring and calling "Red Values" and "Red Value Sums" are more technically referred to as "ADUs," or analog-to-digital units, since I am measuring them from the raw sensor data.

@collinsmark can explain this better than me, but basically, since I used an H-Alpha narrowband filter (656 nm +/- 3 nm) in front of the sensor, the sensor is only receiving one wavelength of incident light, so the ADUs can be multiplied by a constant to get the photon count.

collinsmark said:
I'll use JWST my first example, and my own camera as a second example:

JWST:

First, you can find the specs here:
https://jwst-docs.stsci.edu/jwst-ne...detector-overview/nircam-detector-performance

Note the "Gain (e-/ADU)" and the "Quantum efficiency (QE)" in Table 1.

As you can see, JWST uses two different gain settings, one for short wavelengths and another for long.

The Quantum Efficiency (QE) is naturally dependent on the wavelength.

So let's say that we're presently using JWST's F405M filter, which is a long-wavelength filter with a passband at 4 microns.

Going back to Table 1, we see that
Gain: 1.82 [e-/ADU]
QE: 90% [e-/photon]

We can convert that to photons by dividing the two:

1.82 [e−/ADU] / 0.9 [e−/photon] ≈ 2.02 photons/ADU

So for when JWST is using its F405M filter, it takes on average about 2 photons to trigger an ADU increment.

My Deep Sky Camera: ZWO ASI6200MM-Pro:

You can find the specs here:
https://astronomy-imaging-camera.com/product/asi6200mm-pro-mono

So, for example, if I'm using my Hydrogen-alpha (Hα) filter, which has a wavelength of 656.3 nm, we can see that the QE is about 60%.

But if I set my camera gain to 0, corresponding to around 0.8 [e-/ADU], that gives

0.8 [e−/ADU] / 0.6 [e−/photon] ≈ 1.33 photons/ADU

So with this gain setting and when using the Hα filter, it takes on average about 1.33 photons to increment an ADU.

(I like to set my camera gain to something higher for better read noise, but I'll just use a gain setting of 0 here for the example.)

Had I instead set the gain setting to around 25, which corresponds to somewhere close to 0.6 [e-/ADU], that would correspond to roughly 1 photon/ADU on average.
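For reference, here is a minimal sketch of that conversion in Python (the gain and QE numbers are just the example values quoted above; the real values depend on the particular camera, gain setting and filter):

Code:
# Convert a summed ADU count into an approximate photon count,
# given the sensor gain [e-/ADU] and quantum efficiency [e-/photon].
def adu_to_photons(adu_sum, gain_e_per_adu, qe_e_per_photon):
    photons_per_adu = gain_e_per_adu / qe_e_per_photon
    return adu_sum * photons_per_adu

# Example values from the post above (ASI6200 at gain setting 0, H-alpha filter):
print(adu_to_photons(1000, gain_e_per_adu=0.8, qe_e_per_photon=0.6))  # ~1333 photons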

 
  • #24
Devin-M said:
Yes, sorry. What I've been measuring and calling "Red Values" and "Red Value Sums" are more technically referred to as "ADUs," or analog-to-digital units, since I am measuring them from the raw sensor data.

@collinsmark can explain this better than me, but basically, since I used an H-Alpha narrowband filter (656 nm +/- 3 nm) in front of the sensor, the sensor is only receiving one wavelength of incident light, so the ADUs can be multiplied by a constant to get the photon count.
Ok, this is a good description of what you are doing, but it tells us nothing about why you personally expect double the photons. We need you to explain your reasoning, not your setup. Your setup is great and it shows that your conclusion is wrong, but gives us no insight into the source of your confusion.

Please fill in the blank:

“I was expecting if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, that if I measure through both slits simultaneously for X time, that the # of measured photons would be 2N because _________”
 
Last edited:
  • #25
Dale said:
Please fill in the blank:

“I was expecting if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, that if I measure through both slits simultaneously for X time, that the # of measured photons would be 2N because _________”

...because:

"The amount of light captured by a lens is proportional to the area of the aperture, equal to:

$$\mathrm{Area} = \pi \left(\frac{D}{2}\right)^{2} = \pi \left(\frac{f}{2N}\right)^{2}$$

https://en.wikipedia.org/wiki/Aperture

...and the double slit has double the aperture.
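As a rough illustration with the numbers from the setup below (600 mm at f/9, so an unmasked aperture of D ≈ 67 mm):
$$\mathrm{Area} = \pi\left(\frac{f}{2N}\right)^{2} = \pi\left(\frac{600\ \mathrm{mm}}{2\times 9}\right)^{2} \approx 3.5\times 10^{3}\ \mathrm{mm}^{2},$$
and by the same ray-optics scaling, a mask with two identical slits passes twice the open area, and hence (I assumed) twice the light, of a single slit.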

I performed a new test, this time where I measured each slit individually, and then both at the same time:

Slit 1 ADUs: 7165
Slit 2 ADUs: 7076
Double Slit ADUs: 9339

Each test targeted the star Polaris with a 5-minute exposure at 600 mm, f/9, ISO 6400, with an H-Alpha narrowband filter (656 nm) in front of the sensor on a Nikon D800 DSLR, and the camera was mounted on an equatorial mount to compensate for the Earth's rotation.

 
Last edited:
  • #26
Devin-M said:
I performed a new test, this time where I measured each slit individually, and then both at the same time:

Slit 1 ADUs: 7165
Slit 2 ADUs: 7076
Double Slit ADUs: 9339

And how did you perform the background correction? Did you take a picture with all slits closed as the background and then subtract this data from your other pictures pixel by pixel, or did you subtract some flat background?
 
  • Like
Likes Motore and vanhees71
  • #27
The pictures are so grainy I have a hard time believing you can conclude anything meaningful from them.
We all know that the pattern for the single-slit and the double-slit at the same place cannot be the same, so I would expect a different light intensity there, but not necessarily double the intensity of a single-slit.
Also, why don't you start the measurement from the center (it seems to me the double-slit measurements have a much brighter center than the single-slit measurements), as that could account for the missing light?
 
  • #28
Devin-M said:
"The amount of light captured by a lens is proportional to the area of the aperture
OK, so here is the error. That rule is based on the concepts of ray optics. Ray optics in turn is based on the assumption that diffraction effects are negligible. That is not the case here. So you have to calculate it using different methods. Specifically you have to account for diffraction and interference. The approximations of ray optics don’t hold, so neither do the conclusions of ray optics.
 
  • Like
  • Informative
Likes vanhees71 and PeroK
  • #29
Cthugha said:
And how did you perform the background correction? Did you take a picture with all slits closed as the background and then subtract this data from your other pictures pixel by pixel, or did you subtract some flat background?
Devin-M said:
Slit 1 ADUs: 7165
Slit 2 ADUs: 7076
Double Slit ADUs: 9339
I've now (at your suggestion) looked at how much noise is in an equal area with no signal...

Noise ADUs in area w/ no signal: 2634
Slit 1 ADUs w/ noise subtracted: 4531
Slit 2 ADUs w/ noise subtracted: 4442
Double Slit ADUs w/ noise subtracted: 6705 (1.47x higher than slit 1)

 
  • #30
@Devin-M are you looking at the whole picture, or only a part? I'm not sure you can look only at the diffraction spikes and get a meaningful result. I'd suggest getting a good average background for each frame and then subtracting this from every pixel in the image before summing the ADUs together. Basically, measure the total light that hit the sensor, not just a part of it. Note that this might mean that you need to take more images and be very careful not to saturate any pixel.
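Here is a minimal sketch of that kind of whole-frame accounting (assuming the red-channel pixel values have already been extracted into NumPy arrays; the array names are placeholders):

Code:
import numpy as np

def background_corrected_sum(frame, background_region):
    """Subtract the mean background level (estimated from a signal-free
    region) from every pixel, then sum over the whole frame."""
    bg_level = background_region.mean()          # average ADU per pixel with no signal
    corrected = frame.astype(float) - bg_level   # pixel-by-pixel offset removal
    return corrected.sum()

# Hypothetical usage: red_single and red_double are 2-D red-channel arrays,
# dark_patch is a crop of the same frame containing no starlight.
# ratio = background_corrected_sum(red_double, dark_patch) / background_corrected_sum(red_single, dark_patch)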
 
  • Like
Likes Motore and collinsmark
  • #31
Devin-M said:
@collinsmark can explain better than me but basically, [...]

@Devin-M, sorry, I haven't been following this thread until now. But yes, I agree that ideally, if things are kept simple, I would expect the total ADU count of the double slit to be pretty close to twice that of the single slit, all else being the same, if you were to add up all the (red) pixel values across a single line in each pattern. That's of course after dark frame subtraction. It's also assuming that your Nikon D800 isn't trying to modify the results in-camera.

Here's a few tips to help make sure your camera is not inadvertently messing up the results:

  • If possible, store the images in RAW (NEF) format. When converting the Raw (NEF) file to some easily readable format, don't apply a white-balance, or if you have to, apply a neutral white balance (e.g., direct sunlight).
  • If your workflow won't work with storing the data in RAW (NEF) format, then store the files in TIFF format. But don't store data as JPEGs.
  • Do not use "Auto" white balance. Set it to something like "Direct sunlight," but the important thing is to make sure it's not set to auto.
  • The Nikon D800 has selectable "Image Enhancements." Make sure your "Set Picture Control" option is set to "Neutral." See page 163 of your Nikon D800 manual for details.
  • The Nikon D800 supports something called "Active D-Lighting." Make sure this is turned off. See page 174 of the manual for details.
  • The Nikon D800 supports something called "Auto ISO Sensitivity Control." Make sure this is turned off. See page 111 of the manual for details.
  • For that matter, try to make sure nothing is "auto," if you can think of anything else not mentioned here. Everything should be set to manual.

There are a lot of variables in what you're trying to do. A lot of stuff can go wrong, causing misleading results. So the above advice might serve as just a start.
 
Last edited:
  • #32
collinsmark said:
  • If possible, store the images in RAW (NEF) format. When converting the Raw (NEF) file to some easily readable format, don't apply a white-balance, or if you have to, apply a neutral white balance (e.g., direct sunlight).
  • If your workflow won't work with storing the data in RAW (NEF) format, then store the files in TIFF format. But don't store data as JPEGs.
  • Do not use "Auto" white balance. Set it to something like "Direct sunlight," but the important thing is to make sure it's not set to auto.
  • The Nikon D800 has selectable "Image Enhancements." Make sure your "Set Picture Control" option is set to "Neutral." See page 163 of your Nikon D800 manual for details.
  • The Nikon D800 supports something called "Active D-Lighting." Make sure this is turned off. See page 174 of the manual for details.
  • The Nikon D800 supports something called "Auto ISO Sensitivity Control." Make sure this is turned off. See page 111 of the manual for details.
  • For that matter, try to make sure nothing is "auto," if you can think of anything else not mentioned here. Everything should be set to manual.
I shot in RAW (NEF) format, white balance while shooting was set to sunlight, Picture Control was on Standard instead of Neutral (I'll retest), Active D-Lighting was off, Auto ISO was off, everything was manual, all in-camera noise reduction was turned off, and the RAWs were converted to 16-bit TIFFs with all sharpening/noise reduction off and without any other modifications.

PS: According to this, Picture Control doesn't affect the RAW files, only the JPEGs (or the raw file conversion if you use the Nikon raw conversion software, which I didn't do; I used Adobe Lightroom): https://www.dpreview.com/forums/thread/3600835
 
Last edited:
  • #33
Devin-M said:
I was expecting that if I measure the # of photons N through each slit separately for X amount of time, and they are both the same quantity, then if I measure through both slits simultaneously for X time, the # of measured photons would be 2N. But based on the previous unexpected results, when I do that test I now expect 1.22N, but I can't explain the discrepancy.
A classical probability argument would agree with you. Let's assume that with one slit open, the probability that a photon passes through the slit is ##p##. With two slits open the probability must be ##2p##. This probability translates to total intensity on the detector screen.

With that explanation, however, the double-slit experiment with one photon at a time (or one electron at a time) would yield no interference.

The QM explanation for double-slit interference calculates probabilities (hence intensities) as the square of intermediate probability amplitudes. In particular, the classical proposition regarding double the probability/number of photons fails in the case of the double-slit. The number of photons that are transmitted by a double-slit is not necessarily twice the number transmitted by each slit in isolation.
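Schematically, if ##\psi_1## and ##\psi_2## are the amplitudes associated with the two slits at a point on the sensor, the detected intensity goes as
$$|\psi_1+\psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^*\psi_2\right),$$
and the total count only comes out to exactly ##2N## if the interference (cross) term averages to zero over the region you actually integrate.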

That is, in a way, the essence of QM in a slightly unfamiliar context.

The aperture formula, I suggest, is applicable only where the apertures are large enough and far enough apart for QM effects to be negligible.
 
Last edited:
  • #34
@Devin-M what we are talking about here is the transmission coefficient for the double-slit relative to twice the transmission coefficient for the single slit. Your data might be quite instructive in this respect. You might try researching this online. E.g.

https://www.nature.com/articles/s41598-020-76512-5

The answer to your original question is that in the double-slit experiment almost as much light gets absorbed by the barrier as for a single slit!
 
Last edited:
  • #35
The non-parallel slits seem to have a higher transmission coefficient than the parallel slits in the fringes. With the non-parallel slits you get twice the area of fringes, and those fringes have the same intensity as a single slit; with the parallel slits you get the same area of fringes as a single slit, and those fringes have less than twice the transmission coefficient of a single individual slit.
 
Last edited:
  • #36
The usual explanation for the dark minima in the case of Fraunhofer diffraction is that the path length difference between the edges of the slits differs by half a wavelength or an odd multiple of a half wavelength path difference at the minima location, so the arriving waves are out of phase at the detection screen at the locations of these minima.

The transmission coefficient, on the other hand, implies that the energy which doesn't arrive at the detector (the 25% deficit per slit in the 2-slit case) is somehow blocked by the slit.

Is one of these interpretations more valid than the other? It’s hard to understand how opening a second slit would decrease the energy transmitted via the other slit.

When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity is maximal, and when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., then the two waves cancel, and the summed intensity is zero. This effect is known as interference.

https://en.m.wikipedia.org/wiki/Fraunhofer_diffraction
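For reference, the standard Fraunhofer result for two slits of width ##a## separated by ##d## (quoted here only as a sketch, with ##\beta = \pi a \sin\theta/\lambda##) is
$$I(\theta) \propto \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right)\frac{\sin^2\beta}{\beta^2},$$
with interference minima wherever the path difference between the two slits is an odd number of half wavelengths.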
 
Last edited:
  • #37
Devin-M said:
The usual explanation for the dark minima in the case of Fraunhofer diffraction is that the path length difference between the edges of the slits differs by half a wavelength or an odd multiple of a half wavelength path difference at the minima location, so the arriving waves are out of phase at the detection screen at the locations of these minima.

The transmission coefficient, on the other hand, implies that the energy which doesn't arrive at the detector (the deficit per slit in the 2-slit case) is somehow blocked by the slit.

Is one of these interpretations more valid than the other? It’s hard to understand how opening a second slit would decrease the energy transmitted via the other slit.
They are two different aspects of the experiment. The diffraction pattern determines the shape of the pattern, but the overall intensity also varies with the slit width and the slit separation. If you do the double-slit experiment with very narrow slits, then very little light gets through.

The additional factor affecting the intensity is how the light interacts with a double slit as opposed to a single slit. Classically, this would be simple: the double slit would allow twice the radiation through. But in QM it's not so simple, and it appears that much less than twice the light gets through.
 
  • #38
So in QM, the explanation for the minima isn’t that 2 waves arrive out of phase and cancel out, because of the half wavelength path length difference between the slit edges and minima location?
 
  • #39
Devin-M said:
So in QM, the explanation for the minima isn’t that 2 waves arrive out of phase and cancel out, because of the half wavelength path length difference between the slit edges and minima location?
The QM explanation involves the components of the wavefunction arriving out of phase. You should try Feynman's QED book:

https://en.wikipedia.org/wiki/QED:_The_Strange_Theory_of_Light_and_Matter
 
  • #40
Devin-M said:
The non-parallel slits seem to have a higher transmission coefficient than the parallel slits in the fringes. With the non-parallel slits you get twice the area of fringes, and those fringes have the same intensity as a single slit; with the parallel slits you get the same area of fringes as a single slit, and those fringes have less than twice the transmission coefficient of a single individual slit.
Simple Question:
Are you collecting the full 4π steradians of light scattered by the slits?
If not, why do you expect these conservation arguments to hold for what you capture?
Simple answer:
The energy spreads out in various directions and your camera catches only a forward part of it. This can all be calculated and there is no mystery here.
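Here is a minimal numerical sketch of that point (idealized Fraunhofer patterns, made-up slit dimensions, and a finite detection window; it is only meant to show that the double-to-single ratio over a partial window need not be 2):

Code:
import numpy as np

# Idealized Fraunhofer intensities (monochromatic, far field).
# The slit width and separation are made-up illustrative numbers.
lam = 656e-9                      # wavelength [m], H-alpha
a, d = 20e-6, 100e-6              # assumed slit width and separation [m]

theta = np.linspace(-0.1, 0.1, 20001)             # diffraction angle [rad]
beta = np.pi * a * np.sin(theta) / lam
envelope = np.sinc(beta / np.pi) ** 2             # np.sinc(x) = sin(pi x)/(pi x)
single = envelope                                 # one slit open
double = 4 * np.cos(np.pi * d * np.sin(theta) / lam) ** 2 * envelope  # both slits open

full = double.sum() / single.sum()                # ratio over (nearly) the whole pattern
window = np.abs(theta) < 0.005                    # a finite central "sensor"
partial = double[window].sum() / single[window].sum()
print(f"full pattern: {full:.2f}   central window only: {partial:.2f}")
# The full-pattern ratio is ~2; the ratio over a partial window generally is not.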
 
  • #41
I'm slowly working on reproducing the experiment, but my setup isn't quite ready yet. (As I write this, my secondary [i.e., old] astronomical camera is busy gathering temperature-controlled dark frames for the experiment.) I have a new mount, new control system, new hardware, etc., that should be close to ideal for this experiment (in a quick-and-dirty, on-the-cheap sort of way), but it's not all set up and calibrated yet.

I would be really surprised though if the energy measured by the sensor, after dark frame subtraction, by having both slits open is anything other than roughly twice the energy of a single slit (all else being equal [e.g., equal slit sizes, equal exposure times, etc.]). And by "roughly" I mean within my own experimental error.

Let's make sure to keep this thread open for replies; and give me a week or so of time, and some good weather, and I plan to have some results to post then.
 
Last edited:
  • Like
Likes Motore and PeroK
  • #42
Devin-M said:
It’s hard to understand how opening a second slit would decrease the energy transmitted via the other slit.
It doesn’t. When you open the second slit more energy goes through than when the slit was closed. However, unless you actually measure which slit it goes through there is no sense in which it went through one. So you cannot say that the energy through the first slit has been reduced.

You are assuming that the energy that goes through each slit is definite and independent. Neither of those is true, and the second should be obviously untrue. Since the double slit experiment is usually portrayed as showing that one photon goes through both slits it is clear that the energy going through the slits is not independent.
 
  • #43
Vanadium 50 said:
I'm sorry. I meant that the rationale for the numbers will help. Equations, assumptions, etc., that sort of thing.

I would not add photons to the mix.

I can only second that. This setup is so macroscopic that QM will not add anything to understanding it. And yes, it is expected that for such a setup the total intensity behind the double slit will be the sum of the intensities of the single slits. This is tested frequently in student labs all over the world. I would strongly suggest thoroughly characterizing the detector before looking for esoteric explanations. Numbers indeed will help here.

Devin-M said:
Yes, unless a pixel is saturated (i.e. reaches the maximum value), the red values should be directly proportional to intensity.
The first thing I tell my students is to never trust your equipment and to never assume anything you are not sure of. We already found some of the deviation between your expectation and your data by having a closer look at how you treat the background noise. I would now go on and thoroughly characterize your detector before jumping to conclusions.

Do you have any means to linearly vary the incoming intensity in a controlled manner, e.g., by using neutral density filters, over a really wide range? If so, use them and check whether the curve of the detected intensity versus the incoming intensity is really linear. To be honest, even with most image corrections of modern cameras switched off, I would be heavily surprised if the response you get is really a completely linear response without any offset across all light levels. Also, averaging over several images will help to get more reliable data and reduce the graininess.

Edit: The manual already mentions a lot of features, such as software and hardware binning. Do you use some kind of binning? Is there any other kind of processing taking place (center of gravity detection to avoid blooming at low light levels or something like that)? What is the gain setting you actually use?
Other sources seem to imply that the camera has some kind of non-linear behavior at low light levels:
http://www.arciereceleste.it/articoli/translations/115-test-asi6200-en
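If it helps, here is a minimal sketch of such a linearity check (the transmission and ADU values are placeholders; the point is just to fit the measured signal against the known relative intensity and look at the offset and residuals):

Code:
import numpy as np

# Relative intensities set by neutral density filters (placeholder values),
# and the mean background-subtracted ADU measured at each level (placeholder values).
rel_intensity = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
mean_adu      = np.array([4000.0, 2050.0, 1010.0, 520.0, 270.0])

# Fit mean_adu = slope * rel_intensity + offset; a linear detector should give
# a small offset and residuals consistent with the noise.
slope, offset = np.polyfit(rel_intensity, mean_adu, 1)
residuals = mean_adu - (slope * rel_intensity + offset)
print(f"slope = {slope:.1f} ADU per unit intensity, offset = {offset:.1f} ADU")
print("residuals:", np.round(residuals, 1))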
 
Last edited:
  • Like
Likes Motore and PeterDonis
  • #44
I downloaded a piece of software called RawDigger which lets me inspect the RAW pixel values without needing to first convert the file from RAW (NEF) to TIF. The camera is a Nikon D800 by the way.

I sampled the centermost 23x51-pixel portion of each diffraction spike (ignoring the green and blue pixels) and looked at the average red pixel value.

Single Slit Red Average: 2143
Double Slit Red Average: 3537
Same Area Noise Average: 45
Single Slit Red Average Noise Subtracted: 2098
Double Slit Red Average Noise Subtracted: 3492 (1.66x higher)

I would like to do another test with a lower gain setting, as a few of the pixels in the center of the double slit pattern saturated (I was at ISO 6400 and want to retest at ISO 1600); however, I don't expect a break in the clouds for the next 7 days.

 
Last edited:
  • #45
Devin-M said:
I would like to do another test with a lower gain setting, as a few of the pixels in the center of the double slit pattern saturated (I was at ISO 6400 and want to retest at ISO 1600); however, I don't expect a break in the clouds for the next 7 days.
You can likely do the test inside. A flashlight shining through a tiny hole in a piece of cardboard or thick sheet of paper would probably serve just fine as a point-like source. Or just do away with the paper or cardboard if you don't think you need a point source, and point it at the wall or something.
 
  • #46
This paper tests the linearity of the sensor response to varying degrees of illumination; the results look quite linear...


Source: https://www.researchgate.net/publication/23699681_Camera_calibration_for_natural_image_studies_and_vision_research
 
  • #47
This is for exposure times of 2 milliseconds and high luminances on the order of more than 1000 cd/m^2. I am not sure that these conditions are really that comparable to the conditions you use. At high light levels it is easy to have a linear response. At low light levels, where all kinds of noise sources become prominent, it is a challenge to get something that resembles a linear response. I have been fighting with photodetectors at low light levels for long enough now to not trust them.

It might help to repeat the measurement without saturation for the double slit and compare it to the sum of the patterns of the two individual slits. Possibly an artificial light source will already be sufficient, as @Drakkith suggested. To me, it seems striking that for the green pixels, the dark-count-corrected ratio of the double slit intensity per pixel to the single slit intensity per pixel is on the order of 2.1 (454/216), which is quite well within the range of what is expected.
 
  • #48
Yes, the green channel is very close to double; however, I don't trust that the green (495–570 nm) detections are mostly monochromatic light, which might be necessary for destructive interference, as that light appears to be leaking through the 656 nm H-Alpha narrowband filter in front of the sensor.
 
Last edited:
  • #49
Devin-M said:
Yes, the green channel is very close to double; however, I don't trust that the green (495–570 nm) detections are mostly monochromatic light, which might be necessary for destructive interference, as that light appears to be leaking through the 656 nm H-Alpha narrowband filter in front of the sensor.
I think it's more likely that the green pixels are merely picking up red light. I'd bet that the Bayer filter on the sensor is letting some of that Hα light through the green filters. But I'd have to check the data sheet to be sure.
 
  • #50
The blue channel double slit is also showing 2.05x the single slit after noise subtraction. But the blue values could also be non-monochromatic leakage through the narrowband filter rather than 656 nm Hα light activating the blue pixels… It's very possible the red detections are much closer to monochromatic overall than the green and blue values.
 