B Does Destructive Interference Affect Starlight in a Double Slit Experiment?

  • #51
Devin-M said:
This paper tests the linearity of the sensor response to varying degrees of illumination, the results look quite linear...
I did decades of work doing spectrophotometry/colorimetry at low light levels for precision medical instruments. This has involved fluorescence, reflectance, and transmittance using Photomultipliers, Avalanche photodiodes, photodiodes, and (sometimes photon counting) cameras.
I guarantee the photocurrent from silicon is rigorously linear over a tremendous range. In my experience a color camera is fraught with problems, most of which have been mentioned. Incidentally I have always liked ImageJ software (free from NIH) for analysis.
I am also still unsure as to your exact setup and a concise reiteration of it and your particular expectations would be useful.
I feel the explanation here is very likely that your expectations for the diffraction signal are incorrect, but I really don't understand your setup (a diagram would be good). Also, details of the CCD color filtration can be an issue. This should not be too difficult with facts in hand.
 
  • #52
I’m investigating whether situations with high-contrast destructive interference give twice the total photon detections with the double slit compared to an individual slit, or fewer detections due to the destructive interference.

Here’s my setup: a Nikon D800 on an equatorial mount with an H-alpha narrowband filter in front of the sensor, a 300mm f/4.5 lens, and a TC-301 teleconverter for 600mm f/9, taking 5-minute exposures of Polaris at the 6400 ISO/gain setting:
0FACEE94-9400-4818-BB5E-A0384795229B.jpeg


A6A7A5A0-9234-4613-B036-1E02F14011A7.jpeg

60CF74FC-56BC-4721-9231-8B6467ECDA2A.jpeg
 
  • #53
Devin-M said:
But the blue values could also be non-monochromatic leakage through the narrowband filter rather than 656nm HA light activating the blue pixels…
I'm not so sure. Typical HA filters are nearly 100% effective at blocking wavelengths outside of a small range around the central wavelength. However, I believe the green and blue parts of the bayer filter on the Nikon D800 transmit about 5-10% of light at 656 nm.
 
  • #54
Devin-M said:
Here’s my setup
Thanks. Sorry if I am behind the curve here but what is the lens cap with slits?
 
  • #55
It’s a Bahtinov mask. I’ve blocked off most of the slits with masking tape, then took exposures through each slit separately and then through both at the same time, in hopes of showing some destructive interference, since it’s a monochromatic point source.
b509eed2-b98d-4ad1-bc74-2561dc901a09-jpeg.jpg


5-jpg.jpg
 
  • #56
As promised (from post #41), here are my efforts to reproduce the experiment myself.

My Reproduction, Part I: Equipment, Data Acquisition, and Pre-processing

Equipment:

  • Orion Bahtinov mask
  • Blue painter's tape (essentially masking tape)
  • Explore Scientific ED80-FCD100 telescope
  • Orion Field Flattener for Short Refractors
  • ZWO Electronic Filter Wheel (EFW)
  • An old Astronomik Hα filter (I think it's a ~7nm bandwidth)
  • ZWO ASI1600MM-Pro main camera (monochrome, cooled)
  • Electronic focusing system (Optec and Starlight Instruments mishmash)
  • Orion 50mm Guidescope with Helical focuser
  • ZWO ASI290MM-Mini camera for guidescope
  • Pegasus Astro UPBv2 for power distribution, USB hub, and dew heater control
  • Various dew straps (the good ones are from AstroZap)
  • Various mechanical dovetails, clamps, rings as such from PrimaLuce and ADM.
  • Minisforums (not to be confused with Physics Forums) mini PC
  • Cheapy, Nexigo webcam, just to keep a wide eye on things.
  • Sky-Watcher EQ6-R Pro mount

Instrumentation_SmallForPF.jpg

Figure 1. Equipment setup

BahtinovMaskCoveredForDoubleSlit_SmallForPF.jpg

Figure 2. Bahtinov mask. Although difficult to see, the entire inside of the Bahtinov mask is blocked out with blue painter's tape except for two of the slits (see vertical slits at the top).

LeftCovered_OnPatio_SmallForPF.jpg

Figure 3. Method for covering the left or right slit. It was a simple matter of blocking out a slit by carefully placing a doubled-up strip of blue painter's tape over the top of the Bahtinov mask. Or, in the case of both slits open, I just removed the strip and placed it on the table.

Acquisition Details:

  • Target: Aldebaran, because it's bright and was near the zenith at my location (less atmosphere to worry about)
  • Skies were clear
  • Seeing was average
  • ASI1600MM-Pro camera cooled to -10 deg C
  • Camera gain: 139 (offset 50) placing the camera right around unity gain (i.e., ~1 e-/ADU)
  • N.I.N.A. software was used for acquisition, along with PHD2 for autoguiding
  • Autofocus routine was performed immediately before inserting the Bahtinov mask and acquiring data
  • Left slit was covered and 40, 1.5 sec subframes were taken
  • Both slits were left open and 40, 1.5 sec subframes were taken
  • Right slit was covered and 40, 1.5 sec subframes were taken
  • Bahtinov mask was removed, and lens cover was placed over objective of ES ED80-FCD100 telescope, and 100, 1.5 sec "dark" frames were taken (camera still at -10 deg C)
  • Lens cover removed again, and using a flat panel, 30 "flat" frames were taken. With the lens cover back on again, another 30 "darkflat" frames were taken. However, I didn't end up using flat field calibration on this project (see below)

Preprocessing:

  • The 100 dark frames were stacked using PixInsight, creating a master dark frame.
  • In all cases (for this particular project), stacking was done using averaging (arithmetic mean), no weights (all weights = 1), and no normalization. However, Winsorized Sigma Clipping was used in the stacking to eliminate statistical outliers (e.g., bad pixels caused by hot pixels, cosmic rays, etc.)

Let me stop here and talk about flat frames for a moment. In the end, I decided not to use flat frames in the calibration.

Case against flat frame calibration: Flat frame calibration is used to compensate for any anomalies in the optics. However, in this experiment, the optics -- anomalies included -- are what we're trying to characterize in the first place. So flat frame calibration doesn't make sense in this experiment.
Case for flat frame calibration: Optics aren't the only thing flat frames are good for. They can also compensate for any dust motes that may exist on my Hα filter or sensor.

Well, I didn't notice any particularly significant dust mote signatures in the data, so I chose not to perform flat frame calibration (there's really no point). I have the flat frame data if I ever want to go back to it, but for now, flat frame calibration is skipped. Okay, moving on...
  • PixInsight was used to perform "dark" frame calibration on the "light" frames. This is done by subtracting the master dark frame from each of the "light" frames (i.e., the subframes containing the diffraction/interference pattern) on a pixel-by-pixel basis. This basically calibrates out the arithmetic mean of the in-camera noise, such as thermal noise and read noise.
  • The 40 calibrated frames in each scenario (left slit covered, both slits open, right slit covered) were stacked, using PixInsight.
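As a rough illustration of what Winsorized sigma clipping does during stacking, here is a minimal numpy sketch. To be clear, this is not PixInsight's implementation: the MAD-based scale estimate, the 3-sigma threshold, and the iteration count are all assumptions made for the example.

```python
import numpy as np

def winsorized_clip_stack(frames, sigma=3.0, iters=2):
    """Sketch of Winsorized sigma-clip stacking.

    Outliers along the stack axis are clamped (Winsorized) to
    median +/- sigma * robust_scale before averaging. A MAD-based
    scale estimate is used here for robustness; real tools differ
    in their details.
    """
    data = np.asarray(frames, dtype=np.float64)
    for _ in range(iters):
        center = np.median(data, axis=0)
        mad = np.median(np.abs(data - center), axis=0)
        scale = 1.4826 * mad            # MAD -> sigma for Gaussian data
        lo = center - sigma * scale
        hi = center + sigma * scale
        data = np.clip(data, lo, hi)    # clamp outliers, don't discard them
    return data.mean(axis=0)

# Example: one simulated cosmic-ray hit in a stack of 40 frames.
rng = np.random.default_rng(0)
frames = 100.0 + rng.normal(0.0, 2.0, size=(40, 8, 8))
frames[5, 3, 3] = 60000.0               # hot pixel / cosmic ray
master = winsorized_clip_stack(frames)
```

A plain average would be pulled far off by the hit; the clipped stack stays near the true level.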
LC_Uncropped_SmallForPF.jpg

Figure 4. Example data after dark frame calibration and stacking. Shown here is the image with the left slit covered. Obviously some cropping needs to be done before continuing. (The above image was resized and "stretched" for display purposes here, but the data used in the actual calculations/evaluations did not use resized or stretched images.)

Figs. 5 and 6 show the data after appropriate cropping. In all cases, the crop was 519 pixels wide by 10 pixels high.

Strips_Raw.png

Figure 5. Data after cropping. From top to bottom: Left slit covered, Right slit covered, Both slits open.

Strips_Stretched.png

Figure 6. Same data as Fig. 5, but with a "stretch" so it's easier to see the details. (Note: Stretching was done for display purposes only; forthcoming calculations were done on unstretched data.) From top to bottom: Left slit covered, Right slit covered, Both slits open.

To be continued (spoiler: results are pretty much exactly what you should expect) ...
 
  • #57
... continued from previous post.

My Reproduction, Part II: Results and Conclusions

Results and Conclusions in a nutshell:

Now that we have nice crops, all we need to do is add up or average the pixel values in each scenario. PixInsight has an easy way to do this using the Statistics process, which gives you the arithmetic mean of the pixel values in a given image. So that is what I used.

Arithmetic mean pixel value, Left Slit Covered: 190.468
Arithmetic mean pixel value, Right Slit Covered: 189.908
Arithmetic mean pixel value, Both Slits Open: 371.243

ResultsBarChart_SmallForPF.png

Figure 7: Results

According to ideal theory, the average pixel value with both slits open should be quite close to twice the average value of a single slit. Or if the slits are of slightly different size, the value with both slits open should be about the same as the single slit values added together.

Sum of single slit values:
190.468 + 189.908 = 380.376

So, my results differ from theory by about

\frac{380.376 - 371.243}{380.376} \approx 2.4 \%

I'd say 2.4% ain't bad, and is within my own margin of experimental error. I expect I could have even done better with more careful cropping.

So I'm going to be so bold as to claim that my results pretty much agree with theory (more-or-less).

Additional analysis:

I also wrote a C# program that sums all the pixel values in each column, to be used for plotting the 1-dimensional spatial results. Here are the resulting plots:
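For anyone who wants to reproduce that step without C#, the column summing is only a couple of lines in, e.g., numpy (a sketch, not the actual program used here):

```python
import numpy as np

def column_profile(img):
    """Collapse a 2-D crop (height x width) into a 1-D spatial profile
    by summing the pixel values in each column."""
    return np.asarray(img, dtype=np.float64).sum(axis=0)

# Example on a fake 10-row x 5-column crop:
crop = np.tile([1.0, 2.0, 4.0, 2.0, 1.0], (10, 1))
profile = column_profile(crop)   # -> [10., 20., 40., 20., 10.]
```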

Double_PlotWithStrip_SmallForPF.png

Figure 8: Both slits open plot.

LC_PlotWithStrip_SmallForPF.png

Figure 9: Left slit covered plot.

RC_PlotWithStrip_SmallForPF.png

Figure 10: Right slit covered plot.

The shapes of the diffraction/interference patterns also agree with theory.

Notice that the central peak of the both-slits-open plot is, interestingly, 4 times as high as either single-slit plot. Yes, this agrees with theory, and it can be explained using either classical or quantum theory.

In the classical approach, the intensity of an electromagnetic wave is proportional to the square of the amplitude of the electric field. So if you have two waves constructively interfering (doubling the electric field amplitude), the intensity increases by 2^2 = 4.

As others have pointed out, quantum theory is not necessary to explain the shapes, but you could use it if you really wanted to. If you did, you'd find the probability density of a photon landing exactly on the central peak is approximately twice as large for the double-slit configuration as for the single-slit configuration. That's one factor of 2, from the probability. The other factor of 2 comes from the fact that two slits are open in the double-slit scenario, so twice as many photons arrive at the detector compared to one slit open. So 2 \times 2 = 4.

In any case, even though the central peak is 4 times as large in the both slits open scenario, the area under the curve is only twice as large, in part due to more frequent regions of destructive interference.
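Both factors can be checked numerically from the standard Fraunhofer formulas (a quick sketch; the slit width, separation, and angular range below are arbitrary illustrative values, not measurements of the actual mask):

```python
import numpy as np

# Fraunhofer far-field intensities in the small-angle approximation.
a = 0.5e-3       # slit width (m) -- illustrative value, not the actual mask
d = 2.0e-3       # slit separation (m), with d > a
lam = 656e-9     # H-alpha wavelength (m)

theta = np.linspace(-5e-3, 5e-3, 200_001)   # diffraction angle (rad)

# np.sinc(x) = sin(pi x)/(pi x), so the envelope argument is a*theta/lam.
envelope = np.sinc(a * theta / lam) ** 2
single = envelope                                               # one slit open
double = 4.0 * np.cos(np.pi * d * theta / lam) ** 2 * envelope  # both open

peak_ratio = double[theta.size // 2] / single[theta.size // 2]  # central peak
area_ratio = double.sum() / single.sum()   # total power (uniform grid cancels)
```

The central peak comes out a factor of 4 higher, while the total area under the curve is only a factor of ~2 higher, matching the measurements above.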

And that's about all I have on this.
 
  • #58
I do have one question about your data. In the double slit case the data is dominated by the central peak and the 1st order diffraction spike. Suppose you sample only the central peak and 1st order spikes and compare to the same area single slit region. Is it double? So far so good. Now ignore this region and sample the region that only contains the much fainter 2nd, 3rd, 4th etc order spikes. Is it still double?
 
  • #59
Devin-M said:
I do have one question about your data. In the double slit case the data is dominated by the central peak and the 1st order diffraction spike. Suppose you sample only the central peak and 1st order spikes and compare to the same area single slit region. Is it double? So far so good. Now ignore this region and sample the region that only contains the much fainter 2nd, 3rd, 4th etc order spikes. Is it still double?

It's not expected to be double unless the sample area of the detector is sufficiently large. Ideally, you'd want the detector size to be large enough such that the diffraction/interference pattern is vanishingly small at the detector's edges.

And just to be clear, it's the total area under the curve that is expected to be double. But that doesn't necessarily apply to the peak values on the curve.

For example, the peak value of the central lobe in the case of both slits open isn't just double that of a single slit, it's approximately 4 times as large. (That applies to this particular experimental setup.)

As far as other lobes go, that depends. It depends on the width of the slits and also on the separation of the slits. Sometimes a double slit destructive interference location might coincide with where a peak would be in a single slit scenario. Or maybe not. It depends on the slit widths and slit separations.

Also, if light comes in at an angle (not perpendicular to the Bahtinov mask) or if there is a refractive element placed on one of the slits that differs from the other slit (causing an effective phase difference between the slits), that can affect things too, and effectively shift the central peak to a different location.
 
  • #60
I was just wondering if your measurements agreed with my original measurements that comparisons of the pattern outside the central area show far less than double the average intensity with the second slit.

Devin-M said:
...because:

"The amount of light captured by a lens is proportional to the area of the aperture, equal to:

\mathrm {Area} =\pi \left({D \over 2}\right)^{2}=\pi \left({f \over 2N}\right)^{2}

https://en.wikipedia.org/wiki/Aperture

...and the double slit has double the aperture.

I performed a new test, this time where I measured each slit individually, and then both at the same time:

Slit 1 ADUs: 7165
Slit 2 ADUs: 7076
Double Slit ADUs: 9339

Each test was targeting the star Polaris with a 5 minute exposure, 600mm, f/9, 6400iso, with a H-Alpha narrowband filter (656nm) in front of the sensor on a Nikon D800 DSLR, and the camera was mounted on an equatorial mount to compensate for the Earth's rotation.

View attachment 318927
View attachment 318935

View attachment 318928
View attachment 318929
View attachment 318932
View attachment 318933
View attachment 318934

Devin-M said:
I've now (at your suggestion) looked at how much noise is in equal area with no signal...

Noise ADUs in area w/ no signal: 2634
Slit 1 ADUs w/ noise subtracted: 4531
Slit 2 ADUs w/ noise subtracted: 4442
Double Slit ADUs w/ noise subtracted: 6705 (1.47x higher than slit 1)

View attachment 318972
Noise Sample:
View attachment 318973
 
  • #61
I’m talking about this region:

slits-jpg.jpg
 
  • #62
collinsmark said:
According to ideal theory, the average pixel value with both slits open should be quite close to twice the average value of a single slit.
OK, that is pretty solid. I guess my understanding was wrong. I didn’t expect it to be independent.
 
  • #63
With the double slit, if you think of the edges of the slits as separate light sources from the star (i.e., where the light reflected off the edges of the slits), then in the central peak region you have 5 sources (an odd number): source 1) the light from the star that didn’t reflect and went classically straight through the slits to the central peak, and sources 2)–5) the light that reflected off the 2 edges of each of the 2 slits…

…but farther away from the central peak region the number of light sources switches from odd to even… you don’t have any light that went classically directly from the star and you only have the 4 edges as sources.

I wonder if there is less than double the number of photons detected in this region if it has something to do with having an even number of sources rather than an odd number.
 
  • #64
Devin-M said:
…but farther away from the central peak region the number of light sources switches from odd to even… you don’t have any light that went classically directly from the star and you only have the 4 edges as sources.
I fear you still do not understand. This is inherently a problem of global interference of waves. The mathematics is essentially the same for sound waves, gravity water waves, EM waves, or "probability waves". Bouncing balls will not provide the correct answer.
Also there is a ~simple "exact" solution available for two finite slits. For instance https://web.mit.edu/8.02t/www/802TEAL3D/visualizations/coursenotes/modules/guide14.pdf
All the answers are explicitly there. Plot the function and play with it.
 
  • #65
Devin-M said:
I was just wondering if your measurements agreed with my original measurements that comparisons of the pattern outside the central area show far less than double the average intensity with the second slit.
I still don't understand why you expect the regions outside of the central peak to have those intensities. You have to look at the whole diffraction pattern; only then will you have double the intensity for two slits vs. a single slit, because you will see all the light that goes through. But why you are expecting there to be double the intensity in the same specific regions of different diffraction patterns is beyond me, which I already said in post #27.
 
  • #66
If you have double the total mean intensity with 2 slits overall, and double the total mean intensity with 2 slits in the comparable area encompassing the 1st single-slit maxima, then you should expect to find double the total mean intensity in the areas encompassing the outer maxima as well. But that isn’t what my initial measurements showed.
 
  • #67
Devin-M said:
If you have double the total mean intensity with 2 slits overall, and double the total mean intensity with 2 slits in the comparable area encompassing the 1st single-slit maxima, then you should expect to find double the total mean intensity in the areas encompassing the outer maxima as well. But that isn’t what my initial measurements showed.
How well did you calibrate your frames before making the counts? Did you dark subtract? Did you remove the bias counts typically added to all images by the sensor? Does the sensor or camera automatically add or subtract values from each pixel during readout and processing?
 
  • #68
Drakkith said:
How well did you calibrate your frames before making the counts? Did you dark subtract? Did you remove the bias counts typically added to all images by the sensor? Does the sensor or camera automatically add or subtract values from each pixel during readout and processing?
Noise 39.7 avg
Single 115.2 avg - 39.7 noise avg = 75.5 avg
Double 173.9 avg - 39.7 noise avg = 134.2 avg (1.77x higher)

7.jpg

Readings straight from the RAW files:
8.jpg

9.jpg

10.jpg
 
  • #69
Devin-M said:
Noise 39.7 avg
Single 115.2 avg - 39.7 noise avg = 75.5 avg
Double 173.9 avg - 39.7 noise avg = 134.2 avg (1.77x higher)
Sorry, I don't know what this is supposed to show or how it answers my questions.

Devin-M said:
Readings straight from the RAW files:
But what did you do to calibrate those raw files? Even dedicated astrophotography cameras, which are built specifically for low-light, long-exposure applications have to be calibrated. Just see post 56 as an example.
 
  • #70
They were single 5-minute exposures at ISO 6400 (double slit & single slit). Camera: Nikon D800. Lens: Nikon 300mm f/4.5 + TC-301 2x teleconverter, for 600mm f/9. The original raw (NEF) files were inspected with RawDigger (with no modifications); screenshots from that program are shown above. On the top right of those screenshots you can see an average pixel value for the selected areas. R AVG shows the values I discussed in the previous post. I also took a representative sample of the average noise and subtracted it from the raw values to give the final values. I selected the 1st and 2nd order maxima from the single slit and the same area for the double slit. The double slit R avg in the selected area was 1.77x higher than the single slit.

RAW Files (Images 9205 & 9213): https://u.pcloud.link/publink/show?code=kZcjE2VZR1eBfOCgGAYfuGOv9EMgPH3KIR07
 
  • #71
So you didnt do any calibration? No dark or bias frame subtraction?

Devin-M said:
I also took a representative sample of the average noise and subtracted it from the raw values to give the final values.
Why?
 
  • #72
I did… I sampled the dark noise and subtracted it…

Avg noise per pixel was 39.7… that was then subtracted from the signal

Devin-M said:
Single 115.2 avg - 39.7 noise avg = 75.5 avg
Double 173.9 avg - 39.7 noise avg = 134.2 avg (1.77x higher)
8-jpg.jpg

9-jpg.jpg

10-jpg.jpg


7-jpg.jpg
 
  • #73
Devin-M said:
I did… I sampled the dark noise and subtracted it…
I've never heard of this calibration method. Why would you subtract the average background noise? Unless I'm mistaken, that's not going to get rid of the counts from the dark current nor the bias counts. Just so we're on the same page, noise is the random variation of counts or pixel values between pixels in an image or between the same pixel in multiple images.

The point of calibration is to remove counts that aren't due to capturing your target's light. I call these counts 'unwanted signal'. The major sources of unwanted signal are dark current, background light that falls onto the same area of the sensor as your target, and stray light from internal reflections or other optical issues. In addition, the camera itself typically adds some set value to each pixel, called an offset value or bias value.

The bias value itself isn't noise, as it's just a set value added to each pixel and is easily removed. But all sources of unwanted signal add noise because they are real sources and obey the same statistical nature as your target. The funny thing with noise is that it simply cannot be removed. What can be removed is the average dark current and the bias value (the other sources of unwanted signal can also be removed, but it is much more difficult and well beyond the scope of this post).

To remove dark current (not dark noise, which cannot be removed) one must take dark frames, combine them together to get a master dark frame, and then subtract this frame from each target exposure. This will get rid of both the bias value and the average dark current in each pixel. These should be the worst sources of unwanted counts in your images and their removal is just about all you need to do if you're just worried about counting pixel values. You could use flat frames, but as long as we're dealing with only a small part of the sensor and there aren't large dust particles on the filters and sensor window that are casting shadows on this area we don't really need to worry about flats.
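A minimal numpy sketch of that calibration chain (the bias level, dark current rate, exposure time, and noise figures below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
bias = 500.0          # offset the camera adds to every pixel (ADU)
dark_rate = 2.0       # mean dark current (ADU/s), assumed uniform here
t_exp = 1.5           # exposure time (s)

# Simulate 100 dark frames: bias + dark current + read noise.
darks = bias + dark_rate * t_exp + rng.normal(0.0, 3.0, size=(100, 16, 16))
master_dark = darks.mean(axis=0)   # averaging suppresses the random noise...

signal = 250.0                     # true target signal (ADU)
light = signal + bias + dark_rate * t_exp + rng.normal(0.0, 3.0, (16, 16))
calibrated = light - master_dark   # ...subtraction removes bias + mean dark
```

Note that `calibrated` still contains the read noise of the light frame; only the bias value and the average dark current have been removed, which is the point above.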

Since measuring the average background noise and then subtracting this value from each pixel doesn't remove either dark current or the bias offset, I'm confused why you would do so.
 
  • #74
That would make sense if I wanted an image as the final output, but all I want is the average ADU/pixel in the area of interest (with average noise ADU/pixel ignored). So I took the average noise per pixel and subtracted that from the average ADU/pixel in the area of interest.
 
  • #75
You are searching for a 1% answer.
Can you not turn all the autoscaling off? You don't need more than 8 bits of useful resolution to answer your query.
Why are pixels saturating? In particular, why are blue and red ones saturating but apparently not green? You need to eliminate them from your calculation, particularly the way you are doing your averages. You really should do the manipulations on a pixel-by-pixel basis (can you not do that with your software?).
Instead of rerunning the same experiment endlessly, take a close look at your data. How about a histogram of the dark counts across the pixels of one frame? You need to develop the ability to change your point of view.
How about a plot of the theoretically predicted graphs for your wavelength and widths? There are many ways to proceed, and you seem to have plenty of data, interest, and perseverance. Ask and answer a slightly tangential question and you will learn something.
 
  • #76
There was no scaling, and there was no saturation. If you look at each of the samples, there are two tables of data at the top: the left one is for the full image, and the right one is for the selected area of interest. The only scaling occurred for display purposes on this forum, not in the sample data. You want to be looking at the right table for each of the samples, specifically at the R average values. If the pixels were saturating, the max pixel value would read 16,383, but in the right tables for the selected area of interest, none of the pixels reach that value.
 
  • #77
It’s also interesting to note that if you look at the tables on the right-hand side for the single versus double slit, in the selected area of interest, the max pixel value is almost exactly double, not quadruple: 1293 vs 648 (1.99x higher). @collinsmark
 
  • #78
Devin-M said:
It’s also interesting to note if you look on the tables on the right hand side for the single versus double slit, in the selected area of interest, the max pixel value is almost exactly double not quadruple (1.99x higher). (1293 vs 648) @collinsmark

In terms of the peaks of any of the side-lobes (anything other than the central peak), it depends how the troughs (from destructive interference) from a single slit pattern line up with the additional troughs from having two slits.

And whether they line up at all, or how they relate to each other even if they don't line up, is dependent on a) the width of the individual slits and b) the separation of the slits. In other words, if you were to switch to a new Bahtinov mask by a different manufacturer, they might line up differently.

The troughs (a.k.a. "nulls") of the single slit pattern are a function of the slit width. If you make the slit narrower, the whole pattern becomes wider (and vice versa).

The additional, and more frequent, troughs/nulls of the double slit pattern depend on the separation of the slits. Move the slits closer together (while keeping the slit widths unchanged) and you'll get fewer additional nulls in the pattern. Move the slits farther apart and you'll get more frequent nulls.

With that, there isn't any general rule about the off-center peaks of the pattern, or even the average pixel value of an off-center lobe. It all depends on how these troughs/nulls line up. [Edit: in other words, I think the specifics of the 1293 vs. 648 observation is largely a matter of coincidence, and based on the fine particulars of your mask's slit width vs. slit separation details.]

---

A bit of a side note: If you're familiar with Fourier analysis, it might help here. As it happens, diffraction patterns are intimately related to Fourier analysis. Making a few assumptions about the setup, such as the slit widths and separations being << the distance to the detector, and applying various constants of proportionality, you'll find that in the limiting case the diffraction pattern is the magnitude squared of the spatial Fourier transform of the slits. [Edit: for a given, fixed wavelength of light.]

You can at least use this fact qualitatively if you just want a quick idea of what shape you can expect as a diffraction artifact for a given obstruction.

For example, it's no coincidence that the diffraction pattern of a single slit (in monochromatic light) is more-or-less that of \left( \mathrm{sinc}(x) \right)^2. In one dimension, a single slit is a rectangular function. What's the Fourier transform of a rectangular function? It's in the form of a sinc function. Square that [magnitude squared], and you get the same shape as the diffraction pattern. You can do the same sort of thing for the double slit.

I mention all this here because it might be easier to mess around with the math (at least qualitatively, maybe with the aid of a computer) than painstakingly cut up different masks every time you want to make a minor change to the slit pattern.

Don't get me wrong, I'm not saying that Fourier transforms are the end-all-be-all of diffraction theory. Diffraction theory can be a bit more nuanced. All I'm saying is that if you've built up an intuition with Fourier analysis, that intuition can be useful in predicting the effects of how changing the slits will change the diffraction pattern, at least in a qualitative sort of way.
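As a quick numerical illustration of that Fourier relationship (the aperture geometry in samples below is arbitrary, and only the shape of the result is meaningful): sample the 1-D aperture, take |FFT|², and you get the shape of the far-field pattern.

```python
import numpy as np

n = 1 << 16                      # samples across the mask plane
aperture_single = np.zeros(n)
aperture_double = np.zeros(n)

slit_w, slit_sep = 40, 200       # width and separation in samples (arbitrary)
c = n // 2
aperture_single[c - slit_w // 2 : c + slit_w // 2] = 1.0
for offset in (-slit_sep // 2, slit_sep // 2):
    aperture_double[c + offset - slit_w // 2 : c + offset + slit_w // 2] = 1.0

# Far-field (Fraunhofer) pattern ~ |Fourier transform of the aperture|^2.
def far_field(ap):
    return np.abs(np.fft.fftshift(np.fft.fft(ap))) ** 2

single = far_field(aperture_single)
double = far_field(aperture_double)

# Central peak: amplitude doubles (two slits), so intensity quadruples...
peak_ratio = double.max() / single.max()        # ~4
# ...but total energy only doubles (Parseval: twice the open area).
energy_ratio = double.sum() / single.sum()      # ~2
```

This is the same factor-of-4 peak and factor-of-2 total discussed earlier in the thread, recovered directly from the FFT.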
 
  • #80
Devin-M said:
That would make sense if I wanted an image as the final output, but all I want is the average ADU/pixel in the area of interest (with average noise ADU/pixel ignored). So I took the average noise per pixel and subtracted that from the average ADU/pixel in the area of interest.
You do want an image as the final output, because an image is nothing more than a bunch of data points. But that's mostly beside the point. What I'm getting at is that without carefully calibrating your image (your data) you can't draw meaningful conclusions. You don't know the average ADU for the diffraction pattern because the pattern's ADU's are mixed in with dark current ADU's and possibly a bias offset. Subtracting the average background noise does nothing because noise is not something that can be subtracted. All you're doing is finding the magnitude of the average variation in the pixels in some area and then subtracting that value from other pixels. Which is pointless as far as I know, as you're still left with all of the noise and you've introduced your own offset that has no purpose.
 
  • #81
Devin-M said:
@collinsmark would you be willing to upload your finalized TIFs / FITs of your stacked single / double slit files (and/or a RAW file or 2) here so I can closely inspect them… ?

https://u.pcloud.link/publink/show?code=kZtCzeVZwIlHPwhNQlfVXx7j9TTxPLteswcy
@Devin-M, I've uploaded the cropped versions of the data in TIFF files, in two [now three] different pixel formats.

The more detailed format is 32 bit, IEEE 754 floating point format. This should retain all the detail involved, but not all image manipulation programs work with this level of detail.

The second is with the data converted to 8bit (per color) unsigned integer. This is the more standard/more accessible format, but less detail per pixel. As a result some of the details are lost. For example, a lot of darker pixel values are simply 0. It's probably not a big deal, since most of the data at that level of detail was just noise anyway.

[Edit: I've also uploaded the TIFF files saved with 16bit, unsigned integer format too, if you'd rather work with that.]
 
  • #82
Drakkith said:
You do want an image as the final output, because an image is nothing more than a bunch of data points. But that's mostly beside the point. What I'm getting at is that without carefully calibrating your image (your data) you can't draw meaningful conclusions. You don't know the average ADU for the diffraction pattern because the pattern's ADU's are mixed in with dark current ADU's and possibly a bias offset. Subtracting the average background noise does nothing because noise is not something that can be subtracted. All you're doing is finding the magnitude of the average variation in the pixels in some area and then subtracting that value from other pixels. Which is pointless as far as I know, as you're still left with all of the noise and you've introduced your own offset that has no purpose.
The final output I desire is a ratio of 2 averages. I just tested my method and it works perfectly.

Test setup:
-Piece of paper illuminated by iPhone flashlight
-Nikon D800 with 600mm f/9 lens fitted

Method:
-I put the white paper on the ground (room lights off) and pointed the iPhone flashlight at the paper at close range.
-Next I set up the camera across the room on a tripod and focused.
-I took 2 exposures- one at 1/800th sec and one at 1/1600th sec (both at f/9 aperture, 6400 ISO/gain)
-Then I put the lens cap on & took 2 more exposures (dark frames), again at 1/800th sec & 1/1600th sec
-Next I imported the unmodified RAW files into the computer application RAWDigger
-Next I selected an area of interest (100x100 pixels) at the center of the image which shows part of the sheet of white paper
-Next I made note of the R AVG values within the area of interest in both the light frames and the dark frames
-After subtracting the dark-frame R AVG/px from the light-frame R AVG/px in the area of interest (using the same method I used previously), the 1/800th-second exposure's R average was exactly 2.00x the 1/1600th-second exposure's.

1/800th Dark Noise Avg 15.6
Light R Avg 199.6
199.6 - 15.6 = 184.0

1/1600th Dark Noise Avg 18.1
Light R Avg 110.1
110.1 - 18.1 = 92.0

184/92 = 2.00x higher
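The procedure can be sketched numerically. This is a toy model with synthetic Poisson frames (the array sizes, noise levels, and the RGGB layout are assumptions, not real camera data; a real test would load the RAW mosaics instead):

```python
import numpy as np

def red_avg(bayer, y0, x0, size=100):
    """Average the R-channel ADUs inside a size x size region of interest.

    Assumes an RGGB Bayer mosaic, so red photosites sit on even rows
    and even columns (1/4 of all pixels, i.e. 2500 samples per 100x100 ROI).
    """
    roi = bayer[y0:y0 + size, x0:x0 + size]
    return roi[0::2, 0::2].mean()  # red sites only

def dark_subtracted_avg(light, dark, y0, x0, size=100):
    """Subtract the dark-frame ROI average from the light-frame ROI average."""
    return red_avg(light, y0, x0, size) - red_avg(dark, y0, x0, size)

# Synthetic stand-ins for the 1/1600 s and 1/800 s exposures:
rng = np.random.default_rng(0)
dark = rng.poisson(16.0, (200, 200)).astype(float)   # bias + dark current
short = dark + rng.poisson(92.0, (200, 200))         # half the signal
long_ = dark + rng.poisson(184.0, (200, 200))        # twice the signal

ratio = dark_subtracted_avg(long_, dark, 50, 50) / dark_subtracted_avg(short, dark, 50, 50)
print(round(ratio, 2))  # ~2.0 for a linear sensor
```

For a linear sensor the dark-subtracted averages should scale with exposure time, which is exactly what the ratio checks.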

11.png

12.png

13.png

14.png

2343F25E-B9E0-43D3-B241-BB775CB161A5.jpeg
 
Last edited:
  • #83
Devin-M said:
I just tested my method and it works perfectly.
Your method is flawed and there is a very good reason it appears to work in this situation but will not work on a real astro photo, but I leave that to you to discover, as you don't appear to want my assistance or advice.
 
  • #84
Drakkith said:
Your method is flawed and there is a very good reason it appears to work in this situation but will not work on a real astro photo, but I leave that to you to discover, as you don't appear to want my assistance or advice.
@Drakkith I do want your assistance & advice. I was just testing the error bars on the camera to the best of my abilities.
 
  • #85
Devin-M said:
@Drakkith I do want your assistance & advice. I was just testing the error bars on the camera to the best of my abilities.
Forgive me if I'm a bit snippy. I've had a rough couple of days.

First, I don't understand why you're doing anything with noise. Why are you subtracting the average noise value? What does that accomplish?
 
  • #86
Good clean derivation:
http://lampx.tugraz.at/~hadley/physikm/apps/2single_slit.en.php
Drakkith said:
First, I don't understand why you're doing anything with noise. Why are you subtracting the average noise value? What does that accomplish?
The noise gets rectified and so produces part of the DC offset. A false DC offset in a ratio is a bad thing.
So I think the averaging should be OK for the offset, but I do not understand why the OP does not look at the pixel-by-pixel ratios, which may indicate why the OP's results seem like junk. Use all the data you have. Can you scale and subtract images in your software?
Also why do the full frame shots above indicate saturated pixels? Are they just "bad" pixels?
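The pixel-by-pixel ratio idea can be sketched like this (the arrays are synthetic stand-ins for the dark-subtracted red-channel data; the signal levels and noise threshold are illustrative assumptions):

```python
import numpy as np

# Instead of a single ROI average, form a per-pixel ratio map of
# (double slit - dark) / (single slit - dark). Where the signal
# dominates the noise, the map should cluster around the true ratio.
rng = np.random.default_rng(1)
dark = np.full((100, 100), 37.0)
single = dark + rng.poisson(90.0, (100, 100))
double = dark + rng.poisson(180.0, (100, 100))

num = double - dark
den = single - dark
mask = den > 5 * np.sqrt(den.clip(min=1))   # skip pixels too noisy to divide
ratio_map = np.where(mask, num / np.where(mask, den, 1), np.nan)

print(np.nanmedian(ratio_map))  # clusters around 2 where the signal dominates
```

A ratio map also reveals structure a single average hides, e.g. regions where the ratio deviates systematically from 2.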
 
  • Like
Likes collinsmark
  • #87
I managed to do a comparison between @collinsmark 's measurements and my own.

I looked at just the 1st-order maximum (not to be confused with the 0th-order maximum) with the single slit, and the same region with the double slit, in both @collinsmark 's data and my own.

I'm satisfied the results are consistent with twice as many photons with the double slit in the same area where the 1st-order maximum forms with a single slit.

7.jpg


Devin-M's measurements:
https://u.pcloud.link/publink/show?code=kZcjE2VZR1eBfOCgGAYfuGOv9EMgPH3KIR07
(Images 9205 & 9213)

Single Noise R Avg 39.2
Single Light R Avg 128.8
128.8 - 39.2 = 89.6

Double Noise R Avg 37.1
Double Light R Avg 218.5
218.5-37.1 = 181.4

181.4 / 89.6 = 2.02x higher

3.jpg

4.jpg

5.jpg

6.jpg


@collinsmark 's measurements:
https://u.pcloud.link/publink/show?code=kZtCzeVZwIlHPwhNQlfVXx7j9TTxPLteswcy
Single Slit Avg ADU/px = 115.2
Double Slit Avg ADU/px = 210.9
210.9/115.2 = 1.83x higher

1.jpg

2.jpg
 
  • #88
hutchphd said:
The noise gets rectified and so produces part of the DC offset.
What is the definition of 'noise' here? My understanding is that noise is the random variation in the pixels that scales as the square root of the signal.
 
  • #89
I suspect the slight discrepancy between @collinsmark ’s measurement and mine is that his noise subtraction removes dark current and read noise, whereas the way I subtracted noise would be expected to remove read noise, dark current, and the potential background sky glow from light pollution.

For example if some “sky glow” ADUs were removed from @collinsmark ‘s measurements of both the single & double slits, it could result in his measurement for the double slit being even closer to 2.00x higher (his reading was 1.83x, mine 2.02x).

For example if the single slit ADU/px was (for example) 60 and the double slit was 110, and 10 ADU/px was background sky glow, and we subtract 10 ADU/px from each, we go from 110/60=1.83x to exactly 100/50=2.00x.


Edit: nevermind, it probably doesn’t account for the discrepancy because noise from the background sky glow would be expected to double with the double slit, so the math wouldn’t work out.
 
Last edited:
  • #90
@collinsmark The only other uncertainty I have… when you cropped those TIF files, perhaps in Photoshop, did they possibly pick up an sRGB or Adobe RGB gamma curve?

I believe simply saving a TIF in photoshop with either an sRGB or Adobe RGB color profile will non-linearize the data by applying a gamma curve. This doesn’t apply to RAW files. (see below)

https://community.adobe.com/t5/camera-raw-ideas/p-output-linear-tiffs/idc-p/12464761

95235C8E-C35C-42DD-83E0-2CB6648020D8.jpeg

8DB405F0-9DBF-4C9F-BA5F-BBE80178768A.jpeg

Source: https://www.mathworks.com/help/images/ref/lin2rgb.html
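For reference, this is the sRGB transfer function that lin2rgb applies. A small sketch shows how it distorts a true 2:1 linear ratio (the input values here are arbitrary examples):

```python
# The sRGB encoding curve (IEC 61966-2-1). If this gets baked into a
# TIFF, ADU ratios computed afterwards no longer reflect photon-count
# ratios -- which is why the data must stay linear for this experiment.
def srgb_encode(v):
    """Linear [0, 1] -> sRGB-encoded [0, 1]."""
    if v <= 0.0031308:
        return 12.92 * v                      # linear toe segment
    return 1.055 * v ** (1 / 2.4) - 0.055     # gamma segment

lin_single, lin_double = 0.10, 0.20           # a true 2:1 linear ratio
enc_single = srgb_encode(lin_single)
enc_double = srgb_encode(lin_double)
print(lin_double / lin_single)                # 2.0
print(enc_double / enc_single)                # ~1.39 -- ratio destroyed by gamma
```

So a gamma-encoded double slit frame would read well under 2x the single slit frame even if the photon counts really did double.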
 
  • #91
On second thought, I cropped @collinsmark ’s TIFs in Photoshop myself during my analysis, so maybe I inadvertently corrupted his data.
 
  • #92
@collinsmark I did a little more digging and found out that the 16bit and 8bit files you uploaded appear to have a “Dot Gain 20%” embedded color profile whereas the 32bit files have a “Linear Grayscale Profile” — and I used the 16bit files for my analysis of your data, so my analysis of your data was probably a bit off.

15.jpg

16.jpg
 
  • #93
Drakkith said:
What is the definition of 'noise' here? My understanding is that noise is the random variation in the pixels that scales as the square root of the signal.
I tend to use "noise" to mean anything that is not signal. Here that would mean stray and scattered light, noise from the photodiode junction (dark current) and associated electronics. It is the nature of these systems that the "noise" will average to a nonzero offset. Not all of it is simply characterized nor signal dependent. But it does need to be subtracted for accuracy of any ratio.
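The two senses of "noise" in this thread can be separated numerically: the random per-pixel scatter (the standard deviation, which for shot noise scales as the square root of the signal) versus the mean offset (bias plus dark current), which is what gets subtracted. A synthetic illustration (electron counts here are made up):

```python
import numpy as np

# For Poisson (shot) noise, the per-pixel scatter grows as sqrt(mean),
# while the mean itself is the repeatable offset you can subtract.
rng = np.random.default_rng(2)
stats = {}
for mean_electrons in (100, 400, 1600):
    frame = rng.poisson(mean_electrons, 100_000)
    stats[mean_electrons] = (frame.mean(), frame.std())
    print(mean_electrons, round(frame.std(), 1))  # ~10, ~20, ~40
```

Averaging many pixels (or many frames) pins down the offset precisely, but the scatter in any individual pixel remains.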
 
  • #94
Devin-M said:
Edit: nevermind, it probably doesn’t account for the discrepency because noise from the background sky glow would be expected to double with the double slit, so the math wouldn’t work out.
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'? I suspect you and I may mean something different when we use that word.
 
  • #95
Drakkith said:
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'?
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100x100 pixels, red is 1/4 of the Bayer pattern so you have 2500 samples. So you add up the ADUs of all those pixels combined, then divide by the number of pixels. That’s the average noise per pixel. If you look pixel by pixel, the values will be all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the image.
 
  • #96
Devin-M said:
@collinsmark The only other uncertainty I have… when you cropped those TIF files, perhaps in photoshop, did they possibly pick up a sRGB or Adobe RGB gamma curve?

I believe simply saving a TIF in photoshop with either an sRGB or Adobe RGB color profile will non-linearize the data by applying a gamma curve. This doesn’t apply to RAW files. (see below)

Pixinsight gives the option to embed an ICC profile when saving the images as TIFF. I had thought that I had that checkbox unchecked (i.e., I thought that I did not include an ICC profile in the saved files). Even if a ICC profile was included in the file, your program should be able to ignore it; it doesn't affect the actual pixel data directly.

But yes, you're correct that you should not let your image manipulation program apply a gamma curve. That will mess up this experiment. You need to work directly with the unmodified data.

Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
 
Last edited:
  • #97
Devin-M said:
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100x100 pixels, red is 1/4 of the Bayer pattern so you have 2500 samples. So you add up the ADUs of all those pixels combined, then divide by the number of pixels. That’s the average noise per pixel. If you look pixel by pixel, the values will be all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the image.
Okay, you're measuring the combined dark current + bias + background signal, using a large sample of pixels to average out the noise (the randomness in each pixel) to get a consistent value. No wonder I've been so confused with what you've been doing.

So yes, your previous method works despite what I said previously as long as you're sampling a background area as free of stars and other background objects as possible.
 
  • #98
collinsmark said:
Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.
 
  • #99
Devin-M said:
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.

I've uploaded the data, this time in 16-bit, unsigned integer, in FITS file format.

I don't know what you mean by "RAW" file format. "RAW" is usually used as an umbrella term for a format with pixel data that hasn't been manipulated. (For example, Nikon's "RAW" file format is actually "NEF.")

The way I gathered the data, N.I.N.A stores the data straight from the camera to XISF file format. XISF file format is similar to FITS, but more extensible. XISF or FITS is about as raw as my system can do.

The files I uploaded though are dark-frame calibrated and stacked.
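Dark-frame calibration plus stacking amounts to something like the following (synthetic arrays stand in for the real frames, and the signal/dark levels are made up; actual data would be read from the FITS or XISF files, e.g. with astropy.io.fits):

```python
import numpy as np

# Minimal sketch: average the dark frames into a master dark, subtract
# it from each light frame, then average the calibrated lights.
rng = np.random.default_rng(3)
true_signal = 50.0
dark_level = 20.0

darks = [rng.poisson(dark_level, (64, 64)).astype(float) for _ in range(19)]
lights = [rng.poisson(dark_level + true_signal, (64, 64)).astype(float)
          for _ in range(17)]

master_dark = np.mean(darks, axis=0)            # combine the dark frames
calibrated = [f - master_dark for f in lights]  # remove offset from each light
stacked = np.mean(calibrated, axis=0)           # then average the lights

print(round(stacked.mean(), 1))  # ~50: offset removed, noise reduced ~sqrt(17)
```

Stacking after calibration is the key ordering: the random noise averages down, while the dark/bias offset has already been taken out.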
 
  • #100
This thread just made me realize I've been unnecessarily degrading my astro-photos for years...

The mistake? I assumed Adobe Lightroom would convert RAW files into 16-bit TIFs linearly and without adding color noise before I stacked them... turns out that's not the case. This matters because the next step is stacking 20-60 of those TIFs to reduce the noise, so you definitely don't want to add noise before you reduce it.

The solution? It turns out the app I've been using in this thread to inspect the RAW files will also convert RAW files into 16-bit tifs (as far as I can tell) linearly and without modifying the image values or adding noise.

New process (converting RAW NEFs to 16 bit TIFs in RawDigger before stacking):
IMG-2.jpg


Old Process (same exact RAW files but converting RAW NEFs to 16 bit TIFs in Adobe Lightroom before stacking):
Casper_M78_620x620_double_stretched.jpg


Wow it's a world of difference.

Details:

Meade 2175mm f/14.5 Maksutov Cassegrain with Nikon D800 on Star Watcher 2i equatorial mount
17x stacked 90 second exposures @ 6400iso + 19 dark calibration frames + 5 flat calibration frames
RAW NEF files converted to 16bit TIFs in RawDigger
Stacking in Starry Sky Stacker
Final Histogram Stretch in Adobe Lightroom AFTER(!!) stacking

5625643.png


5625643-1.png


5625643-2.png


7600095-1.jpeg


7600095.jpeg


Old 100% Crop:
100pc_old.jpg

New 100% Crop:
100pc_new.jpg
 
Last edited:
  • Like
Likes vanhees71, collinsmark and hutchphd