Does Destructive Interference Affect Starlight in a Double Slit Experiment?

Summary
Destructive interference in a double slit experiment does not reflect light back to the source; instead, it redistributes energy spatially, leading to reduced energy in dark regions and increased energy in bright regions. The discussion highlights that energy conservation does not imply equal energy across different experimental setups, as variations in configurations can yield different energy distributions. The results from a double slit setup showed less than expected energy in the fringes compared to a non-parallel slit configuration, raising questions about energy deposition. The participants emphasized the importance of understanding the relationship between slit configurations and energy measurements, particularly regarding photon counts and intensity. Ultimately, the conversation underscores the complexity of interpreting energy distribution in interference patterns.
  • #91
On second thought, I myself cropped @collinsmark's TIFs in Photoshop during my analysis, so maybe I inadvertently corrupted his data.
 
  • #92
@collinsmark I did a little more digging and found that the 16-bit and 8-bit files you uploaded appear to have a "Dot Gain 20%" embedded color profile, whereas the 32-bit files have a "Linear Grayscale" profile. Since I used the 16-bit files, my analysis of your data was probably a bit off.

 
  • #93
Drakkith said:
What is the definition of 'noise' here? My understanding is that noise is the random variation in the pixels that scales as the square root of the signal.
I tend to use "noise" to mean anything that is not signal. Here that would include stray and scattered light, noise from the photodiode junction (dark current), and the associated electronics. It is the nature of these systems that the "noise" averages to a nonzero offset. Not all of it is simply characterized, nor is all of it signal dependent. But it does need to be subtracted for any ratio to be accurate.
 
  • #94
Devin-M said:
Edit: never mind, it probably doesn't account for the discrepancy, because noise from the background sky glow would be expected to double with the double slit, so the math wouldn't work out.
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'? I suspect you and I may mean something different when we use that word.
 
  • #95
Drakkith said:
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'?
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100x100 pixels, red is 1/4 of the Bayer pattern, so you have 2500 samples. You add up the ADUs of all those pixels combined, then divide by the number of pixels. That's the average noise per pixel. If you look pixel by pixel, the values are all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the frame.
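For what it's worth, here's a minimal sketch of that averaging, assuming the rawpy library, an RGGB Bayer layout, and a hypothetical lens-cap frame named dark.NEF (the 100x100 window position is arbitrary):

```python
# Sketch only: average the dark/bias ADU level over the red Bayer sites of a
# 100x100-pixel window, as described above. Assumes an RGGB mosaic; the file
# name and window coordinates are placeholders.
import numpy as np
import rawpy

with rawpy.imread("dark.NEF") as raw:
    adu = raw.raw_image_visible.astype(np.float64)  # undemosaiced ADU values

window = adu[1000:1100, 1000:1100]   # 100x100-pixel sample region
red = window[0::2, 0::2]             # red sites: every other row/column (2500 samples)

print("mean ADU per red pixel:", red.mean())
```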
 
  • #96
Devin-M said:
@collinsmark The only other uncertainty I have… when you cropped those TIF files, perhaps in Photoshop, did they possibly pick up an sRGB or Adobe RGB gamma curve?

I believe simply saving a TIF in Photoshop with either an sRGB or Adobe RGB color profile will non-linearize the data by applying a gamma curve. This doesn't apply to RAW files. (see below)

Pixinsight gives the option to embed an ICC profile when saving the images as TIFF. I had thought that I had that checkbox unchecked (i.e., I thought that I did not include an ICC profile in the saved files). Even if an ICC profile was included in the file, your program should be able to ignore it; it doesn't affect the actual pixel data directly.

But yes, you're correct that you should not let your image manipulation program apply a gamma curve. That will mess up this experiment. You need to work directly with the unmodified data.
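As an illustration of why that matters, here's a rough sketch of undoing a plain sRGB transfer curve, assuming that is the curve that was applied (an embedded "Dot Gain 20%" or Adobe RGB profile uses a different curve, so this is only a sanity check, not a general fix):

```python
# Sketch: convert sRGB-gamma-encoded values back to linear light, assuming the
# standard IEC sRGB curve was applied on save. Real ICC profiles may differ.
import numpy as np

def srgb_to_linear(x):
    """x: encoded values scaled to 0..1; returns linear-light values."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

# A mid-scale 16-bit value of 32768 decodes to only ~0.214 in linear light,
# which is why intensity ratios taken on gamma-encoded data come out wrong.
print(srgb_to_linear(32768 / 65535))
```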

Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
 
Last edited:
  • #97
Devin-M said:
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100x100 pixels, red is 1/4 of the Bayer pattern, so you have 2500 samples. You add up the ADUs of all those pixels combined, then divide by the number of pixels. That's the average noise per pixel. If you look pixel by pixel, the values are all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the frame.
Okay, you're measuring the combined dark current + bias + background signal, using a large sample of pixels to average out the noise (the randomness in each pixel) to get a consistent value. No wonder I've been so confused with what you've been doing.

So yes, your earlier method works, despite what I said before, as long as you're sampling a background area as free of stars and other background objects as possible.
 
  • #98
collinsmark said:
Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.
 
  • #99
Devin-M said:
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.

I've uploaded the data, this time in 16-bit, unsigned integer, in FITS file format.

I don't know what you mean by "RAW" file format. "RAW" is usually used as an umbrella term for a format with pixel data that hasn't been manipulated. (For example, Nikon's "RAW" file format is actually "NEF.")

The way I gathered the data, N.I.N.A. stores it straight from the camera in XISF format. XISF is similar to FITS, but more extensible. XISF or FITS is about as raw as my system can do.

The files I uploaded though are dark-frame calibrated and stacked.
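If it helps, here's a minimal sketch for reading those files with astropy (an assumption about tooling; the file name below is a placeholder, not the actual upload):

```python
# Sketch: load one of the 16-bit unsigned-integer FITS files and print basic
# statistics. "double_slit.fits" is a placeholder name.
import numpy as np
from astropy.io import fits

with fits.open("double_slit.fits") as hdul:
    data = hdul[0].data.astype(np.float64)  # calibrated, stacked pixel array

print(data.shape, data.min(), data.max(), data.mean())
```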
 
  • #100
This thread just made me realize I've been unnecessarily degrading my astro-photos for years...

The mistake? I assumed Adobe Lightroom would convert RAW files into 16-bit TIFs linearly and without adding color noise before I stacked them... it turns out that's not the case. This matters because the next step is stacking 20-60 of those TIFs to reduce noise, so you definitely don't want to add noise before you reduce it.

The solution? It turns out the app I've been using in this thread to inspect the RAW files will also convert RAW files into 16-bit TIFs, as far as I can tell linearly and without modifying the image values or adding noise.
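To make the point concrete, here's a bare-bones sketch of the stacking step, assuming the tifffile library and already-registered, linear 16-bit TIFs in a hypothetical lights/ folder (the real stacker also does alignment and dark/flat calibration, which this ignores):

```python
# Sketch: mean-stack a set of linear TIFs. Any gamma curve or extra noise
# introduced before this step gets baked into the result, which is why
# converting linearly first matters.
import glob
import numpy as np
import tifffile

paths = sorted(glob.glob("lights/*.tif"))
frames = [tifffile.imread(p).astype(np.float64) for p in paths]
stack = np.mean(frames, axis=0)  # random noise drops roughly as 1/sqrt(len(paths))

tifffile.imwrite("stacked_linear.tif", stack.astype(np.float32))
```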

New process (converting RAW NEFs to 16-bit TIFs in RawDigger before stacking):
[image: IMG-2.jpg]


Old process (the same exact RAW files, but converting the NEFs to 16-bit TIFs in Adobe Lightroom before stacking):
[image: Casper_M78_620x620_double_stretched.jpg]


Wow it's a world of difference.

Details:

Meade 2175mm f/14.5 Maksutov-Cassegrain with Nikon D800 on Star Watcher 2i equatorial mount
17 stacked 90-second exposures @ ISO 6400 + 19 dark calibration frames + 5 flat calibration frames
RAW NEF files converted to 16-bit TIFs in RawDigger
Stacking in Starry Sky Stacker
Final histogram stretch in Adobe Lightroom AFTER(!!) stacking

Old 100% Crop:
[image: 100pc_old.jpg]

New 100% Crop:
[image: 100pc_new.jpg]
 
Last edited:
  • Like
Likes vanhees71, collinsmark and hutchphd
  • #101
collinsmark said:
Here are the resulting plots:

Figure 8: Both slits open plot.

Figure 9: Left slit covered plot.

Figure 10: Right slit covered plot.

The shapes of the diffraction/interference patterns also agree with theory.

Notice that the central peak of the both-slits-open plot is, interestingly, 4 times as high as that of either single-slit plot.

So adding the second slit results in double the total photon detections and roughly 4x the intensity in certain places and significantly reduced intensity in others. That's consistent with conservation of energy, but how do you rule out destruction of energy in some places and creation of energy in others, proportional to the square of the change in field strength (i.e., 2 overlapping constructive waves but 2^2 detected photons in those regions)?
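For reference, the textbook arithmetic for two equal, ideal slits (an idealized sketch, not the measured data): with phase difference ##\delta## between the slits,

$$E_{\text{tot}} = E_1 e^{+i\delta/2} + E_1 e^{-i\delta/2} = 2E_1\cos\!\left(\frac{\delta}{2}\right) \;\;\Rightarrow\;\; I(\delta) = 4I_1\cos^2\!\left(\frac{\delta}{2}\right), \qquad \langle I\rangle_\delta = 2I_1 .$$

The bright fringes reach four times a single slit's intensity and the dark fringes drop to nearly zero, while the average over the whole pattern is exactly twice a single slit, which is the redistribution being asked about.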

To further illustrate the question, at 2:38 in the video below, when he uncovers one half of the interferometer, doubling the light passing through it, the detection screen goes completely dark:
 
  • #102
Devin-M said:
That's consistent with conservation of energy, but how do you rule out destruction of energy in some places and creation of energy in others, proportional to the square of the change in field strength (i.e., 2 overlapping constructive waves but 2^2 detected photons in those regions)?
Conservation of energy is all that I require. The rest is your problem.
 
  • Like
Likes Vanadium 50
  • #103
At exactly 3:00 in the video he uncovers the mirror on the right arm of the interferometer, and the light on the detector screen that had been bouncing off the left mirror goes completely black. You can tell there is light headed toward that right-arm mirror because you can see the red dot on the index card before he uncovers it. What happens to the light that reflects off the right-hand mirror? It appears as if the portion that reflects off the right mirror and then passes straight through the beam splitter is being completely destroyed by having a 1/2-wavelength path-length difference with the beam reflecting off the left mirror.
 
Last edited:
  • #104
Did you watch the entire video? (Its title is "Where did the light go?")

?!
 
  • #105
Yes, at the very end he asks the viewer: if the path difference between the two mirrors is 1/2 wavelength, why does the light going to one screen cancel out while the light going to the other adds constructively? He doesn't answer this question and leaves it for the viewer. It seems like before he adds the second detection screen, half the light from the mirrors is going to the screen and the other half is going back to the laser source. When he adds the second detection screen, 1/2 is going to the 1st screen, 1/4 is going to the 2nd screen, and 1/4 is going back to the laser. As for why there's a half-wavelength path difference on one screen but not the other, I believe it's due to the thickness of the beam splitter: there must be an extra half wavelength of path length (or an odd multiple of 1/2 wavelength) for the light from the left mirror that goes back through the beam splitter. I'm not yet fully convinced that, when the light cancels out, it's really going back to the source, because once it's adding destructively the path length back to the source should be identical all the way back, so it should add destructively all the way back to the source.

For example, with a beam splitter of a different thickness it might be possible to have the light on both detection screens add constructively at the same time, or both add destructively at the same time.
 
Last edited:
  • #106
Devin-M said:
Yes, at the very end he asks the viewer: if the path difference between the two mirrors is 1/2 wavelength, why does the light going to one screen cancel out while the light going to the other adds constructively? He doesn't answer this question and leaves it for the viewer. It seems like before he adds the second detection screen, half the light from the mirrors is going to the screen and the other half is going back to the laser source. When he adds the second detection screen, 1/2 is going to the 1st screen, 1/4 is going to the 2nd screen, and 1/4 is going back to the laser. As for why there's a half-wavelength path difference on one screen but not the other, I believe it's due to the thickness of the beam splitter: there must be an extra half wavelength of path length (or an odd multiple of 1/2 wavelength) for the light from the left mirror that goes back through the beam splitter. I'm not yet fully convinced that, when the light cancels out, it's really going back to the source, because once it's adding destructively the path length back to the source should be identical all the way back, so it should add destructively all the way back to the source.

For example, with a beam splitter of a different thickness it might be possible to have the light on both detection screens add constructively at the same time, or both add destructively at the same time.
Don't forget about phase shifts. A reflected wave gets a phase shift of ##\pi##*. Are there any differences in the total phase shifts between the two beams in question (one on the laser side of the interferometer, and the other on the wall side of the interferometer), even if their path lengths are the same?

*(For the purposes here, ignore the nuances about reflections caused by the glass itself -- the reflections going from one index of refraction to another. That makes things overly complicated, and can be ignored for this discussion. A good beam splitter will have coatings to minimize these types of reflections anyway. Instead, treat all reflections as being caused solely by the very thin, half-mirrored surface -- the light either gets reflected with a phase shift of ##\pi##, or it gets transmitted with no phase shift at all.)
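One concrete way to do that bookkeeping, using the standard lossless 50/50 splitter convention ##r = i/\sqrt{2}##, ##t = 1/\sqrt{2}## (one of several equivalent idealizations; real coated splitters differ in detail, but any lossless splitter gives the same bottom line). Writing ##\Delta\phi## for the total phase difference between the two arms, including path length and reflection phases:

$$I_{\text{wall}} = I_0\cos^2\!\left(\frac{\Delta\phi}{2}\right), \qquad I_{\text{laser}} = I_0\sin^2\!\left(\frac{\Delta\phi}{2}\right), \qquad I_{\text{wall}} + I_{\text{laser}} = I_0 .$$

The two output ports are always complementary: whenever the wall screen goes dark, the missing light appears in the beam heading back toward the laser.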
 
  • Like
Likes vanhees71 and hutchphd
  • #108
Devin-M said:
On wikipedia it says “Also, this is referring to near-normal incidence—for p-polarized light reflecting off glass at glancing angle, beyond the Brewster angle, the phase change is 0°.”

https://en.m.wikipedia.org/wiki/Reflection_phase_change

Right. As I mentioned, let's ignore the air-glass or glass-air interfaces.

What's important here are the air-silver and glass-silver interfaces. (Well, on second thought, that's just an example of one type of beam splitter. See below for clarification.)

Here's an image from the Beam Splitter entry in Wikipedia:


Phase shift through a beam splitter with a dielectric coating.

Source: https://en.wikipedia.org/wiki/Beam_splitter#Phase_shift

And as mentioned in the article, the details depend upon the type and geometry of the beam splitter, so the specifics of the experiment might change a little depending on your choice of beam-splitter type. There isn't necessarily a one-size-fits-all simple answer to this topic.

But in any case, it will always work out, one way or the other, that energy is conserved. A decrease of energy via destructive interference somewhere will always lead to a corresponding increase of energy somewhere else. That much we can count on.
 
Last edited:
  • #109
collinsmark said:

Phase shift through a beam splitter with a dielectric coating.

Source: https://en.wikipedia.org/wiki/Beam_splitter#Phase_shift

That diagram predicts what will be observed in the experiment, but it tends to refute his claim that the destructively interfering beams on the 1st detection screen go back to the source.
 
  • #110
What I mean is, in this video at 4:24 he uses a quarter-wave plate on the right arm to rotate the polarization of that arm, which then prevents destructive interference even if there is a half-wavelength path-length difference. So suppose he used a quarter-wave plate on one arm of the original setup: would the combined output on both detection screens have higher intensity from the lack of destructive interference on either screen?

see 4:24:
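For reference, the standard field arithmetic for that case, assuming ideal optics:

$$I = \left|\mathbf{E}_1 + \mathbf{E}_2\right|^2 = \left|\mathbf{E}_1\right|^2 + \left|\mathbf{E}_2\right|^2 + 2\,\mathrm{Re}\!\left(\mathbf{E}_1\cdot\mathbf{E}_2^{*}\right),$$

and the cross term vanishes when the two beams have orthogonal polarizations. The fringes disappear, but each screen simply receives the sum of the two beams' powers; the total over both screens is unchanged, not increased.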
 
  • #111
collinsmark said:
Don't forget about phase shifts. A reflected wave gets a phase shift of ##\pi##*. Are there any differences in the total phase shifts between the two beams in question (one on the laser side of the interferometer, and the other on the wall side of the interferometer), even if their path lengths are the same?

*(For the purposes here, ignore the nuances about reflections caused by the glass itself -- the reflections going from one index of refraction to another. That makes things overly complicated, and can be ignored for this discussion. A good beam splitter will have coatings to minimize these types of reflections anyway. Instead, treat all reflections as being caused solely by the very thin, half-mirrored surface -- the light either gets reflected with a phase shift of ##\pi##, or it gets transmitted with no phase shift at all.)
provided the reflection is on the side of the medium with the larger index of refraction.
 
  • Like
Likes collinsmark
