I'm currently carrying out an experiment on Fraunhofer diffraction. It involves shining a laser beam through neutral density filters, a lens, and a diffraction grating to create a diffraction pattern, which is then picked up with a CCD camera to measure the intensity of the principal maxima.
However, I'm having an issue with my results. The diffraction pattern itself is clear, but the measured intensity of the peaks isn't decreasing as much as expected as the neutral density filter value increases (i.e. from D = 1.8, D = 2.0, ... to D = 3.4). According to the equation
D = log10(Io/Id)
where:
D = value of the neutral density filter
Id = light intensity after the filter
Io = original light intensity
the results don't give a constant value for the original light intensity of the laser; instead, the inferred Io increases with each neutral density filter. I was wondering if anyone could help me explain what the issue is and identify any sources of error I could correct for (I've already corrected for background noise, although this could still potentially be a source of error).
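To make the inconsistency concrete, here is a minimal sketch of the consistency check described above: rearranging D = log10(Io/Id) gives Io = Id * 10^D, so every filter should imply the same Io if the filters and detector behave ideally. The intensity values below are purely hypothetical placeholders, not my actual data.

```python
import math

# Hypothetical measured peak intensities (arbitrary CCD counts) for each
# neutral density filter value D -- substitute your own measurements here.
measurements = {1.8: 520.0, 2.0: 340.0, 3.4: 19.0}

# Rearranging D = log10(Io/Id) gives Io = Id * 10**D.
# For ideal filters, every implied Io should agree.
implied = {D: Id * 10**D for D, Id in measurements.items()}
for D in sorted(implied):
    print(f"D = {D:.1f}: implied Io = {implied[D]:.4g}")

# A quick figure of merit: the spread of implied Io values relative to
# their mean. A large ratio signals a systematic problem (e.g. detector
# nonlinearity or stray light) rather than random noise.
values = list(implied.values())
spread = (max(values) - min(values)) / (sum(values) / len(values))
print(f"relative spread of implied Io: {spread:.2f}")
```

If the implied Io grows systematically with D, that pattern itself is diagnostic: it suggests a signal floor (stray light or residual background) or CCD nonlinearity rather than random measurement error.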