I have two spectra of a star across the visible and NIR. One spectrum comes from an online resource (CALSPEC) for spectrophotometric standards, and it provides a wavelength array along with a corresponding array of flux values (in units of erg s⁻¹ cm⁻² Å⁻¹).
The second spectrum is the observed spectrum of this standard star (taken with the LRIS spectrometer on Keck). The data values at each wavelength for this spectrum are presumably proportional to the number of photons detected at that wavelength. So if I multiply each value by hc/λ, I then have data values that are proportional to the photon energy at that wavelength (in erg). I want to compare these two spectra so that I can derive the combined atmospheric and instrumental response of the system on the observing night in question. Therefore, I want to somehow get these spectra into the same units:
I'm not too worried about the per cm² and per second, because I'm assuming that each data point is the energy corresponding to the flux integrated over the exposure time and over the collecting area of the telescope, binned onto the detector so that each value roughly corresponds to one pixel. Under that assumption, the difference between these "energy" values and the corresponding "flux" values is a constant that can be absorbed into the system response. Is my assumption correct?
It's the per-wavelength part that is especially confusing to me. Although I said that the data points in the observed spectrum correspond to the energies at each wavelength, it doesn't make sense to talk about the energy at a single, specific wavelength, does it? I say that because the spectrum being observed varies continuously with wavelength, and there is also a limit to how narrow a spectral line can be and how fine the spectral resolution of the instrument can be. Therefore I imagine that the only thing you could actually measure is the energy integrated over some small wavelength interval.

In this case, since the spectrograph uses a dispersive element (a grism), a given wavelength interval corresponds to a given physical distance on the detector. So I am assuming that each data point is the energy already integrated over a wavelength band corresponding to the width of a pixel. Does this assumption seem reasonable? I don't have enough detailed info about the instrument to go any further.

If this assumption is true, then to get the per-angstrom part it would suffice to divide each data point in the observed spectrum by the wavelength spacing (bin width), which is constant. Then I will be able to compare the two spectra and derive the instrumental response. What do you think?
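For concreteness, here is a minimal sketch of the procedure I have in mind, under the assumptions above (counts ∝ photon number per pixel, uniform wavelength sampling, and all the per-cm²/per-second factors absorbed into an overall constant). The function and variable names are my own, and the interpolation onto the observed wavelength grid is just `numpy.interp`:

```python
import numpy as np

# Physical constants in CGS units
h = 6.626e-27   # erg s
c = 2.998e10    # cm / s

def counts_to_flux_density(wavelength_A, counts, exptime_s):
    """Convert extracted counts per pixel (assumed proportional to photon
    number) into a quantity proportional to erg s^-1 Å^-1.

    wavelength_A : pixel-center wavelengths in Å (assumed uniform spacing)
    counts       : extracted counts per pixel
    exptime_s    : exposure time in seconds
    """
    lam_cm = wavelength_A * 1e-8              # Å -> cm
    energy = counts * h * c / lam_cm          # proportional to erg per pixel
    dlam = np.median(np.diff(wavelength_A))   # bin width in Å
    # Dividing by bin width and exposure time gives something proportional
    # to erg s^-1 Å^-1; the unknown area/throughput constant is absorbed
    # into the response curve derived below.
    return energy / (dlam * exptime_s)

def response_curve(wavelength_A, observed_flux, calspec_wave, calspec_flux):
    """System response (to within a constant): observed over reference,
    with the CALSPEC spectrum interpolated onto the observed grid."""
    reference = np.interp(wavelength_A, calspec_wave, calspec_flux)
    return observed_flux / reference
```

In practice one would smooth or fit the resulting response curve (and mask strong stellar absorption lines and telluric bands) before dividing it into a science spectrum, but the ratio above is the core of the calibration.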