# Analyzing Spectra

1. Jul 6, 2009

### cepheid

Staff Emeritus
I have two spectra of a star across the visible and NIR. One spectrum comes from an online resource (CALSPEC) for spectrophotometric standards, and it provides a wavelength array along with a corresponding array of flux values (in units of erg s⁻¹ cm⁻² Å⁻¹).

The second spectrum is the observed spectrum of this standard star (using the LRIS spectrometer on Keck). Now, the data values at each wavelength for this spectrum are presumably proportional to the number of photons at that wavelength. So if I multiply each value by hc/λ, I then have data values that are proportional to the photon energy at that wavelength (in ergs). I want to compare these two spectra so that I can derive the combined atmospheric and instrumental response of the system on the observing night in question. Therefore, I want to somehow get these spectra into the same units:
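The counts-to-energy conversion described above is simple to sketch in Python with NumPy; the wavelength and count arrays below are placeholders, not real LRIS data:

```python
import numpy as np

# CGS constants
h = 6.62607015e-27   # Planck constant [erg s]
c = 2.99792458e10    # speed of light [cm/s]

# Hypothetical spectrometer output: wavelengths (Å) and raw data values,
# assumed proportional to the number of photons per pixel
wl_ang = np.array([4000.0, 5500.0, 7000.0, 8500.0])
counts = np.array([1200.0, 1850.0, 1600.0, 900.0])

# Multiply each value by hc/λ (λ converted from Å to cm) to get values
# proportional to energy in erg
energy = counts * h * c / (wl_ang * 1.0e-8)
```

Note that the result is only *proportional* to energy; the unknown proportionality constant gets absorbed into the system response later.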

I'm not too worried about the per cm² and per second parts, because I'm assuming that each data point is the energy corresponding to the flux integrated over the exposure time of the detector and over the collecting area that roughly corresponds to each pixel. Therefore, the difference between these "energy" values and the corresponding "flux" values is a constant that can be absorbed into the system response. Is my assumption correct?

It's the per-angstrom part that is especially confusing to me. Although I said that the data points in the observed spectrum correspond to the energies at each wavelength, it doesn't make sense to talk about the energy at a single, specific wavelength, does it? I say that because the spectrum being observed varies continuously with wavelength, and there is also a limit to how narrow a line can be, and to how fine the spectral resolution of the instrument can be. Therefore I imagine that the only thing you can actually measure is the energy integrated over some small wavelength interval.

In this case, since the spectrograph uses a dispersive element (a grism), a given wavelength interval corresponds to a given physical distance on the detector. I am therefore assuming that each data point is the energy already integrated over the wavelength band corresponding to the width of one pixel. Does this assumption seem reasonable? I don't have enough detailed information about the instrument to go any further.

If this assumption is true, then to get the per-angstrom part it would suffice to divide each data point in the observed spectrum by the wavelength spacing (bin width), which is constant. Then I will be able to compare the two spectra and derive the instrumental response. What do you think?
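Assuming a uniform wavelength grid as described, the divide-by-bin-width step and the response ratio might look like this; all the arrays here are synthetic stand-ins, and the standard is put onto the observed grid with simple linear interpolation:

```python
import numpy as np

# Hypothetical observed spectrum: uniform wavelength grid (Å) and values
# proportional to energy collected per pixel (erg)
obs_wl = np.linspace(4000.0, 9000.0, 1001)
obs_energy = np.exp(-0.5 * ((obs_wl - 6000.0) / 1500.0) ** 2)  # placeholder shape

# Divide by the (constant) bin width to get energy per angstrom
dlam = obs_wl[1] - obs_wl[0]          # Å per pixel
obs_per_ang = obs_energy / dlam

# Hypothetical CALSPEC-style standard: wavelengths (Å) and flux
# (erg s^-1 cm^-2 Å^-1); the power-law shape is just a placeholder
std_wl = np.linspace(3000.0, 10000.0, 500)
std_flux = 1.0e-13 * (std_wl / 5500.0) ** -2

# Interpolate the standard onto the observed grid, then take the ratio.
# The overall scale factor (exposure time, collecting area) is absorbed
# into this response curve.
std_on_obs = np.interp(obs_wl, std_wl, std_flux)
response = obs_per_ang / std_on_obs
```

The `response` array is then the combined atmospheric plus instrumental throughput, up to a constant, sampled on the observed wavelength grid.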

2. Jul 6, 2009

### fleem

Hmmm, I doubt it. Lots of fun nonlinear responses can happen in a light sensor and its associated circuitry, and all those curves vary with its temperature. 'Course it mainly depends on what precision you want. But things to watch out for are:

- power non-linearity;
- time non-linearity (slew rates, etc.);
- adjacent-pixel behavior;
- correlated pixel noise (this is the most likely problem, and it's a real bear because it varies with incident light power; sometimes it helps to take a dark capture, or several blank captures at different intensities, and subtract the appropriate one from the other captures. You might also mechanically move the spectrum over the sensor array so different pixels see the same frequency at different times, to help integrate out the pixel idiosyncrasies);
- variations in uncorrelated noise with temperature;
- ambient RF interference;

and again, all of this varies with temperature.
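The dark-capture idea above is commonly done by combining several dark frames and subtracting the result from the science frame. A minimal sketch with synthetic frames (the levels, sizes, and noise figures are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frames: one science exposure and ten dark captures,
# each an 8x8 pixel patch with Gaussian noise
science = 100.0 + rng.normal(0.0, 2.0, size=(8, 8))
darks = 5.0 + rng.normal(0.0, 1.0, size=(10, 8, 8))

# Median-combine the darks to beat down their noise, then subtract
master_dark = np.median(darks, axis=0)
corrected = science - master_dark
```

The median combine is one simple choice; with darks taken at several intensities, as suggested above, you would instead pick (or interpolate) the dark whose level matches the science exposure.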

Seems so. Hopefully that bandwidth is easily and reliably predictable per pixel.

Makes sense.

3. Jul 6, 2009

### cepheid

Staff Emeritus