Analyzing Spectra: Comparing Visible/NIR Stars

  • Thread starter: cepheid
  • Tags: Spectra
AI Thread Summary
The discussion focuses on comparing two spectra of a star, one from an online resource and the other observed using the LRIS spectrometer. The user seeks to derive the atmospheric and instrumental response by converting the observed spectrum's energy values into comparable units with the standard spectrum. Concerns arise regarding the assumptions about energy measurements at specific wavelengths and the integration over pixel bandwidths. Participants highlight potential issues with non-linear responses in light sensors, emphasizing the importance of understanding these variations for accurate data analysis. The user concludes that quantifying errors at each stage of data collection and analysis is essential to ensure reliable flux values for comparing line strengths.
cepheid
I have two spectra of a star across the visible and NIR. One spectrum comes from an online resource (CALSPEC) for spectrophotometric standards, and it provides a wavelength array along with a corresponding array of flux values (in units of erg s⁻¹ cm⁻² Å⁻¹).
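For concreteness, I'm reading the standard spectrum with something like the sketch below (I'm assuming the usual CALSPEC FITS binary-table layout with WAVELENGTH and FLUX columns; the file name is just a placeholder):

```python
# Sketch: load a CALSPEC-style standard spectrum.
# Assumes a FITS binary table with WAVELENGTH (Å) and FLUX
# (erg s^-1 cm^-2 Å^-1) columns; check the actual file for its column names.
from astropy.io import fits

with fits.open("standard_star_calspec.fits") as hdul:   # placeholder file name
    table = hdul[1].data
    wave_std = table["WAVELENGTH"].astype(float)   # Å
    flux_std = table["FLUX"].astype(float)         # erg s^-1 cm^-2 Å^-1
```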

The second spectrum is the observed spectrum of this standard star (using the LRIS spectrometer on Keck). Now, the data values at each wavelength for this spectrum are presumably proportional to the number of photons at that wavelength. So if I multiply each value by hc/λ, I then have data values that are proportional to the photon energy at that wavelength (in ergs). I want to compare these two spectra so that I can derive the combined atmospheric and instrumental response of the system on the observing night in question. Therefore, I want to somehow get these spectra into the same units:
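In code, that conversion looks roughly like this (a sketch; the file name is a placeholder for wherever the extracted LRIS wavelength and count arrays come from):

```python
import numpy as np

# Placeholder: two-column text file with wavelength (Å) and extracted counts
# per pixel for the LRIS spectrum.
wave_obs, counts = np.loadtxt("lris_extracted_spectrum.txt", unpack=True)

# h*c in erg·Å (h = 6.626e-27 erg s, c = 2.998e18 Å/s), so hc/λ[Å] is in erg.
hc = 6.626e-27 * 2.998e18

# counts ∝ photons per pixel, so multiplying by the photon energy hc/λ gives
# values proportional to the energy collected in each pixel.
energy_obs = counts * hc / wave_obs
```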

I'm not too worried about the per cm² and per second parts, because I'm assuming that each data point is the energy corresponding to the flux integrated over the exposure time of the detector and integrated over the area on the detector that roughly corresponds to each pixel. Therefore, the difference between these "energy" values and the corresponding "flux" values is a constant that can be absorbed into the system response. Is my assumption correct?
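To spell the assumption out (just a sketch, where $S(\lambda)$ is the combined atmospheric and instrumental response, $A$ the effective collecting area, $t_{\rm exp}$ the exposure time, and $\Delta\lambda$ the wavelength width of a pixel), the energy recorded in the pixel centred on $\lambda$ would be roughly

$$E_{\rm pix}(\lambda) \approx F_\lambda(\lambda)\, S(\lambda)\, A\, t_{\rm exp}\, \Delta\lambda,$$

so the factor $A\, t_{\rm exp}$ is wavelength-independent and can be lumped into the derived response, which is then only defined up to an overall constant.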

It's the per wavelength part that is especially confusing to me. Although I said that the data points in the observed spectrum correspond to the energies at each wavelength, it doesn't make sense to talk about the energy at a single, specific wavelength, does it? I say that because the spectrum being observed varies continuously with wavelength, and there is also a limit to how narrow a line can be, and how fine the spectral resolution of the instrument can be. Therefore, I imagine that the only thing you would be able to measure would be energy integrated over a certain small wavelength interval.

In this case, since the spectrograph uses a dispersive element (a grism), a certain wavelength interval corresponds to a certain physical distance on the detector. Therefore, I am assuming that each data point is the energy that has already been integrated over a wavelength band corresponding to the width of a pixel. Does this assumption seem reasonable? I don't have enough detailed info about the instrument to go any further.

If this assumption is true, then it would seem that in order to get the per angstrom part, it would suffice to divide each data point in the observed spectrum by the wavelength spacing (bin width) which is constant. Then I will be able to compare the two spectra and derive the instrumental response. What do you think?
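So the last step would look roughly like the sketch below (reusing the placeholder arrays from above and assuming the bin width really is constant):

```python
import numpy as np

# Wavelength bin width per pixel, in Å (assumed constant, as described above).
dlam = np.median(np.diff(wave_obs))

# Energy per pixel -> energy per Å; proportional to flux per Å, up to the
# constant A * t_exp that gets absorbed into the response.
energy_per_A = energy_obs / dlam

# Put the CALSPEC standard onto the observed wavelength grid.
flux_std_interp = np.interp(wave_obs, wave_std, flux_std)

# Combined atmospheric + instrumental response (arbitrary overall scale).
response = energy_per_A / flux_std_interp
```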
 
fleem
cepheid said:
Therefore, the difference between these "energy" values and the corresponding "flux" values is a constant that can be absorbed into the system response. Is my assumption correct?

Hmmm, I doubt it. Lots of fun nonlinear responses can happen in a light sensor and its associated circuitry, and all those curves would vary with its temperature. 'Course it mainly depends on what precision you want. But things to watch out for are power non-linearity, time non-linearity (slew rates, etc.), adjacent-pixel behavior, correlated pixel noise (this is the most likely problem. It's a real bear because it varies with incident light power. Sometimes it helps to take a dark capture or several blank captures with different intensities and subtract the appropriate one from the other captures. You might also mechanically move the spectrum over the sensor array so different pixels get the same frequency at different times, to help integrate out the pixel idiosyncrasies), variations in uncorrelated noise with temperature, ambient RF interference, and again, all of this varying with temperature.
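Roughly, the dark-capture idea amounts to something like this (just a sketch with placeholder frames; a real reduction would also handle bias, flat fielding, and cosmic rays):

```python
import numpy as np

# Placeholder dark frames, ideally taken with the same exposure time and
# detector temperature as the science frame.
darks = np.stack([np.load(f"dark_{i}.npy") for i in range(5)])

# Median-combining suppresses read noise and cosmic-ray hits in the dark itself.
master_dark = np.median(darks, axis=0)

# Remove the dark/bias pattern from the science frame before extraction.
science = np.load("science_frame.npy")   # placeholder file
science_minus_dark = science - master_dark
```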

cepheid said:
Therefore, I am assuming that each data point is the energy that has already been integrated over a wavelength band corresponding to the width of a pixel. Does this assumption seem reasonable?

Seems so. Hopefully that bandwidth is easily and reliably predictable per pixel.

cepheid said:
If this assumption is true, then it would seem that in order to get the per angstrom part, it would suffice to divide each data point in the observed spectrum by the wavelength spacing (bin width) which is constant. Then I will be able to compare the two spectra and derive the instrumental response. What do you think?

Makes sense.
 
Thanks for your response, fleem.

fleem said:
Hmmm, I doubt it. Lots of fun nonlinear responses can happen in a light sensor and its associated circuitry, and all those curves would vary with its temperature. 'Course it mainly depends on what precision you want. But things to watch out for are power non-linearity, time non-linearity (slew rates, etc.), adjacent-pixel behavior, correlated pixel noise (this is the most likely problem. It's a real bear because it varies with incident light power. Sometimes it helps to take a dark capture or several blank captures with different intensities and subtract the appropriate one from the other captures. You might also mechanically move the spectrum over the sensor array so different pixels get the same frequency at different times, to help integrate out the pixel idiosyncrasies), variations in uncorrelated noise with temperature, ambient RF interference, and again, all of this varying with temperature.

Variation of the pixel response with time and with incident power seems like a huge problem, especially if I want to use my derived responsivity curve (= observed spectrum / standard spectrum) to take out the system response from other spectra (spectra of the actual target of interest) that were taken on the same night. Given that I am stuck with the data I have and can't really go and do any of the things you suggested in parentheses, what would you recommend? How can I be sure that, once I have divided out my system response from my target spectra, what I have left are reliable "flux" values (EDIT: this last part being rather important because I want to compare line strengths)?

EDIT 2: It seems like what I might have to do is just make some effort to quantify the error involved at each stage of the data collection and analysis and use that to come up with reasonable uncertainties.
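For the final division, at least, the bookkeeping is standard propagation of independent errors through a ratio, something like this sketch (the file names and 1-sigma arrays here are placeholders; response is the curve derived from the standard star above):

```python
import numpy as np

# Placeholders: the target spectrum reduced the same way as the standard
# (energy per Å per pixel) with its 1-sigma errors, plus the 1-sigma error
# on the derived response curve.
energy_per_A_target, sigma_energy = np.loadtxt("target_energy_per_A.txt", unpack=True)
sigma_response = np.loadtxt("response_sigma.txt")

# Calibrated flux: divide out the derived system response.
flux_target = energy_per_A_target / response

# For f = a / b with independent errors:
# (sigma_f / f)^2 = (sigma_a / a)^2 + (sigma_b / b)^2
frac_err = np.sqrt((sigma_energy / energy_per_A_target) ** 2
                   + (sigma_response / response) ** 2)
sigma_flux_target = np.abs(flux_target) * frac_err
```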
 