Pure Narcotic
I am stuck on a problem about the minimum (faintest) magnitude of an astronomical object that can be handled by a given telescope + spectroscope combination. The telescope aperture ##D_T## is 2 m, the collimated exit beam diameter ##D_1## is 3 cm, and the slit width ##s## is 10 micrometres. The wavelength of interest is 450 nm. The focal length of the collimator is ##f_1## and that of the detector optics is ##f_2##; ##\alpha## is the Rayleigh criterion (angular resolution) of the telescope, ##H## is the height of the spectrum, and ##t## is the exposure time. The spectral resolution of the spectroscope is 0.1 Angstrom.
My textbook says that "The limiting magnitude of a telescope–spectroscope combination is the magnitude of the faintest star for which a useful spectrum may be obtained. A guide to the limiting magnitude may be gained through the use of Bowen's formula." Bowen's formula reads:
$$ m = 12.5 + 2.5 \log_{10}\left(\frac{s\,D_1\,D_T\,g\,q\,t\,\frac{d\lambda}{d\theta}}{f_1\,f_2\,\alpha\,H}\right) $$
This formula confuses me a lot. The textbook tells us to approximate ##\frac{d\lambda}{d\theta}## as the ratio between the diameter of the beam and the resolution of the spectrograph. Because the spectrograph's resolution is so high (which is not abnormal), ##\frac{d\lambda}{d\theta}## comes out to be a very small value (and the inverse of that value effectively appears again through ##f_2##, making the total even smaller). As a result the argument of the ##\log_{10}## ends up less than 1, the logarithm is negative, the 2.5 factor makes it worse, and the whole answer is a negative magnitude, which is obviously wrong. For a realistic setup (forget this particular question for now), how is this calculated? I haven't found a single mention of this formula anywhere online apart from the textbook. Any suggestions would be very helpful, thanks!
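For concreteness, here is my attempt at plugging in numbers. The values of ##g##, ##q##, ##t##, ##f_1##, ##f_2## and ##H## below are placeholder assumptions (they are not given in the problem), and the line computing ##\frac{d\lambda}{d\theta}## is just one possible reading of the textbook's hint (the diffraction limit ##\lambda/D_1## of the collimated beam spans one resolution element ##\Delta\lambda##, so ##\frac{d\lambda}{d\theta} \approx \Delta\lambda\,D_1/\lambda##):

```python
import math

# Quantities from the problem, in SI units:
s    = 10e-6    # slit width: 10 micrometres
D1   = 0.03     # collimated beam diameter: 3 cm
DT   = 2.0      # telescope aperture: 2 m
lam  = 450e-9   # wavelength of interest: 450 nm
dlam = 0.1e-10  # spectral resolution: 0.1 Angstrom

# Placeholder assumptions (NOT from the problem):
g  = 0.5     # assumed overall optical efficiency
q  = 0.8     # assumed detector quantum efficiency
t  = 3600.0  # assumed exposure time: 1 hour
f1 = 0.3     # assumed collimator focal length: 30 cm
f2 = 0.3     # assumed camera focal length: 30 cm
H  = 1e-4    # assumed spectrum height: 0.1 mm

# Rayleigh criterion of the telescope, in radians:
alpha = 1.22 * lam / DT

# One reading of the textbook hint: the diffraction limit of the D1 beam
# (lam / D1 radians) spans one resolution element dlam, so:
dlam_dtheta = dlam * D1 / lam   # metres per radian

# Bowen's formula as quoted above:
m = 12.5 + 2.5 * math.log10(s * D1 * DT * g * q * t * dlam_dtheta
                            / (f1 * f2 * alpha * H))
print(round(m, 2))  # a positive, plausible limiting magnitude
```

With these assumed values the result comes out around magnitude 18, i.e. positive and plausible, so my suspicion is that the problem lies in how I interpret ##\frac{d\lambda}{d\theta}## or in the unit bookkeeping rather than in the formula itself.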