Minimum magnitude resolvable by a spectrograph

AI Thread Summary
The discussion centers on calculating the minimum magnitude of an astronomical object detectable by a specific telescope and spectroscope setup, using Bowen's formula. The user is confused about the formula's application, particularly the term dλ/dθ, which they believe should yield a larger value given the high resolution of the spectrograph and the beam size. They note that the resulting calculations lead to negative magnitudes, which are nonsensical in practical terms. The user seeks clarification on how to properly interpret and apply the formula, especially regarding the relationship between the beam diameter, spectral resolution, and the resulting magnitude. Overall, the conversation highlights the complexities involved in astronomical measurements and the need for accurate calculations in spectroscopy.
Pure Narcotic
I am stuck on a problem regarding the minimum magnitude of an astronomical object that can be handled by a given telescope + spectroscope combination. The telescope aperture (##D_T##) is 2 m, the exit beam (##D_1##) is 3 cm wide, and the slit width (##s##) is 10 micrometres. The wavelength of interest is 450 nm. The focal length of the collimator is ##f_1##, and that of the detector optics is ##f_2##. ##\alpha## is the Rayleigh criterion of the telescope, ##H## is the height of the spectrum, and ##t## is the exposure time. The spectral resolution of this spectroscope is 0.1 Angstrom.

My textbook says that "The limiting magnitude of a telescope–spectroscope combination is the magnitude of the faintest star for which a useful spectrum may be obtained. A guide to the limiting magnitude may be gained through the use of Bowen's formula." Bowen's formula reads:

$$ m = 12.5 + 2.5 \log_{10}\!\left(\frac{s \, D_1 \, D_T \, g \, q \, t \, \frac{d\lambda}{d\theta}}{f_1 \, f_2 \, \alpha \, H}\right) $$

This formula confuses me a lot. The textbook tells us to approximate ##\frac{d\lambda}{d\theta}## as the ratio between the diameter of the beam and the resolution of the spectrograph. Honestly, because of how high the resolution of this spectrograph is (which is not abnormal anyway), ##\frac{d\lambda}{d\theta}## comes out to be a very small value (and the inverse of that value appears again in ##f_2##, making the total value even smaller). Consequently, the whole ##\log_{10}## term ends up being negative, which is made worse by the factor of 2.5, so the final answer is a negative magnitude: which is obviously wrong. For a realistic setup (forget the question for now), how is this calculated? I haven't found a single mention of this formula anywhere online apart from the textbook. Any suggestions would be very helpful, thanks!
 
It seems ##\frac{d\lambda}{d\theta}## should be a large number if the beam is 3 cm and the resolution is sub-nanometre.
 
Sorry, to clarify: the "resolution" in the denominator of ##\frac{d\lambda}{d\theta}## is a dimensionless quantity, namely the ratio between the wavelength of interest and the spectral resolution. So essentially something like 4500 Angstrom divided by 0.1 Angstrom.
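To make the arithmetic concrete, here is a minimal Python sketch of plugging numbers into Bowen's formula exactly as quoted above. Only ##s##, ##D_1##, ##D_T##, the wavelength, and the 0.1 Angstrom spectral resolution come from the problem; the efficiencies ##g## and ##q##, the focal lengths ##f_1## and ##f_2##, the spectrum height ##H##, and the exposure time ##t## are placeholder assumptions chosen purely for illustration.

```python
import math

def bowen_limiting_magnitude(s, D1, DT, g, q, t, dlam_dtheta, f1, f2, alpha, H):
    """Bowen's formula as quoted in the thread:
        m = 12.5 + 2.5*log10( s*D1*DT*g*q*t*(dlambda/dtheta) / (f1*f2*alpha*H) )
    All inputs must be in mutually consistent units."""
    return 12.5 + 2.5 * math.log10(
        (s * D1 * DT * g * q * t * dlam_dtheta) / (f1 * f2 * alpha * H)
    )

# Quantities stated in the problem:
lam   = 450e-9             # wavelength of interest [m]
dlam  = 0.1e-10            # spectral resolution, 0.1 Angstrom [m]
R     = lam / dlam         # dimensionless resolving power: 4500 A / 0.1 A = 45000
D1    = 0.03               # exit-beam diameter [m]
DT    = 2.0                # telescope aperture [m]
s     = 10e-6              # slit width [m]
alpha = 1.22 * lam / DT    # Rayleigh criterion of the telescope [rad]

# dlambda/dtheta approximated as (beam diameter) / (resolving power),
# following the clarification in the post above:
dlam_dtheta = D1 / R

# Placeholder assumptions (NOT from the problem), purely illustrative:
g, q   = 1.0, 0.1          # grating and detector efficiencies
f1, f2 = 0.5, 0.5          # collimator and camera focal lengths [m]
H      = 1e-3              # spectrum height [m]
t      = 3600.0            # exposure time [s]

m = bowen_limiting_magnitude(s, D1, DT, g, q, t, dlam_dtheta, f1, f2, alpha, H)
print(f"limiting magnitude m = {m:.2f}")
```

Two sanity checks fall out of the formula's structure: with every input set to 1 it returns exactly 12.5, and since ##t## sits inside the logarithm, doubling the exposure time raises the limiting magnitude by ##2.5\log_{10} 2 \approx 0.75## regardless of the other parameters.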
 