Minimum magnitude resolvable by a spectrograph

  • Thread starter: Pure Narcotic
  • Tags: Magnitude, Minimum
AI Thread Summary
The discussion centers on calculating the minimum magnitude of an astronomical object detectable by a specific telescope and spectroscope setup, using Bowen's formula. The user is confused about how to apply the formula, particularly the term dλ/dθ: with the textbook's suggested approximation (beam diameter divided by the spectrograph's resolution) it comes out very small, and the resulting limiting magnitude is negative, which is nonsensical in practical terms. A reply suggests the term should instead be large for a 3 cm beam and sub-nanometer resolution, and the user clarifies that the "resolution" in the denominator is the dimensionless resolving power λ/Δλ. The thread centers on how to interpret the beam diameter, spectral resolution, and dλ/dθ so that the formula gives a sensible limiting magnitude.
Pure Narcotic
I am stuck on a problem regarding the minimum (faintest) magnitude of an astronomical object that can be handled by a given telescope + spectroscope combination. The telescope aperture (##D_T##) is 2 m, the exit (collimated) beam diameter (##D_1##) is 3 cm, and the slit width (##s##) is 10 micrometers. The wavelength of interest is 450 nm. The focal length of the collimator is ##f_1## and that of the camera/detector optics is ##f_2##. ##\alpha## is the Rayleigh criterion of the telescope and ##H## is the height of the spectrum. Let ##t## be the exposure time. The spectral resolution of this spectroscope is 0.1 Angstrom.

My textbook says: "The limiting magnitude of a telescope–spectroscope combination is the magnitude of the faintest star for which a useful spectrum may be obtained. A guide to the limiting magnitude may be gained through the use of Bowen's formula." Bowen's formula is:

$$ m = 12.5 + 2.5\,\log_{10}\!\left(\frac{s\, D_1\, D_T\, g\, q\, t\, \dfrac{d\lambda}{d\theta}}{f_1\, f_2\, \alpha\, H}\right) $$

This formula confuses me a lot. The textbook tells us to approximate ##\frac{d\lambda}{d\theta}## as the ratio between the diameter of the beam and the resolution of the spectrograph. Because this spectrograph has such a high resolution (which is not abnormal anyway), ##\frac{d\lambda}{d\theta}## comes out to be a very small value (and the inverse of that value appears again in ##f_2##, making the total even smaller). The ##\log_{10}## term therefore ends up negative, which the factor of 2.5 only makes worse, so the whole answer is a negative magnitude: which is obviously wrong. For a realistic setup (forget the question for now), how is this calculated? I haven't found a single mention of this formula anywhere online apart from the textbook. Any suggestions would be very helpful, thanks!
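To show the orders of magnitude I am working with, here is a minimal Python sketch of how I am plugging numbers into the formula. The values of ##g##, ##q##, ##t##, ##f_1##, ##f_2## and ##H## are not given numerically in the problem, so the ones below are placeholder assumptions only; the point is the structure of the calculation rather than the printed number.

```python
import math

# Given quantities (SI units)
D_T  = 2.0        # telescope aperture [m]
D1   = 0.03       # collimated (exit) beam diameter [m]
s    = 10e-6      # slit width [m]
lam  = 450e-9     # wavelength of interest [m]
dlam = 0.1e-10    # spectral resolution of 0.1 Angstrom [m]

# Rayleigh criterion of the telescope (alpha in the formula)
alpha = 1.22 * lam / D_T          # [rad]

# Resolving power of the spectrograph
R = lam / dlam                    # dimensionless, = 45,000

# The textbook approximation as I read it: d(lambda)/d(theta) ~ D1 / R
dlam_dtheta = D1 / R              # ~6.7e-7, the very small value I keep getting

# --- Placeholder assumptions, NOT given in the problem ---
g  = 0.5       # assumed grating efficiency
q  = 0.5       # assumed detector quantum efficiency
t  = 3600.0    # assumed exposure time [s]
f1 = 1.0       # assumed collimator focal length [m]
f2 = 1.0       # assumed camera focal length [m]
H  = 1e-3      # assumed height of the spectrum [m]

# Bowen's formula exactly as quoted above
arg = (s * D1 * D_T * g * q * t * dlam_dtheta) / (f1 * f2 * alpha * H)
m = 12.5 + 2.5 * math.log10(arg)

print(f"alpha       = {alpha:.3e} rad")
print(f"R           = {R:.0f}")
print(f"dlam/dtheta = {dlam_dtheta:.3e}")
print(f"limiting m  = {m:.2f}  (depends entirely on the assumed values)")
```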
 
Seems dλ/dθ should be a large number if the beam is 3 cm and the resolution is sub-nanometer.
 
Sorry, the "resolution" in the denominator of dλ/dθ is actually a dimensionless quantity: the resolving power, i.e. the ratio between the wavelength of interest and the spectral resolution. So essentially something like 4500 angstrom divided by 0.1 angstrom, which is 45,000.
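Just to make the arithmetic explicit (this is only my reading of the textbook, not a claim that either interpretation is correct), the two numbers being discussed are:

```python
# Two readings of "beam diameter divided by the resolution",
# using the numbers from the problem.
D1   = 0.03        # beam diameter [m]
lam  = 4500e-10    # 4500 Angstrom in metres
dlam = 0.1e-10     # 0.1 Angstrom in metres

R = lam / dlam     # resolving power, dimensionless = 45,000

print(D1 / R)      # ~6.7e-07 -> tiny, the value I keep getting
print(D1 / dlam)   # ~3.0e+09 -> huge, if "resolution" meant the 0.1 Angstrom itself
```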
 