- #1
TheMan112
I'm trying to calculate the diffraction limit/angular resolution for the Hubble Space Telescope. I know this can be found using the formula:
[tex]\theta = 1.22 \frac{\lambda}{D}[/tex]
Where [tex]\lambda[/tex] is the wavelength of the light being observed and [tex]D[/tex] is the diameter of the primary mirror (2.4 m for Hubble).
Now, since Hubble is able to observe a wide range of wavelengths, from the ultraviolet through the visible to the infrared, I would describe the diffraction limit as an interval depending on this range of wavelengths.
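To illustrate the interval I mean, here is a minimal Python sketch evaluating the Rayleigh criterion above across Hubble's range. The band-edge wavelengths (roughly 120 nm UV to 1700 nm near-IR) are my own approximate assumptions, not figures from any official source:

```python
import math

def diffraction_limit_arcsec(wavelength_m, diameter_m=2.4):
    """Rayleigh criterion theta = 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

# Approximate band edges (assumed values) spanning Hubble's coverage
bands = [("UV, 120 nm", 120e-9),
         ("visible, 500 nm", 500e-9),
         ("near-IR, 1700 nm", 1700e-9)]

for name, lam in bands:
    print(f"{name}: {diffraction_limit_arcsec(lam):.4f} arcsec")
```

At 500 nm this gives about 0.05 arcsec, but the limit varies by more than a factor of ten across the full wavelength range, which is why a single quoted number seems arbitrary to me.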
However, in all the examples I've been able to find, even http://www.nasa.gov/missions/highlights/webcasts/shuttle/sts109/hubble-qa.html, the diffraction limit of optical telescopes is calculated using a single wavelength of 500 nm (~cyan). What's the justification for using this particular wavelength for Hubble (and other optical telescopes)?
500 nm coincides with:
1. The irradiance peak of sunlight (and thus of any G-type star). See: http://en.wikipedia.org/wiki/File:EffectiveTemperature_300dpi_e.png
2. The sensitivity peak of the human eye.
3. The approximate middle wavelength of the visible spectrum.
However (respectively):
1. G-type stars aren't all that common, comprising only about 7.6% of all stars, so this reasoning is of limited use and cannot be applied to all visible starlight.
2. The HST employs CCD sensors and not direct human observation, thus point 2 is irrelevant.
3. This is the only point that seems relevant. However, the HST would theoretically achieve greater angular resolution by using, say, the 380-450 nm range.
So what's the point of using 500 nm when calculating the diffraction limit?