omoplata
I'm trying to get an expression for how many stars brighter than a given magnitude [tex]m[/tex] (i.e. with apparent magnitude below [tex]m[/tex]) I will see in the entire field of view of a telescope.
First, consider only one spectral type with absolute magnitude [tex]M[/tex]. The distance [tex]r[/tex] (in parsecs) to a star of apparent magnitude [tex]m[/tex] follows from the distance modulus:
[tex]r=10^{(m-M+5)/5}[/tex]
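A quick numerical check of this inversion of the distance modulus [tex]m-M=5\log_{10}(r/10)[/tex] (the example star is purely illustrative):

```python
# Invert the distance modulus m - M = 5*log10(r/10) for r in parsecs.
def distance_pc(m, M):
    return 10 ** ((m - M + 5) / 5)

# A star with M = 4.83 (roughly Sun-like) seen at m = 9.83 lies at 100 pc,
# and a star with m = M lies at 10 pc by definition of absolute magnitude.
print(distance_pc(9.83, 4.83))
print(distance_pc(5.0, 5.0))
```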
[tex]N=n \cdot V[/tex]
where
[tex]N[/tex] is the number of stars observed that are brighter than apparent magnitude [tex]m[/tex],
[tex]n[/tex] is the space density of stars (number of stars per cubic parsec), and
[tex]V[/tex] is the volume of the spherical cone out to distance [tex]r[/tex] that forms the telescope's field of view.
But,
[tex]V= \frac{4 \pi}{3} r^{3} \times \frac{\Omega}{4 \pi}= \frac{\Omega r^{3}}{3}[/tex]
Where [tex]\Omega[/tex] is the solid angle of the telescope's field of view.
Therefore,
[tex]N= \frac{\Omega}{3} \cdot n \cdot 10^{3(m-M+5)/5}[/tex]
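Plugging in some numbers, this count formula can be sketched as follows (the field solid angle, space density, and magnitudes below are assumptions for illustration, not measured values):

```python
def star_count(omega, n, m, M):
    """Expected number of stars of one spectral type brighter than
    apparent magnitude m, assuming a uniform space density n (pc^-3)
    over a field of solid angle omega (steradians)."""
    r = 10 ** ((m - M + 5) / 5)        # limiting distance in parsecs
    return (omega / 3) * n * r ** 3    # N = n * (Omega * r^3 / 3)

# Hypothetical example: a 1e-5 sr field, n = 0.004 pc^-3, M = 5, m = 16.
print(star_count(1e-5, 0.004, 16.0, 5.0))
```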
Now, to account for more than one spectral type, let [tex]N_{i}[/tex] be the number of stars of spectral type [tex]i[/tex] brighter than magnitude [tex]m[/tex], let [tex]n_{i}[/tex] be the space density of stars of spectral type [tex]i[/tex], and let [tex]M_{i}[/tex] be the absolute magnitude of spectral type [tex]i[/tex]. Then,
[tex]N_{i}= \frac{\Omega}{3} \cdot n_{i} \cdot 10^{3(m-M_{i}+5)/5}[/tex]
For all the spectral types,
[tex]\displaystyle \sum_{i} N_{i}= \displaystyle \sum_{i} \frac{\Omega}{3} \cdot n_{i} \cdot 10^{3(m-M_{i}+5)/5}[/tex]
[tex]\displaystyle \sum_{i} N_{i}= 10^{3m/5} \displaystyle \sum_{i} \frac{\Omega}{3} \cdot n_{i} \cdot 10^{3(-M_{i}+5)/5}[/tex]
Let [tex] c = \displaystyle \sum_{i} \frac{\Omega}{3} \cdot n_{i} \cdot 10^{3(-M_{i}+5)/5}[/tex], which is a constant w.r.t. [tex]m[/tex].
So,
[tex]\displaystyle \sum_{i} N_{i}= c \cdot 10^{3m/5}[/tex]
Did I make a mistake here?
The problem is that when I count the number of stars brighter than magnitude [tex]m[/tex] in some astronomical images I have of Orion and fit a curve to the counts, what I get is [tex]\displaystyle \sum_{i} N_{i} \propto 10^{0.3m}[/tex]. According to the analysis above, the exponent should be [tex]0.6m[/tex] instead of [tex]0.3m[/tex]. Why is this?
The instrument I'm using has a limiting apparent magnitude of about 16, so I was told to ignore the effects of extinction. If there is nothing wrong with my math, is there something wrong with my physics?
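For what it's worth, the 0.6 slope predicted by this derivation can be reproduced with a small Monte Carlo of stars spread uniformly through a sphere (all numbers here are made up for illustration, and a single absolute magnitude is used), so the factor-of-two discrepancy is not a flaw in the algebra itself:

```python
import math
import random

random.seed(42)
R = 1000.0   # outer radius of the sampled sphere, in parsecs
M = 5.0      # single illustrative absolute magnitude
# Draw radii with p(r) proportional to r^2, i.e. uniform density in the sphere.
radii = [R * random.random() ** (1 / 3) for _ in range(200_000)]
mags = [M + 5 * math.log10(r / 10) for r in radii]

# Cumulative counts at magnitude limits chosen so the limiting distance
# stays well inside the sphere (otherwise the counts saturate).
limits = [11.0, 11.5, 12.0, 12.5, 13.0]
counts = [sum(mag < m for mag in mags) for m in limits]

# Least-squares slope of log10(N) against m; expect about 0.6.
xs, ys = limits, [math.log10(c) for c in counts]
k = len(xs)
xbar, ybar = sum(xs) / k, sum(ys) / k
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(f"fitted slope: {slope:.2f}")
```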