
## Uncertainty for a star being between two values

From those links and reading a few pages available on Google Books from "Observing The Universe" edited by Andrew Norton, I conclude (from Norton) that the uncertainty in the measurement of a particular star's magnitude is treated as a normal distribution (even though the distribution of magnitudes in the population isn't). Norton doesn't give a mathematical treatment, but his rules of thumb (e.g. that plus or minus one uncertainty covers roughly 2/3 of the observed values, which matches the ~68% of a normal distribution within one standard deviation) suggest that his book is assuming that.

If you want to (literally) compute the probability that the actual magnitude of a particular star is between a given $K_{min}$ and $K_{max}$ then you must assume a scenario that acknowledges that the value of the actual magnitude is probabilistic. This would be a Bayesian statistical approach. You can't use the typical scenario from "frequentist" statistics where the actual magnitude is a "fixed but unknown value".

I'm a fan of Bayesian statistics; however, it would surprise me if the binning program really wanted a probability based on a Bayesian approach. Given the common use of "frequentist" statistics, it is more likely that it wants the probability under the standard normal density curve (mean 0, sigma = 1) associated with the "confidence interval" $[\frac{K_{min} - K_{obs}}{\sigma}, \frac{K_{max}-K_{obs}}{\sigma}]$.
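That frequentist probability is just the normal CDF evaluated at the two z-scores. A minimal sketch, using made-up example numbers for $K_{obs}$, $\sigma$, $K_{min}$, and $K_{max}$ (they are not from any real data set):

```python
import math

# Hypothetical example values, for illustration only:
K_obs = 14.2     # observed magnitude
sigma = 0.05     # standard deviation of the measurement error
K_min, K_max = 14.1, 14.3

def std_normal_cdf(x):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# z-scores of the interval endpoints relative to the observation
z_lo = (K_min - K_obs) / sigma
z_hi = (K_max - K_obs) / sigma

# Probability mass under the standard normal between the two z-scores
p = std_normal_cdf(z_hi) - std_normal_cdf(z_lo)
print(round(p, 4))  # ~0.9545, since the interval here spans +/- 2 sigma
```

With these numbers the interval happens to be $K_{obs} \pm 2\sigma$, so the result is the familiar ~95.45% two-sigma probability.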

(The terminology for a superficially similar interval in Bayesian statistics is a "credible interval". )

In browsing the web, I was surprised to find that astronomy uses very sophisticated statistical methods such as the "Malmquist correction", so I may be underestimating the binning program. Nevertheless, I wouldn't trust how the program documentation defined "p" unless it thoroughly explained how "p" is used.

To do the Bayesian calculation, you would define a problem like this:

Given that this particular star was picked at random from the population of stars, that the observed magnitude was $K_{obs}$, and that the measurement errors are normally distributed with standard deviation $\sigma$, what is the probability that the actual magnitude is in the interval $[ K_{min}, K_{max}]$? This calculation can be done numerically. We can attempt to do the details if this is really the number you want.
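To show the shape of that numerical calculation: the posterior density of the true magnitude is proportional to (population prior) × (normal likelihood of the observation), and the probability of the interval is the fraction of posterior mass inside it. A sketch with an entirely hypothetical prior (the real calculation would use the actual population distribution of magnitudes):

```python
import math

# All numbers and the prior below are hypothetical, for illustration only.
K_obs = 14.2     # observed magnitude
sigma = 0.05     # standard deviation of the measurement error
K_min, K_max = 14.1, 14.3

def prior(K):
    """Hypothetical unnormalised population density of true magnitudes:
    fainter stars (larger K) assumed more numerous. Replace with the
    real population model."""
    return math.exp(0.6 * (K - 10.0))

def likelihood(K):
    """Unnormalised normal density of observing K_obs given true magnitude K."""
    return math.exp(-0.5 * ((K_obs - K) / sigma) ** 2)

# Evaluate the unnormalised posterior on a grid spanning +/- 5 sigma
n = 2000
grid = [K_obs - 5 * sigma + i * (10 * sigma / n) for i in range(n + 1)]
post = [prior(K) * likelihood(K) for K in grid]

# Posterior probability of the interval = mass inside / total mass
total = sum(post)
in_interval = sum(w for K, w in zip(grid, post) if K_min <= K <= K_max)
p = in_interval / total
print(round(p, 4))
```

With this mild prior the answer is close to the frequentist ~95% for a $\pm 2\sigma$ interval, because the prior varies slowly compared with the likelihood; a sharply varying population density would pull the two answers apart.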