# Why is an inverse logarithmic scale chosen for the magnitudes of stars?

Star magnitudes of brightness seem to use an inverse logarithmic scale. Is there a benefit to this? Why was it chosen? I can understand that a logarithmic scale makes the data easier to interpret, the same way we do for earthquakes, etc.

But why inverse? When I look at an HR diagram, for example ( https://en.m.wikipedia.org/wiki/Stellar_classification ), the magnitude decreases up the Y-axis, and it just seems unusual to me for that to be the standard convention.

phyzguy
Astronomers never change anything once it's defined. The magnitude scale traces back to Hipparchus in about 150 BC. He defined the brightest stars as first magnitude, the next brightest as second magnitude, and so on, with the faintest ones he could see being of sixth magnitude. Many centuries later, we learned how to quantitatively measure the brightness of stars and realized that our perception of brightness follows a logarithmic scale. It was also found that the difference between a first magnitude star and a sixth magnitude star (5 magnitudes) was about a factor of 100 in radiative flux. So the magnitude scale was defined so that a difference of 1 magnitude was a factor of 10^(0.4) ≈ 2.512 in flux. Then five magnitudes gives a factor of 10^(0.4*5) = 100. Yes, it's confusing that it is an inverse scale, but there is too much inertia to redefine it now. On the one hand, it might be nice to redefine it, but on the other hand, I enjoy the fact that astronomers treasure the long history of the subject.
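The definition above can be sketched in a few lines of Python. This is just an illustration of the flux-ratio relation described in the post (the function name `flux_ratio` is mine, not a standard library call); note that the *brighter* star is the one with the *smaller* magnitude:

```python
def flux_ratio(m_bright, m_faint):
    """Flux ratio between two stars given their magnitudes.

    Each magnitude step is a factor of 10**0.4 (about 2.512),
    so a 5-magnitude difference is a factor of 100 in flux.
    """
    return 10 ** (0.4 * (m_faint - m_bright))

# A 1st-magnitude star vs. a 6th-magnitude star: a factor of ~100 in flux.
print(flux_ratio(1.0, 6.0))
```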

Staff Emeritus
> Yes, it's confusing that it is an inverse scale, but there is too much inertia to redefine it now.

Further, if it's important, you could always use janskys. (Janskys? Janskies?) This seems to happen mostly when taking derivatives: d(something)/d(magnitude) is kind of a mess.
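To see why derivatives with respect to magnitude are messy, note that if F(m) = 10^(-0.4 m) (flux relative to a magnitude-0 reference), then dF/dm = -0.4 ln(10) · F ≈ -0.921 F, so every derivative picks up that awkward negative logarithmic factor. A quick numerical check of this, under the same assumed normalization (not any particular photometric system):

```python
import math

def flux(m):
    # Flux relative to a hypothetical magnitude-0 reference source.
    return 10 ** (-0.4 * m)

m, h = 3.0, 1e-6
# Central-difference estimate of dF/dm versus the analytic form.
numeric = (flux(m + h) - flux(m - h)) / (2 * h)
analytic = -0.4 * math.log(10) * flux(m)
print(numeric, analytic)
```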

sophiecentaur