I just looked up the standard deviation formula. It makes sense to me, because we're essentially trying to 'measure' how much the observations deviate from the mean value. To do this, we calculate all the differences [X(mean) - x(i)], square them to get rid of the negative signs, take the arithmetic mean of the squares, and finally take the square root to undo the squaring we applied to the original differences.

This makes sense, but I can think of some other methods if our goal is just to determine how much the observations deviate:

1. The arithmetic mean of all the values of |X(mean) - x(i)|
2. The geometric mean of all the values of |X(mean) - x(i)|

These two methods also make sense, and we don't have to take the square root at the end, because we're directly averaging the deviations themselves. In other words, I'm suggesting the absolute value (modulus) instead of squaring as the way to get rid of negative differences.

Actually, there's a third method I can think of:

3. Why don't we raise each difference [X(mean) - x(i)] to an arbitrary even power n (to get rid of negative signs), take the arithmetic mean of those values, and then take the nth root of that mean? The standard deviation formula would then look like:

(arithmetic mean of (X(mean) - x(i))^n)^(1/n), where n is even.

Why do we use only n = 2?

So, what's wrong with these three formulas for standard deviation?
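To make the comparison concrete, here is a minimal Python sketch of the three candidates alongside the usual n = 2 standard deviation. The function names and the sample data are my own, purely for illustration; everything divides by the number of observations (the population convention), matching the "arithmetic mean" in the formulas above.

```python
import math

def power_deviation(xs, n):
    """nth root of the arithmetic mean of (X(mean) - x(i))^n, n even.
    n = 2 gives the ordinary (population) standard deviation."""
    m = sum(xs) / len(xs)
    return (sum((m - x) ** n for x in xs) / len(xs)) ** (1 / n)

def mean_abs_deviation(xs):
    """Method 1: arithmetic mean of |X(mean) - x(i)|."""
    m = sum(xs) / len(xs)
    return sum(abs(m - x) for x in xs) / len(xs)

def geo_mean_abs_deviation(xs):
    """Method 2: geometric mean of |X(mean) - x(i)|.
    Note: this collapses to 0 if any observation equals the mean,
    since the product then contains a zero factor."""
    m = sum(xs) / len(xs)
    devs = [abs(m - x) for x in xs]
    if any(d == 0 for d in devs):
        return 0.0
    return math.exp(sum(math.log(d) for d in devs) / len(devs))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # mean = 5
print(power_deviation(data, 2))     # 2.0  (standard deviation)
print(mean_abs_deviation(data))     # 1.5  (method 1)
print(geo_mean_abs_deviation(data)) # 0.0  (method 2, zero deviation present)
print(power_deviation(data, 4))     # ~2.58 (method 3 with n = 4)
```

On this sample the four measures already behave differently: the geometric version returns 0 because one observation equals the mean, and higher even powers like n = 4 give larger values because they weight the big deviations more heavily.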