Average Deviation: Summing Positive & Negative Values

AI Thread Summary
When calculating average deviation, summing the individual deviations (both positive and negative) results in zero because of the definition of the mean. The mean serves as a balance point, so the positive and negative deviations from it cancel out. Using absolute values of the deviations is essential for measuring average deviation, since the algebraic signs would otherwise misrepresent the data's spread. Standard deviation, calculated as the square root of the average of the squared deviations, provides a more reliable measure of variability, especially for normally distributed data. Overall, while summing signed deviations gives a theoretical zero, it has no practical utility for assessing data spread.
dami
Usually when solving for the average deviation, we have to sum up the ABSOLUTE values of the individual deviations. What happens if we simply sum the individual deviations (negative and positive) for a large set of measurements?
 
dami said:
Usually when solving for the average deviation, we have to sum up the ABSOLUTE values of the individual deviations. What happens if we simply sum the individual deviations (negative and positive) for a large set of measurements?
That sum is zero because of how the statistical mean is defined. The usual way to get the standard deviation is to take the square root of the average of the squared deviations from the mean.
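Written out for $N$ measurements $x_i$ with mean $\mu$, that description corresponds to the population form of the standard deviation:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}$$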
 
dami said:
What happens if we simply sum the individual deviations (negative and positive) for a large set of measurements?
Assuming you mean deviations from the mean, you will get a sum of exactly zero (due to the definition of the mean).
 
I got -0.0005 when I calculated it with this set of deviations: .003, .005, -.005, -.009, -.011, .015, .000, -.011, -.009, -.001, .001, .016, -.005, -.020, -.002, .019, .001, .009, .006, .001, with a mean of .760.
 
It is said that the mean deviation does not take into account the algebraic signs of the deviations. What if we do take the algebraic signs into account? Will there be any difference? Which is more accurate, and why does the mean deviation use the absolute values of the deviations rather than the deviations with their algebraic signs?
 
dami said:
I got -0.0005 when I calculated it with this set of deviations: .003, .005, -.005, -.009, -.011, .015, .000, -.011, -.009, -.001, .001, .016, -.005, -.020, -.002, .019, .001, .009, .006, .001, with a mean of .760.
I suspect the figure of 0.760 was rounded to 3 decimal places, and if you used more decimal places you'd get a much smaller average deviation. Rounded to three decimal places, your answer of -0.0005 is zero. Theoretically the average (plus-or-minus) deviation should be exactly zero, but if you calculate to a limited number of decimal places, the answer you get might not be exactly zero.
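As a rough illustration of that rounding effect, here is a minimal Python sketch (using made-up measurements, not the data from this thread): it sums the signed deviations from the mean at full precision, and then again after rounding each deviation to three decimal places.

```python
# Minimal sketch with hypothetical measurements (not the data from this thread):
# signed deviations from the mean sum to ~0 at full precision, but rounding
# each deviation to 3 decimal places can leave a small nonzero residual.
measurements = [0.763, 0.755, 0.749, 0.771, 0.760, 0.758, 0.766, 0.752]

mean = sum(measurements) / len(measurements)
deviations = [x - mean for x in measurements]

full_precision_sum = sum(deviations)                  # essentially zero (floating-point noise)
rounded_sum = sum(round(d, 3) for d in deviations)    # may be a small nonzero value

print(f"mean = {mean:.6f}")
print(f"sum of signed deviations (full precision) = {full_precision_sum:.2e}")
print(f"sum of signed deviations (rounded to 3 dp) = {rounded_sum:.4f}")
```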
 
Is there any reason why it equals zero?
 
Because that is (one) definition of the mean.
Look upon the mean as the centre of gravity of a distribution, like the centre of gravity of a balancing object: if you take moments of all the elements of an object about its centre of gravity, the sum is zero.
Using the standard deviation (the root mean square of the deviations) gives you a comparative idea of the spread of the population. For distributions that look like a normal curve, the standard deviation is a very good way of comparing spreads. If you try to analyse a distribution that looks very non-normal, then the SD is not such a good measure.

If you just add up the magnitudes of the deviations without squaring them, you will certainly get an idea of the spread, but numerically it is not as good and doesn't give such realistic, comparative answers.
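To make that comparison concrete, here is a small Python sketch (again with made-up numbers) that computes both the mean absolute deviation and the standard deviation for the same sample:

```python
import math

# Hypothetical sample, just to compare the two spread measures side by side.
data = [0.74, 0.75, 0.75, 0.76, 0.76, 0.76, 0.77, 0.78, 0.80]

mean = sum(data) / len(data)

# Mean absolute deviation: average of |x - mean|
mad = sum(abs(x - mean) for x in data) / len(data)

# Standard deviation (population form): root mean square of the deviations
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

print(f"mean = {mean:.4f}")
print(f"mean absolute deviation = {mad:.4f}")
print(f"standard deviation      = {sd:.4f}")
```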
 
dami said:
Is there any reason why it equals zero?
Example with two samples:

$$\mu = \frac{x_1 + x_2}{2}$$

$$y_1 = x_1 - \mu$$

$$y_2 = x_2 - \mu$$

$$\frac{y_1 + y_2}{2} = \frac{x_1 + x_2}{2} - \frac{\mu + \mu}{2} = 0$$

Now, can you generalise that to any number of samples?
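For reference, one way the general case can be written out, for $n$ samples with mean $\mu = \frac{1}{n}\sum_{i=1}^{n} x_i$:

$$\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu) = \frac{1}{n}\sum_{i=1}^{n} x_i - \frac{1}{n}\sum_{i=1}^{n}\mu = \mu - \frac{n\mu}{n} = 0$$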
 