Understanding Standard Deviation: Squaring Variances


Discussion Overview

The discussion revolves around the concept of standard deviation and the rationale behind squaring deviations in its calculation. Participants explore the definitions and implications of variance and standard deviation, touching on theoretical and conceptual aspects.

Discussion Character

  • Conceptual clarification
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants assert that the squaring is a matter of definition, with variance defined as the expected value of the squared deviations from the mean.
  • Others propose that squaring deviations prevents cancellation of values, as the average of deviations from the mean is zero, thus necessitating a different approach to measure typical deviation.
  • One participant suggests that using absolute values for deviations is not ideal due to non-differentiability at zero, while squaring is both differentiable and mathematically manageable.
  • Another viewpoint emphasizes the geometric interpretation of variance as a length in n-dimensional space, relating it to the Pythagorean theorem for calculating lengths of vectors formed by deviations.
  • Some participants mention the relevance of the variance definition in statistical theory, particularly in relation to the normal distribution and properties of random variables.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and interpretation of the squaring of deviations, with no consensus reached on the best explanation or approach. Multiple competing views remain regarding the rationale and implications of these definitions.

Contextual Notes

Limitations include the dependence on definitions of variance and standard deviation, as well as the potential for differing interpretations of geometric representations in higher dimensions.

tumelo
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.
 
It is simply a matter of definition. Let X be a random variable, and let A = E(X) (the mean).
Then the variance V is DEFINED by V = E((X - A)^2), and the standard deviation is DEFINED as the square root of the variance.
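The definition above can be checked directly in code. A minimal sketch in Python, using a small made-up sample in place of a true random variable (the numbers are illustrative, not from the thread):

```python
import math

# Hypothetical sample standing in for draws of a random variable X.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

a = sum(xs) / len(xs)                                 # A = E(X), the mean
variance = sum((x - a) ** 2 for x in xs) / len(xs)    # V = E((X - A)^2)
std_dev = math.sqrt(variance)                         # std dev = sqrt(V)

print(a, variance, std_dev)  # -> 5.0 4.0 2.0
```

For this sample the deviations are (-3, -1, -1, -1, 0, 0, 2, 4), whose squares sum to 32, giving a variance of 4 and a standard deviation of 2.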
 
tumelo said:
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.

If you want to find the typical deviation from the average, you can't just calculate the average of x - <x>, because that average is zero. You need a measure where the deviations from the average don't cancel out. Taking the average of |x - <x>| works, but that's not nice because the absolute value function isn't differentiable at zero. (x - <x>)^2 also works, and is differentiable. However, if x has units, you can't compare <(x - <x>)^2> directly to x, because the units don't match. So you need to take the square root to get something with the same units as x that you can treat as a deviation from the mean.
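The cancellation argument is easy to see numerically. A short Python sketch (same kind of made-up sample as above) comparing the three candidate spread measures:

```python
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical data
mean = sum(xs) / len(xs)

# Raw deviations always average to exactly zero, so they measure nothing.
avg_dev = sum(x - mean for x in xs) / len(xs)

# Two ways to stop the cancellation:
mad = sum(abs(x - mean) for x in xs) / len(xs)               # mean absolute deviation
rms = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))  # root-mean-square deviation

print(avg_dev, mad, rms)  # -> 0.0 1.5 2.0
```

Both the absolute-value and square-then-root versions give a nonzero "typical deviation" in the same units as x; the squared version is the one that is differentiable everywhere, which is the advantage mentioned above.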
 
tumelo said:
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.

One other thing to keep in mind is that this definition of the variance is used directly in statistical theory, e.g. in the normal distribution: the standard deviation calculated from the variance (the mean of the squared differences) appears in the normal pdf.

Also you'll find that this definition is useful when dealing with other properties of random variables.

Another thing to keep in mind is that you can picture the standard deviation (up to a factor of sqrt(n)) as a length in an n-dimensional Euclidean space, where n is the number of elements in the sample.

For example, if we have a three-dimensional vector where X(1), X(2), and X(3) represent the differences between the elements and the average, then the "length" of this vector is found using the Pythagorean theorem: length = SQRT(X(1)^2 + X(2)^2 + X(3)^2). This makes sense when you interpret it geometrically as the length of a vector in an n-dimensional Euclidean space.
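The geometric picture can be sketched in a few lines of Python. The three observations below are made up for illustration; the point is that the Euclidean length of the deviation vector is exactly the (population) standard deviation scaled by sqrt(n):

```python
import math

xs = [2.0, 4.0, 9.0]             # hypothetical three observations
mean = sum(xs) / len(xs)         # mean = 5.0
devs = [x - mean for x in xs]    # deviation vector (X(1), X(2), X(3)) = (-3, -1, 4)

# Pythagorean length of the deviation vector in 3-dimensional space.
length = math.sqrt(sum(d ** 2 for d in devs))

# Population standard deviation of the same data.
std = math.sqrt(sum(d ** 2 for d in devs) / len(xs))

# The two agree up to the sqrt(n) scale factor.
print(length, std * math.sqrt(len(xs)))
```

Here the squared deviations sum to 9 + 1 + 16 = 26, so the length is sqrt(26) and the standard deviation is sqrt(26/3); multiplying the latter by sqrt(3) recovers the length.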
 
