Understanding Standard Deviation: Squaring Deviations

tumelo
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.
 
It is simply a matter of definition. Let ##X## be a random variable, and let ##A = E(X)## (the average).
Then the variance ##V## is DEFINED by ##V = E((X-A)^2)##, and the standard deviation is DEFINED as the square root of the variance.
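
As a minimal sketch of these definitions in Python (the sample data here are made up purely for illustration):

```python
import math

# hypothetical sample, for illustration only
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

a = sum(x) / len(x)                           # A = E(X), the average
v = sum((xi - a) ** 2 for xi in x) / len(x)   # V = E((X - A)^2), the variance
s = math.sqrt(v)                              # standard deviation = sqrt(V)

print(a, v, s)  # 5.0 4.0 2.0
```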
 
tumelo said:
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.

If you want to find the typical deviation from the average, you can't just calculate the average of ##x - \langle x\rangle##, because that average is zero. You need a measure where the deviations from the average don't cancel out. Taking the average of ##|x - \langle x\rangle|## works, but that's not nice because the absolute value function isn't differentiable at zero. ##(x - \langle x\rangle)^2## also works, and it is differentiable. However, if ##x## has units, you can't compare ##\langle (x - \langle x\rangle)^2\rangle## directly to ##x##, because the units don't match. So you need to take the square root to get something with the same units as ##x## that you can treat as a deviation from the mean.
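
To see the three candidate measures side by side, here is a small Python sketch (the numbers are hypothetical):

```python
x = [1.0, 3.0, 5.0, 7.0]
mean = sum(x) / len(x)                                         # <x> = 4.0

avg_dev = sum(xi - mean for xi in x) / len(x)                  # always 0: deviations cancel
avg_abs = sum(abs(xi - mean) for xi in x) / len(x)             # mean absolute deviation
rms_dev = (sum((xi - mean) ** 2 for xi in x) / len(x)) ** 0.5  # standard deviation

print(avg_dev, avg_abs, rms_dev)  # 0.0 2.0 2.236...
```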
 
tumelo said:
Can somebody explain to me why we have to square the deviations when calculating the variance and then take the square root (which is supposed to reverse the squaring)? It doesn't make sense to me.

One other thing to keep in mind is that this definition of the variance is used directly in statistical theory, for example in the normal distribution: the standard deviation computed from the squared deviations appears as the parameter ##\sigma## in the normal pdf.
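
For reference, the normal pdf in its standard form, where ##\sigma## is exactly this standard deviation and ##\mu## is the mean:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$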

Also, you'll find this definition useful when dealing with other properties of random variables: for instance, ##\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y)## for independent ##X## and ##Y##, which has no clean analogue for absolute deviations.

Another thing to keep in mind is that you can picture the deviations as a vector in an n-dimensional Euclidean space, where n is the number of observations; the standard deviation is proportional to that vector's length.

For example, if we have a three-dimensional vector whose components ##X_1##, ##X_2##, and ##X_3## are the differences between the elements of the sample and its average, then the "length" of this vector is found using the Pythagorean theorem: ##\text{length} = \sqrt{X_1^2 + X_2^2 + X_3^2}##. This makes sense when you interpret it geometrically as the length of a vector in an n-dimensional Euclidean space.
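
A quick numerical check of this picture (a Python sketch with a hypothetical three-element sample): the Euclidean length of the deviation vector equals ##\sqrt{n}## times the population standard deviation.

```python
import math

x = [2.0, 4.0, 9.0]                           # hypothetical sample
n = len(x)
mean = sum(x) / n                             # 5.0

dev = [xi - mean for xi in x]                 # deviation vector (-3, -1, 4)
length = math.sqrt(sum(d * d for d in dev))   # Pythagorean length, sqrt(26)
std = math.sqrt(sum(d * d for d in dev) / n)  # population standard deviation

print(length, std * math.sqrt(n))             # both ~5.099
```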
 