On the standardization of the normal distribution

AI Thread Summary
The discussion focuses on the standardization of normal distributions, specifically the transformation from a random variable X ~ N(μ, σ²) to Z ~ N(0, 1) via the formula Z = (X − μ)/σ. Participants debate the notation used in textbooks, questioning why Z is written as Z ~ N(0,1) rather than Z ~ (1/σ)N(0,1). It is clarified that standardization means subtracting the mean and dividing by the standard deviation, which shifts the distribution to the standard normal form, and that simply substituting values into the density function is not a valid derivation. Justifying the result properly requires a change-of-variables argument from calculus-based statistics.
jwqwerty
Let X be a random variable with X ~ N(μ, σ²).
Then the density of X is
f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²))

If we want to standardize X, we let z = (x−μ)/σ.
Then the normal distribution of z becomes
z(x) = (1/(σ√(2π))) e^(−x²/2)

and we usually write Z~N(0,1)

But as you can see, the σ in z(x) does not disappear. Thus, in my opinion, Z ~ N(0,1) should actually be written as Z ~ (1/σ)N(0,1). So here is my question:
why does every textbook use the notation Z ~ N(0,1) rather than Z ~ (1/σ)N(0,1)?
 
Subtracting μ from ##x## centers f(x) at 0 instead of at the mean, since every value is shifted down by μ. Dividing by σ then rescales the spread: a point that was σ from the center is now 1 unit away, 2σ becomes 2, and so on, so the resulting curve has standard deviation (and variance) 1. Hence, if ##Z=\frac{X-μ}{σ}##, then Z ~ N(0,1).
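As a numerical illustration of the point above (a minimal sketch with arbitrary values μ = 5, σ = 2, not taken from the thread): standardizing a large sample from N(μ, σ²) empirically recenters it near 0 with spread near 1.

```python
# Sketch: standardization Z = (X - mu) / sigma applied to simulated data.
# mu = 5, sigma = 2 are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0

x = rng.normal(mu, sigma, size=100_000)  # draws from N(mu, sigma^2)
z = (x - mu) / sigma                     # subtract the mean, divide by the sd

print(z.mean(), z.std())  # sample mean near 0, sample sd near 1
```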
 
jwqwerty said:
Let X be a random variable with X ~ N(μ, σ²).
Then the density of X is
f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²))

If we want to standardize X, we let z = (x−μ)/σ.
Then the normal distribution of z becomes
z(x) = (1/(σ√(2π))) e^(−x²/2)

and we usually write Z~N(0,1)

But as you can see, the σ in z(x) does not disappear. Thus, in my opinion, Z ~ N(0,1) should actually be written as Z ~ (1/σ)N(0,1). So here is my question:
why does every textbook use the notation Z ~ N(0,1) rather than Z ~ (1/σ)N(0,1)?

You can't do the standardization the way you did (simply by substituting the expression for z in the density). What you are doing is attempting to find the density of a
new random variable Z given an existing density and a transformation. Have you studied that technique?
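For reference, here is a sketch of that technique applied to this case (a CDF-then-differentiate argument, using the thread's notation). Since ##Z=\frac{X-\mu}{\sigma}## and ##\sigma > 0##,
$$F_Z(z) = P(Z \le z) = P(X \le \sigma z + \mu) = F_X(\sigma z + \mu),$$
and differentiating with the chain rule gives
$$f_Z(z) = \sigma\, f_X(\sigma z + \mu) = \sigma \cdot \frac{1}{\sigma\sqrt{2\pi}}\, e^{-z^2/2} = \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}.$$
The factor of ##\sigma## contributed by the chain rule is exactly what cancels the ##1/\sigma## left over in the naive substitution.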
 
statdad said:
You can't do the standardization the way you did (simply by substituting the expression for z in the density). What you are doing is attempting to find the density of a
new random variable Z given an existing density and a transformation. Have you studied that technique?

Then what does standardization mean? How can we standardize?
Sorry, I have just started studying statistics and I need your help, statdad!
 
Standardization means what you think it means: in this setting you subtract the mean and divide that difference by the standard deviation. The fact that this operation takes you from an arbitrary normal distribution to the standard normal distribution is a mathematical result that has to be demonstrated; simply making the substitution in the density function isn't enough.

Is your course calculus-based?
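A numerical check of what that demonstration must show (a sketch with arbitrary illustration values μ = 3, σ = 1.5): the change-of-variables result says the density of Z = (X − μ)/σ is σ·f_X(σz + μ), and this should coincide with the standard normal density at every point.

```python
# Sketch: verify pointwise that sigma * f_X(sigma*z + mu) equals the
# standard normal density phi(z). mu = 3, sigma = 1.5 are arbitrary.
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 3.0, 1.5
for z in (-2.0, -0.5, 0.0, 1.0, 2.5):
    transformed = sigma * normal_pdf(sigma * z + mu, mu, sigma)  # density of Z by change of variables
    standard = normal_pdf(z)                                     # phi(z), the N(0,1) density
    assert math.isclose(transformed, standard)
```

Note that without the Jacobian factor σ in front, the two curves would disagree, which is exactly the discrepancy the original question ran into.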
 