On standardization of the normal distribution

SUMMARY

The discussion centers on the standardization of the normal distribution, specifically the transformation of a random variable X ~ N(μ,σ²) into a standardized variable Z ~ N(0,1). Participants clarify that standardization means subtracting the mean (μ) and dividing by the standard deviation (σ), giving Z = (X-μ)/σ. A key point is the misconception that the notation Z ~ (1/σ)N(0,1) is needed because a naive substitution into the density appears to leave a factor of 1/σ; it is established that the correct statement is Z ~ N(0,1), which follows when the density of Z is properly derived from the transformation rather than by simple substitution. The conversation emphasizes the importance of understanding the mathematical foundations behind these transformations.

PREREQUISITES
  • Understanding of random variables and their distributions
  • Familiarity with the normal distribution and its properties
  • Basic knowledge of statistical transformations
  • Calculus concepts related to density functions
NEXT STEPS
  • Study the derivation of the standard normal distribution from the normal distribution
  • Learn about the Central Limit Theorem and its implications for standardization
  • Explore the concept of probability density functions and their transformations
  • Investigate advanced statistical techniques for variable transformation
USEFUL FOR

Students and professionals in statistics, data analysis, and mathematics who seek to deepen their understanding of normal distribution standardization and its applications in statistical analysis.

jwqwerty
Let X be a random variable with X ~ N(μ, σ²).
Then the normal density of X is
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

If we want to standardize X, we let z = (x - μ)/σ.
Then the density of z becomes
$$g(z) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}$$

and we usually write Z ~ N(0,1).

But as you can see, the σ in g(z) does not disappear. So, in my opinion, Z ~ N(0,1) should actually be written as Z ~ (1/σ)N(0,1). Here is my question:
Why does every textbook use the notation Z ~ N(0,1) and not Z ~ (1/σ)N(0,1)?
 
Subtracting μ from ##x## centers f(x) at 0 instead of at the mean, because every value is shifted down by μ. Dividing by σ then rescales the spread: a value that was originally σ away from the mean is now 1 away, 2σ becomes 2, and so on, so the curve ends up with standard deviation (and variance) equal to 1. Hence, if ##Z=\frac{X-\mu}{\sigma}##, then Z ~ N(0,1).
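A quick numerical check of this (a minimal sketch added for illustration, not from the thread; the parameters, sample size, and seed are arbitrary choices) in Python with NumPy:

Code:
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0                        # arbitrary example parameters
x = rng.normal(mu, sigma, size=1_000_000)   # draws from N(mu, sigma^2)

z = (x - mu) / sigma                        # standardize: subtract mean, divide by sd

print(z.mean())   # very close to 0
print(z.std())    # very close to 1

Both printed values come out very close to 0 and 1, as expected for Z ~ N(0,1).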
 
jwqwerty said:
Let X be a random variable with X ~ N(μ, σ²).
Then the normal density of X is
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

If we want to standardize X, we let z = (x - μ)/σ.
Then the density of z becomes
$$g(z) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}$$

and we usually write Z ~ N(0,1).

But as you can see, the σ in g(z) does not disappear. So, in my opinion, Z ~ N(0,1) should actually be written as Z ~ (1/σ)N(0,1). Here is my question:
Why does every textbook use the notation Z ~ N(0,1) and not Z ~ (1/σ)N(0,1)?

You can't do the standardization the way you did (simply by substituting the expression for z in the density). What you are doing is attempting to find the density of a
new random variable Z given an existing density and a transformation. Have you studied that technique?
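
For reference, a sketch of that technique applied to this case (added here, not part of the original reply): if ##Z=\frac{X-\mu}{\sigma}##, then ##X=\sigma Z+\mu##, and the density of Z picks up the Jacobian factor ##\left|\frac{dx}{dz}\right|=\sigma##:
$$f_Z(z) = f_X(\sigma z+\mu)\,\left|\frac{dx}{dz}\right| = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-z^2/2}\cdot\sigma = \frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}$$
That factor of σ is exactly what the plain substitution misses; it cancels the 1/σ, leaving the standard normal density, so Z ~ N(0,1) with no leftover σ.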
 
statdad said:
You can't do the standardization the way you did (simply by substituting the expression for z in the density). What you are doing is attempting to find the density of a
new random variable Z given an existing density and a transformation. Have you studied that technique?

Then what does standardization mean? How can we standardize?
Sorry, I have just started studying statistics and I need your help, statdad!
 
Standardization means what you think it means: in this setting you subtract the mean and divide the difference by the standard deviation. The fact that this operation takes you from an arbitrary normal distribution to the standard normal distribution is a mathematical result that needs to be demonstrated: simply making the substitution in the density function isn't enough.

Is your course calculus based?
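
A calculus-free piece of that demonstration (added for reference, not part of the original reply) is the moment check, using only linearity of expectation and the scaling rule for variance:
$$E[Z] = \frac{E[X]-\mu}{\sigma} = 0, \qquad \operatorname{Var}(Z) = \frac{\operatorname{Var}(X)}{\sigma^2} = \frac{\sigma^2}{\sigma^2} = 1$$
This pins down the mean and variance at 0 and 1; that Z is itself normally distributed is what the change-of-variables computation earlier in the thread establishes.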
 
