Why does normal distribution turn into t distribution when variance is unknown?

In summary: I should have said "if the population variance were known, you would replace S^2 with the actual value of the population's variance, and you would have a normal distribution" instead of "and the only random variable in T would be the mean, with a normal distribution". Sorry for my mistake.
  • #1
Happiness
Suppose ##X## ~ N(##\mu##,##\sigma^2##). Then ##\bar{X}## ~ N(##\mu##,##\frac{\sigma^2}{n}##), where ##\bar{X}## is the random variable for sample mean for samples of size ##n##.

But when the population variance ##\sigma^2## is unknown and the sample size ##n## is small, ##\bar{X}## no longer follows a normal distribution but instead follows a t distribution, such that ##T=\frac{\bar{X}-\mu}{S/\sqrt{n}}## ~ t##_{n-1}##, where ##S^2=\frac{n}{n-1}\times##(sample variance) is the unbiased estimator of the population variance and ##n-1## is the number of degrees of freedom of the t distribution.

My question is: why does the distribution of ##\bar{X}## change just because we do not know the population variance? Shouldn't the population variance still be some fixed value ##\sigma^2## (it's just that it's unknown to us at the moment), so that ##\bar{X}## still follows a normal distribution: ##\bar{X}## ~ N(##\mu##,##\frac{\sigma^2}{n}##)? It seems that objective reality (the specific distribution of ##\bar{X}##) changes according to subjective knowledge (whether we know ##\sigma^2## or not), and this I find puzzling.
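Here is a quick NumPy sketch of the quantities above, with made-up numbers, just to pin down the notation (S^2 as the n/(n-1)-scaled sample variance, and the T statistic):

Python:
import numpy as np

# Hypothetical small sample (n = 5); the numbers are made up for illustration.
x = np.array([2.3, 1.9, 2.8, 2.1, 2.5])
n = len(x)
mu = 2.0  # hypothesised population mean

x_bar = x.mean()
biased_var = ((x - x_bar) ** 2).mean()      # divides by n
S2 = n / (n - 1) * biased_var               # unbiased estimator, divides by n - 1
assert np.isclose(S2, x.var(ddof=1))        # same as NumPy's ddof=1 variance

T = (x_bar - mu) / np.sqrt(S2 / n)          # T = (X-bar - mu) / (S / sqrt(n)), t with n - 1 d.o.f.
print(x_bar, S2, T)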
 
  • #2
We should first observe that you are making some statements that are only true when ##X## has a normal distribution.

Happiness said:
But when the population variance ##\sigma^2## is unknown and the sample size ##n## is small, ##\bar{X}## no longer follows a normal distribution but instead

That is not correct. ##T## and ##\bar{X}## are different random variables. (A "statistic" is defined as a random variable that is a function of the values in a sample. ##T## and ##\bar{X}## are statistics.)

It isn't the distribution of ##\bar{X}## that changes when the sample size is small. Instead it is the choice of which statistic people prefer to use when doing statistical tests.

When the sample size is large, people approximate ##\sigma^2## (the population variance) by the sample variance of the particular sample they have, in the belief that the sample variance computed from a large sample is probably close to the population variance. They assume ##\bar{X}## has distribution ##N(\mu,s^2/n)##.

When the sample size is small, there is less reason to believe that the sample variance of a particular sample is close to the population variance, so people use ##T##-tests to make decisions. The distribution of ##T## is not the same as a normal distribution. However, this does not change the fact that ##\bar{X}## still has distribution ##N(\mu,\sigma^2/n)## (provided ##X## has a normal distribution).
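A small simulation sketch of that last point (the parameter values below are arbitrary): the sampling distribution of ##\bar{X}## keeps mean ##\mu## and variance ##\sigma^2/n## whether or not we "know" ##\sigma##.

Python:
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 3.0, 5, 200_000

# Draw many samples of size n and record the sample mean of each one.
samples = rng.normal(mu, sigma, size=(reps, n))
x_bar = samples.mean(axis=1)

print(x_bar.mean())       # close to mu
print(x_bar.var())        # close to sigma**2 / n, whether or not sigma is "known" to us
print(sigma**2 / n)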
 
  • #3
The sample mean has a normal distribution, as was said above.

As for the t-statistic ##T##, you first need to see that the sample standard deviation is a random variable (it contains the sum of squares of the observations, so it could never be a "fixed value" as you suggested), and that, suitably scaled, it has a chi distribution (equivalently, ##(n-1)S^2/\sigma^2## has a chi-squared distribution with ##n-1## degrees of freedom). Check the 2nd post here for a proof: https://stats.stackexchange.com/que...bution-of-variance-a-chi-squared-distribution
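Here is a quick simulation sketch of that fact (my own arbitrary numbers, not from the linked proof): the scaled sample variance (n-1)S^2/sigma^2 behaves like a chi-squared variable with n-1 degrees of freedom.

Python:
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, n, reps = 2.0, 6, 100_000

samples = rng.normal(0.0, sigma, size=(reps, n))
S2 = samples.var(axis=1, ddof=1)            # unbiased sample variance S^2
scaled = (n - 1) * S2 / sigma**2            # should follow chi-squared with n - 1 d.o.f.

# Compare a few empirical quantiles with the chi2(n - 1) quantiles.
for q in (0.1, 0.5, 0.9):
    print(q, np.quantile(scaled, q), stats.chi2(n - 1).ppf(q))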

Afterward, you can see that the t-statistic ##T## is the ratio of a normal random variable to a chi random variable (scaled by its degrees of freedom). You can check why this means it has a Student's t distribution here: https://stats.stackexchange.com/que...sqrt-chi2s-s-gives-you-a-t-distribution-proof, where his W equals your s^2 and his s equals your n.
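And a sketch of the ratio construction, with arbitrary values: building T as Z / sqrt(V / (n-1)) from a standard normal Z and an independent chi-squared V reproduces the heavy tails of the t distribution.

Python:
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 6, 100_000
df = n - 1

Z = rng.standard_normal(reps)               # Z ~ N(0, 1)
V = rng.chisquare(df, reps)                 # V ~ chi-squared with n - 1 d.o.f.
T = Z / np.sqrt(V / df)                     # Z / sqrt(V / (n - 1)) ~ t_{n-1}

# The tails match the t distribution, not the normal one.
print((np.abs(T) > 2).mean())               # empirical P(|T| > 2)
print(2 * stats.t(df).sf(2))                # theoretical t value, close to the line above
print(2 * stats.norm.sf(2))                 # normal value, noticeably smaller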

If the population's variance was known, you would use that instead of S^2, and the only random variable in T would be the mean, with a normal distribution. In that case, T would have normal distribution as well.
 
  • #4
ZeGato said:
and the only random variable in T would be the mean, with a normal distribution. In that case, T would have normal distribution as well.

The sample standard deviation ##s## is still a random variable, even if the population variance is known. The ##T##-statistic still has a ##T##-distribution instead of a normal distribution even if ##\sigma## is known. If you used a constant in place of ##s## in the formula for ##T##, you wouldn't be computing the ##T##-statistic. So while it is true that replacing ##s## by ##\sigma## in the formula for the ##T##-statistic changes the formula into a formula for a normally distributed random variable, technically that random variable is no longer ##T##.
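A quick simulation sketch of the distinction (arbitrary parameter values): computed from the same small samples, the statistic with ##\sigma## in the denominator is exactly standard normal, while the one with ##s## in the denominator has heavier tails.

Python:
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, n, reps = 5.0, 2.0, 5, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
x_bar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)             # sample standard deviation (a random variable)

Z = (x_bar - mu) / (sigma / np.sqrt(n))     # sigma known: exactly N(0, 1)
T = (x_bar - mu) / (s / np.sqrt(n))         # sigma replaced by s: t with n - 1 d.o.f.

print((np.abs(Z) > 2).mean(), 2 * stats.norm.sf(2))
print((np.abs(T) > 2).mean(), 2 * stats.t(n - 1).sf(2))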
 
  • #5
Stephen Tashi said:
The sample standard deviation ##s## is still a random variable, even if the population variance is known. The ##T##-statistic still has a ##T##-distribution instead of a normal distribution even if ##\sigma## is known. If you used a constant in place of ##s## in the formula for ##T##, you wouldn't be computing the ##T##-statistic. So while it is true that replacing ##s## by ##\sigma## in the formula for the ##T##-statistic changes the formula into a formula for a normally distributed random variable, technically that random variable is no longer ##T##.
I'm aware, and T would just be the variable's name and not representative of the t-statistic.
 

1. What is the difference between normal distribution and t distribution?

The normal distribution is a symmetric, bell-shaped probability distribution described by its mean and standard deviation. The t distribution is also symmetric and bell-shaped, but it has heavier tails and a lower, flatter peak, making it more spread out than the normal distribution. It is used when the population standard deviation is unknown, especially when the sample size is small.
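For a concrete illustration of the heavier tails, here is a small SciPy sketch comparing two-sided tail probabilities at 2.5 for a few (arbitrarily chosen) degrees of freedom:

Python:
from scipy import stats

# Two-sided tail probability P(|X| > 2.5) under the normal and under t for a few d.o.f.
print("normal", 2 * stats.norm.sf(2.5))
for df in (2, 5, 10, 30):
    print("t, df =", df, 2 * stats.t(df).sf(2.5))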

2. Why does normal distribution turn into t distribution when variance is unknown?

This is because the t distribution takes into account the uncertainty caused by using the sample standard deviation to estimate the population standard deviation. The sample standard deviation is itself a random variable; when the sample size is small it fluctuates a lot from sample to sample (and on average underestimates the population standard deviation), and this extra variability produces the heavier tails of the t distribution.

3. How does the sample size affect the conversion from normal distribution to t distribution?

The larger the sample size, the closer the t distribution is to the normal distribution. As the sample size increases, the sample standard deviation becomes a better estimate of the population standard deviation, reducing the extra uncertainty; as the degrees of freedom n - 1 grow, the t distribution converges to the standard normal distribution.
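A small sketch of this convergence: the two-sided 95% critical value of the t distribution approaches the normal value of about 1.96 as the degrees of freedom grow.

Python:
from scipy import stats

print("normal", stats.norm.ppf(0.975))               # about 1.96
for df in (2, 5, 10, 30, 100):
    print("t, df =", df, stats.t(df).ppf(0.975))     # shrinks toward 1.96 as df grows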

4. Can the t distribution be used for any sample size?

The t distribution can be used for any sample size, but it matters most when the sample size is small (often taken as less than 30). For larger sample sizes the t distribution is practically indistinguishable from the normal distribution, so the normal distribution is commonly used as an approximation.

5. How is the t distribution used in hypothesis testing?

In hypothesis testing, the t distribution is used to judge whether a sample mean differs significantly from a hypothesised population mean when the population standard deviation is unknown. This is done by computing the t statistic from the sample and comparing it to the critical values of the t distribution (or, equivalently, computing a p-value) at a chosen significance level.
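A minimal worked example (the data and hypothesised mean below are made up): the hand computation of the t statistic and two-sided p-value matches scipy.stats.ttest_1samp.

Python:
import numpy as np
from scipy import stats

x = np.array([4.8, 5.3, 5.1, 4.6, 5.4, 5.0, 4.9])   # hypothetical sample
mu_0 = 4.5                                           # hypothesised population mean
n = len(x)

t_stat = (x.mean() - mu_0) / (x.std(ddof=1) / np.sqrt(n))
p_value = 2 * stats.t(n - 1).sf(abs(t_stat))         # two-sided p-value

print(t_stat, p_value)
print(stats.ttest_1samp(x, mu_0))                    # same t statistic and p-value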
