nraic
Hi,
I am doing an undergraduate introductory statistics course and I'm trying to understand some basic concepts.
I'm trying to understand why the sample size (n) affects the standard deviation of the sampling distribution of the mean ([itex]\sigma_M[/itex]).
I understand how sample size affects the sampling distribution of the mean: I've been shown that with larger sample sizes the standard deviation decreases. This can be seen graphically, with the normal distribution curve of the sample means becoming narrower as the sample size increases.
[itex]\sigma_M = \sigma / \sqrt{n}[/itex]
What I don't understand is why this is happening.
I have this intuitive feeling that if you take an infinite number of sample means, they should have a fixed mean and standard deviation, and that this shouldn't be different whether you take samples of n = 10 or n = 100. I've been shown that this is wrong, but I don't understand why.
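To convince myself that the effect is real, I put together a small simulation (just a sketch; the population here is an assumed normal distribution with mean 50 and standard deviation 10, and the sample sizes 10 and 100 are the ones from my question):

[code=python]
import numpy as np

# Assumed population: normal with mean 50 and standard deviation sigma = 10
rng = np.random.default_rng(0)
pop_mean, pop_sd = 50.0, 10.0
num_samples = 100_000  # how many repeated samples to draw at each sample size

for n in (10, 100):
    # draw num_samples samples of size n, then take each sample's mean
    samples = rng.normal(pop_mean, pop_sd, size=(num_samples, n))
    sample_means = samples.mean(axis=1)
    print(f"n = {n:3d}:  sd of sample means = {sample_means.std():.3f},  "
          f"sigma/sqrt(n) = {pop_sd / np.sqrt(n):.3f}")
[/code]

The standard deviation of the sample means comes out to about 3.16 for n = 10 and about 1.0 for n = 100, which matches [itex]\sigma / \sqrt{n}[/itex]. But this only shows me that it happens, not why.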