When I did a business statistics course some time ago, I was able to calculate confidence intervals, but I didn't understand ‘why’ they were calculated the way they were. I assumed that the width of a confidence interval is based on the number of observations and on ‘the range of possible values’ that those observations may take. When calculating a confidence interval for the mean of a population, however, you use the standard deviation rather than the range of possible values.

The reason I considered the range of possible values rather than the variance or standard deviation is that these are statistics in their own right and could have confidence intervals of their own. Something suggests to me that a confidence interval for the variance cannot be based on the number of observations alone, and that if the confidence interval for the variance depends on some other factor, then the uncertainty in the variance or standard deviation should in turn affect the confidence interval for the mean.

So where have I gone wrong here?
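To make my question concrete, here is a small sketch of the two intervals I have in mind, written in Python with NumPy and SciPy (my choice of library, not anything from the course): the usual t-based interval for the mean, and the chi-square-based interval for the variance. Note that neither interval uses the range of the data, and the variance interval depends on the observed sample variance, not just on the number of observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=30)  # example sample

n = len(x)
xbar = x.mean()
s2 = x.var(ddof=1)       # sample variance (unbiased)
s = np.sqrt(s2)          # sample standard deviation
alpha = 0.05             # for 95% confidence

# 95% CI for the mean: uses the sample standard deviation s,
# not the range of possible values
t = stats.t.ppf(1 - alpha / 2, df=n - 1)
mean_ci = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

# 95% CI for the variance itself: chi-square based, so it scales
# with the observed s2 rather than depending on n alone
var_lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
var_hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
var_ci = (var_lo, var_hi)

print("mean CI:", mean_ci)
print("variance CI:", var_ci)
```

Both intervals come out of the normal-sampling model, which is part of what confuses me: the mean interval treats s as if it were known, even though s itself has the uncertainty shown by the second interval.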