Finding Confidence Intervals for Unknown Parameters in a Normal Distribution

Confidence intervals for unknown parameters in a normal distribution can be derived using the chi-square distribution. When estimating variance, the formula incorporates (n-1) due to the loss of one degree of freedom from estimating the mean. If the mean is known, the degrees of freedom change to n, which affects the calculation of the confidence interval for variance. The discussion also emphasizes that using (n-1) helps prevent underestimation of the population variance. Understanding these principles is crucial for accurate statistical analysis.
Artusartos
Let X_1, X_2, \ldots, X_n be a random sample from N(\mu, \sigma^2), where both parameters \mu and \sigma^2 are unknown. A confidence interval for \sigma^2 can be found as follows. We know that (n-1)S^2/\sigma^2 is a random variable with a \chi^2(n-1) distribution. Thus we can find constants a and b so that P((n-1)S^2/\sigma^2 < b) = 0.975 and P(a < (n-1)S^2/\sigma^2 < b) = 0.95.

a) Show that this second probability statement can be written as

P((n-1)S^2/b < \sigma^2 < (n-1)S^2/a) = 0.95.

I could do this by taking reciprocals of all three quantities (which reverses the inequalities) and then multiplying through by (n-1)S^2.

b) If n = 9 and s^2 = 7.93, find a 95% confidence interval for \sigma^2.

Here, I just substitute n = 9 and s^2 = 7.93 into the formula, taking a and b from the \chi^2(8) table, right?
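That substitution can be checked numerically. Here is a small sketch (assuming SciPy for the \chi^2 percentiles; the helper name variance_ci is just for illustration):

```python
from scipy.stats import chi2

def variance_ci(n, s2, level=0.95):
    """Two-sided CI for sigma^2, based on (n-1)S^2/sigma^2 ~ chi^2(n-1)."""
    alpha = 1 - level
    a = chi2.ppf(alpha / 2, df=n - 1)      # lower chi-square percentile
    b = chi2.ppf(1 - alpha / 2, df=n - 1)  # upper chi-square percentile
    return (n - 1) * s2 / b, (n - 1) * s2 / a

lo, hi = variance_ci(9, 7.93)
print(lo, hi)  # roughly (3.62, 29.10)
```

With n = 9 there are 8 degrees of freedom, so a and b are the 2.5th and 97.5th percentiles of \chi^2(8).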

c) If \mu is known, how would you modify the preceding procedure for finding a confidence interval for \sigma^2?

I am confused by this one... so can anybody give me a hint or something?

Thanks in advance
 
The n-1 in the equations comes from the fact that μ is unknown, and so is estimated from the same data used to estimate σ. If μ is known it becomes n instead. Whether that also changes it from chi-square I'm not certain, but I wouldn't think so.
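In that case the pivot becomes \sum (X_i - \mu)^2 / \sigma^2, which does stay chi-square, now with n degrees of freedom. A sketch of the modified procedure (assuming SciPy; the helper name is hypothetical):

```python
from scipy.stats import chi2

def variance_ci_known_mean(xs, mu, level=0.95):
    """CI for sigma^2 when mu is known: sum((X_i - mu)^2)/sigma^2 ~ chi^2(n)."""
    n = len(xs)
    t = sum((x - mu) ** 2 for x in xs)     # pivot numerator, deviations from the KNOWN mean
    alpha = 1 - level
    a = chi2.ppf(alpha / 2, df=n)          # note df = n, not n - 1
    b = chi2.ppf(1 - alpha / 2, df=n)
    return t / b, t / a
```

The only changes from the unknown-mean case are that deviations are taken from \mu rather than \bar{X}, and the percentiles come from \chi^2(n) rather than \chi^2(n-1).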
 
haruspex said:
The n-1 in the equations comes from the fact that μ is unknown...

Thanks...

Can you explain why the n-1 comes from the fact that mu is unknown... and why it would be different if we knew what mu was?
 
Artusartos said:
Can you explain why the n-1 comes from the fact that mu is unknown... and why it would be different if we knew what mu was?
If you take a sample from a population, its mean is not likely to be exactly the same as the mean of the population. If you estimate the variance by taking the differences between the sample values and the mean of the sample, then you are likely to be underestimating the variance of the population. It turns out that it tends to underestimate \sigma^2 in the ratio (n-1)/n. That's why you divide by n-1 instead of n when calculating \hat{\sigma}^2 as an estimate of the variance of the population.
Similarly, when you look at the distribution of the estimator \hat{\sigma}^2, it is chi-square with n-1 degrees of freedom. This is because one degree of freedom of the data has been lost by subtracting off the mean of the data. If you know the mean you don't lose that degree of freedom.
 
haruspex said:
If you take a sample from a population, its mean is not likely to be exactly the same as the mean of the population...

Thank you so much...but how would I know that "it tends to underestimate \sigma^2 in the ratio (n-1)/n"?
 
Artusartos said:
how would I know that "it tends to underestimate \sigma^2 in the ratio (n-1)/n"?
This can be figured out from first principles. If you assume a population has variance \sigma^2, and propose taking a sample of size N and calculating s = \frac{1}{N}\sum X_i^2 - \left(\frac{1}{N}\sum X_i\right)^2, then you can calculate that E[s] = (1 - 1/N)\sigma^2.
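For completeness, the expectation can be computed directly, using E[X_i^2] = \sigma^2 + \mu^2 and E[\bar{X}^2] = \text{Var}(\bar{X}) + \mu^2 = \sigma^2/N + \mu^2:

E[s] = E\left[\frac{1}{N}\sum X_i^2\right] - E[\bar{X}^2] = (\sigma^2 + \mu^2) - \left(\frac{\sigma^2}{N} + \mu^2\right) = \left(1 - \frac{1}{N}\right)\sigma^2.

The \mu^2 terms cancel, leaving exactly the (N-1)/N shortfall, which is why multiplying by N/(N-1), i.e. dividing by N-1 instead of N, gives an unbiased estimator.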
 
