
Confidence Intervals

  1. Nov 14, 2012 #1
    Let X1, X2, ... , Xn be a random sample from [tex]N(\mu, \sigma^2)[/tex], where both parameters [tex]\mu[/tex] and [tex]\sigma^2[/tex] are unknown. A confidence interval for [tex]\sigma^2[/tex] can be found as follows. We know that [tex](n-1)S^2/\sigma^2[/tex] is a random variable with a [tex]\chi^2(n-1)[/tex] distribution. Thus we can find constants a and b so that [tex]P((n-1)S^2/\sigma^2 < b) = 0.975[/tex] and [tex]P(a < (n-1)S^2/\sigma^2 < b) = 0.95[/tex].

    a) Show that this second probability statement can be written as

    [tex]P((n-1)S^2/b < \sigma^2 < (n-1)S^2/a) = 0.95[/tex].

    I could do this by taking reciprocals of all three terms (which flips the inequalities) and then multiplying through by (n-1)S^2.

    b) If n=9 and s^2 = 7.93, find a 95% confidence interval for [tex]\sigma^2[/tex].

    Here, I just substitute n=9 and s^2=7.93 to the formula, right?
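For what it's worth, here is a quick numeric check of part (b). This is just my own sketch using scipy's chi-square quantile function (not something from the problem itself), and the printed values are rounded:

```python
# Sketch: 95% confidence interval for sigma^2 when n = 9 and s^2 = 7.93.
# Uses scipy's chi2.ppf to get the quantiles a and b of chi^2(n-1).
from scipy.stats import chi2

n = 9
s2 = 7.93
df = n - 1

a = chi2.ppf(0.025, df)  # lower 2.5% quantile of chi^2(8)
b = chi2.ppf(0.975, df)  # upper 97.5% quantile of chi^2(8)

lower = df * s2 / b  # (n-1)s^2 / b
upper = df * s2 / a  # (n-1)s^2 / a
print(round(lower, 2), round(upper, 2))  # roughly 3.62 and 29.1
```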

    c) If [tex]\mu[/tex] is known, how would you modify the preceding procedure for finding a confidence interval for [tex]\sigma^2[/tex].

    I am confused with this one...so can anybody give me a hint or something?

    Thanks in advance
     
  3. Nov 14, 2012 #2

    haruspex


    The n-1 in the equations comes from the fact that μ is unknown, and so is estimated from the same data used to estimate σ. If μ is known it becomes n instead. Whether that also changes it from chi-square I'm not certain, but I wouldn't think so.
     
  4. Nov 14, 2012 #3
    Thanks...

    Can you explain why the n-1 comes from the fact that mu is unknown...and why it would be different if we knew what mu was?
     
  5. Nov 14, 2012 #4

    haruspex


    If you take a sample from a population, its mean is not likely to be exactly the same as the mean of the population. If you estimate the variance by taking the differences between the sample values and the mean of the sample, then you are likely to underestimate the variance of the population. It turns out that it tends to underestimate [itex]\sigma^2[/itex] in the ratio (n-1)/n. That's why you divide by n-1 instead of n when calculating [itex]\hat{\sigma}^2[/itex] as an estimate of the variance of the population.
    Similarly, when you look at the distribution of the estimator [itex]\hat{\sigma}^2[/itex], it is chi-square with n-1 degrees of freedom. This is because one degree of freedom of the data has been lost by subtracting off the mean of the data. If you know the mean you don't lose that degree of freedom.
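The degrees-of-freedom point above can be seen in a simulation. This is my own illustration (not from the thread): since a chi-square variable with k degrees of freedom has mean k, the statistic with the known mean should average about n, and the one with the estimated mean about n-1:

```python
# Simulation sketch: with mu known, sum((X_i - mu)^2)/sigma^2 ~ chi^2(n);
# with mu estimated by the sample mean, (n-1)S^2/sigma^2 ~ chi^2(n-1).
# The mean of chi^2(k) is k, so the simulated means should land near n and n-1.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 9, 200_000

x = rng.normal(mu, sigma, size=(reps, n))

known_mu_stat = ((x - mu) ** 2).sum(axis=1) / sigma**2      # uses true mu
xbar = x.mean(axis=1, keepdims=True)
est_mu_stat = ((x - xbar) ** 2).sum(axis=1) / sigma**2      # uses sample mean

print(known_mu_stat.mean())  # close to n = 9
print(est_mu_stat.mean())    # close to n - 1 = 8
```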
     
  6. Nov 14, 2012 #5
    Thank you so much...but how would I know that "it tends to underestimate [itex]\sigma^2[/itex] in the ratio (n-1)/n"?
     
  7. Nov 14, 2012 #6

    haruspex


    This can be figured out from first principles. If you assume a population has variance [itex]\sigma^2[/itex], take a sample of size N, and calculate [itex]s = \frac{1}{N}\sum{X_i^2}-\left(\frac{1}{N}\sum{X_i}\right)^2[/itex], then you can show that [itex]E[s] = (1-1/N)\sigma^2[/itex].
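That expectation is easy to check numerically. A Monte Carlo sketch of my own (the chosen values of sigma and N are arbitrary):

```python
# Monte Carlo check that the naive variance estimator
#   s = (1/N) * sum(X_i^2) - ((1/N) * sum(X_i))^2
# has expectation (1 - 1/N) * sigma^2, i.e. it underestimates sigma^2.
import numpy as np

rng = np.random.default_rng(1)
sigma, N, reps = 3.0, 5, 500_000

x = rng.normal(0.0, sigma, size=(reps, N))
s = (x**2).mean(axis=1) - x.mean(axis=1) ** 2  # naive estimator per sample

print(s.mean())                # simulated E[s]
print((1 - 1 / N) * sigma**2)  # theoretical value: (1 - 1/5) * 9 = 7.2
```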
     