# Confidence Interval

1. Jun 24, 2009

### Melawrghk

I don't know which forum to put this in... So here goes.

1. The problem statement, all variables and given/known data
Basically, I have gathered 30 data points that range from about 1.3 to 4.7 in value (it doesn't matter what it is) with a mean of 2.98. Since I haven't done any statistics courses yet, I used the built-in Excel function for determining the confidence interval (95%), and the value I got is about 0.33, while the standard deviation of the values was 0.92.

So my question is, what does that mean exactly? Does it have something to do with the probability that a collected data point will fall within the central 95% of the area under the bell curve?

I'd really appreciate any help on this one. Thanks!

2. Jun 24, 2009

### rochfor1

You would be well-served to google the Central Limit Theorem.

3. Jun 24, 2009

### statdad

Excel is one of the worst tools in existence for doing statistics, and you've just had the joy of finding one of the reasons why. :)

A confidence interval is a type of estimate, usually given as an interval of numbers: $$(a, b)$$. If you want to estimate a mean, the interpretation is that every number in the interval is a possible value for the mean. So (as an example) if we have an interval that is $$(100, 150)$$ then, based on our data, we can be reasonably confident the true mean is between 100 and 150.

Numbers like 95%, 90%, and so on, are the confidence level values. One way to think about the process is this:

* When you decide that you want a 95% confidence interval estimate for a mean, you are using
a statistical procedure that has a long-term 95% "success rate": if you were to repeat
the same experiment a large number of times, with the same conditions, same sample size,
and same population, and each time do the same confidence interval calculation, 95% of the
intervals you create would contain the true mean.
* This long-term success rate leads to our use of this language: "We can be 95% confident
the true mean is between $$a$$ and $$b$$."
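The long-run "success rate" in the bullets above can be checked with a quick simulation. This is just an illustrative sketch (the population mean, standard deviation, sample size, and trial count are all made-up numbers, not from the thread): draw many samples from a population whose true mean we know, build a 95% interval from each sample, and count how often the interval covers the truth.

```python
import math
import random

random.seed(1)
TRUE_MEAN, TRUE_SD = 100.0, 15.0   # assumed population parameters
N, TRIALS, Z = 50, 2000, 1.96      # sample size, repetitions, z for 95%

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    xbar = sum(sample) / N
    margin = Z * TRUE_SD / math.sqrt(N)   # sigma treated as known, so use z
    if xbar - margin <= TRUE_MEAN <= xbar + margin:
        hits += 1

print(hits / TRIALS)   # close to 0.95
```

About 95% of the simulated intervals should contain the true mean, which is exactly what the confidence level promises.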

So, a confidence interval is technically an interval. In the classical formulation, the formula for the interval is

$$\bar x \pm z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt n}$$

when the population standard deviation $$\sigma$$ is known, and

$$\bar x \pm t_{\frac{\alpha}{2}} \frac{s}{\sqrt n}$$

when only the sample standard deviation $$s$$ is known. In the first case

$$z_{\frac{\alpha}{2}} \frac{\sigma}{\sqrt n}$$

is referred to as the margin of error; in the second case

$$t_{\frac{\alpha}{2}} \frac{s}{\sqrt n}$$

is the margin of error. In both cases $$\alpha$$ equals 1 minus the confidence level: for a 95% confidence interval $$\alpha = 0.05$$, for example.
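Plugging the thread's numbers (n = 30, sample standard deviation s = 0.92) into these formulas is a one-liner. As a sketch, here is the margin of error computed both ways; the critical values 1.96 and 2.045 are standard table values for 95% confidence (the latter for 29 degrees of freedom):

```python
import math

# Numbers from the thread: n = 30 points, sample standard deviation 0.92.
n, s = 30, 0.92

# z-based margin, z_{alpha/2} = 1.96 for 95% confidence
# (this is what Excel's CONFIDENCE function computes):
margin_z = 1.96 * s / math.sqrt(n)
print(round(margin_z, 2))   # 0.33, matching the value from Excel

# t-based margin, the more appropriate choice when only s is known;
# t_{alpha/2} with n - 1 = 29 degrees of freedom is about 2.045:
margin_t = 2.045 * s / math.sqrt(n)
print(round(margin_t, 2))   # 0.34
```

Note that the z-based result reproduces the 0.33 reported in the original post, which is a clue about what Excel was actually computing.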

Do I have a point? Yes: Excel doesn't give the confidence interval; it gives the margin of error. How do you use it? If you have the sample mean (and your post says you do), in either case you can obtain the confidence interval with

$$\bar x \pm \text{ Margin of error}$$
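With the thread's numbers (mean 2.98, Excel margin of error 0.33), applying mean ± margin looks like this:

```python
# Mean and margin of error taken from the thread.
xbar, margin = 2.98, 0.33
interval = (round(xbar - margin, 2), round(xbar + margin, 2))
print(interval)   # (2.65, 3.31)
```

So the 95% confidence interval for the true mean is roughly (2.65, 3.31).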

I know I've been a little wordy - sorry. Hope it helped some. You might look at this link for a slightly different explanation - often having different approaches to one problem is helpful.

http://www.stat.psu.edu/~resources/ClassNotes/ljs_19/index.htm [Broken]

Last edited by a moderator: May 4, 2017
4. Jun 25, 2009

### Melawrghk

Wow, thanks a lot, statdad! :) That was really helpful.
I usually avoid Excel, but in this case since I had no idea what I was doing, I figured I'd try the easy way haha.

Thanks again!
