# Confusion with Uncertainties

1. Mar 16, 2010

Hey :-)
I want to establish the uncertainty in a model I've fitted to my data.
I can evaluate the variance of the predicted model, but I'm getting very confused about how the variance can then give me the uncertainty in my model :S
I know that the standard error of the mean may be taken as the standard deviation divided by the square root of the number of measurements, and that the square root of the variance gives the standard deviation, but I'm really struggling to see how to interpret an uncertainty from these. I'm very baffled at the moment, so any help would be massively appreciated :-)
Cheers
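
The relations mentioned above (standard deviation as the square root of the variance, and standard error of the mean as s/sqrt(n)) can be sketched numerically. A minimal example, assuming a made-up set of repeated measurements:

```python
import math

# Hypothetical repeated measurements of the same quantity
measurements = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0]
n = len(measurements)

mean = sum(measurements) / n
# Sample variance (n - 1 in the denominator, Bessel's correction)
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)      # standard deviation = sqrt(variance)
std_err = std_dev / math.sqrt(n)   # standard error of the mean = s / sqrt(n)

print(mean, std_dev, std_err)
```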

2. Mar 16, 2010

### SW VandeCarr

Are you familiar with confidence intervals?

3. Mar 16, 2010

Is that similar to determining a 'confidence', i.e. some percentage calculated from a given number of degrees of freedom? I think so. :-)

So, I might be wrong, but if the variance is a measure of the spread of the data (about my best-fit model??), can I determine some confidence limits from it? Does that come from a value of chi^2?

4. Mar 16, 2010

### SW VandeCarr

Confidence intervals (CIs) generally assume an underlying normal distribution and are calculated from the Gaussian model using the standard error of the sample mean (an estimate of the standard deviation of the sampling distribution of the mean). I won't tell you exactly how to calculate them; this is easily found in textbooks or on the web.

Note that the half-width of a CI relative to a point estimate is often (but not always) an intuitive way to gauge uncertainty. For example, suppose your point estimate is 2 and your 95% CI is 1.8 to 2.2. This means (informally) that you have a 10% uncertainty regarding the value of the point estimate with 95% confidence.

EDIT: More formally, this means that the estimated parameter is contained in the interval with p=0.95 based on a Gaussian model.
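
A minimal sketch of the calculation just described, assuming the usual Gaussian 95% interval (mean ± 1.96 × SE) and made-up data:

```python
import math
import statistics

data = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0]  # hypothetical measurements
n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)  # standard error of the mean

z = 1.96  # ~95% coverage under a Gaussian model
ci = (mean - z * se, mean + z * se)

# Informal "percent uncertainty": CI half-width relative to the point estimate
rel = (z * se) / mean

print(ci, rel)
```

With a point estimate of 2 and a CI of 1.8 to 2.2, the half-width is 0.2, so `rel` would be 0.2 / 2 = 10%, matching the example above.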

Last edited: Mar 17, 2010
5. Mar 17, 2010