Statistically different results

1. Aug 10, 2008

NoobixCube

Suppose I have a fitted parameter $$s$$ with an error of $$\pm \sigma_{s}$$, both of which are time dependent. I then gather more data later on and re-fit, since the parameter should have changed, and find a new value $$s'$$ with error $$\pm \sigma_{s'}$$. Scientifically, when are these values said to be distinctly different from each other? In other words, what is the most 'error overlap' the two values $$s$$ and $$s'$$ can have while still being considered different? Your thoughts would be most welcome. I have heard that the t-test is one way. Are there any others?

Last edited: Aug 10, 2008
2. Aug 11, 2008

vanesch

Staff Emeritus
What one usually specifies is a "confidence level". That means you do the following: you *suppose* that the two results were actually "the same", that is, drawn from the same distribution (that distribution comes from the error model on the measurement, or possibly from intrinsically random processes in the phenomenon you are trying to measure). You then calculate the probability that, over two trials (each consisting of a single measurement or of many), your estimated values come out AT LEAST as far apart as the difference you found. That probability is then the complement of the confidence level at which you can say that they are different: it is the probability that you could have obtained this difference even though the underlying parameter was in fact the same.
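The recipe above can be sketched numerically. This is a minimal illustration (my own, not from the thread), assuming the two fits are independent and that $$\sigma_{s}$$ and $$\sigma_{s'}$$ are Gaussian standard errors: under the null hypothesis that both fits estimate the same underlying value, the difference $$s - s'$$ is Gaussian with standard deviation $$\sqrt{\sigma_{s}^2 + \sigma_{s'}^2}$$, and the p-value is the probability of a difference at least as large as the one observed.

```python
import math

def difference_significance(s, sigma_s, s_prime, sigma_s_prime):
    """Two-sided z-test for whether two fitted values differ.

    Assumes independent fits with Gaussian standard errors.
    Returns (z, p): the z-score of the observed difference and the
    probability of a difference at least that large under the null
    hypothesis that both fits share the same true parameter value.
    """
    # Standard error of the difference, errors added in quadrature
    sigma_diff = math.sqrt(sigma_s**2 + sigma_s_prime**2)
    z = abs(s - s_prime) / sigma_diff
    # Two-sided tail probability of a standard normal:
    # P(|Z| >= z) = erfc(z / sqrt(2))
    p = math.erfc(z / math.sqrt(2))
    return z, p
```

For example, $$s = 1.0 \pm 0.1$$ versus $$s' = 1.5 \pm 0.1$$ gives $$z \approx 3.5$$, so the p-value is well below 0.01 and the values differ at better than 99% confidence. With a t-test you would instead use the Student t distribution, which matters when the errors are estimated from few data points.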

3. Aug 13, 2008

NoobixCube

Thanks for your post vanesch :)