I've calculated the mean difference of my (normally distributed) data set. The mean difference is defined as the average of the absolute differences over all pairs of values:

Code (Text):
MD = (1 / M) * Σ |x_i − x_j|    (sum over all M pairs with i < j)

Now, I'm trying to calculate a "mean difference deviation" in order to generate a confidence interval for this quantity (i.e. "95% of the differences in the set are greater than ____"). My question is: can I generalize the standard deviation formula to calculate this? If we take the following concepts to be parallel:

Code (Text):
Mean         <---------> Mean Difference
Single value <---------> Single Difference

...can I use the standard deviation equation to calculate the "mean difference deviation"? Namely, turning this:

Code (Text):
σ = sqrt( (1/N) * Σ (x_i − μ)² )

into this:

Code (Text):
σ_MD = sqrt( (1/M) * Σ (d_k − MD)² )    (d_k ranging over all M pairwise differences)

I've looked up more direct ways to calculate this quantity, but all of them are in statistics articles that are way [1] over [2] my head [3]; so much so that I can't even determine whether they describe what I'm looking for, much less how to translate them into code (and I haven't even touched on efficiency concerns). Can anyone provide some insight? And if it turns out this can't be done, would anyone mind taking a crack at translating the derived equations in those articles into English?
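For concreteness, here is a short Python sketch of the substitution I'm describing: treat every pairwise difference as a "single value", then apply the usual standard deviation formula to those differences. The function names (`mean_difference`, `mean_difference_deviation`) are my own labels, not from any library, and this brute-force version costs O(n²) in both time and memory.

```python
from itertools import combinations
import math

def pairwise_diffs(xs):
    # Absolute differences over all unordered pairs (i < j).
    return [abs(a - b) for a, b in combinations(xs, 2)]

def mean_difference(xs):
    # Mean of the pairwise absolute differences.
    d = pairwise_diffs(xs)
    return sum(d) / len(d)

def mean_difference_deviation(xs):
    # The proposed analogue: the standard deviation formula applied
    # to the pairwise differences instead of the raw values.
    d = pairwise_diffs(xs)
    md = sum(d) / len(d)
    return math.sqrt(sum((dk - md) ** 2 for dk in d) / len(d))

# Example: for [1, 2, 3] the pairwise diffs are [1, 2, 1],
# so the mean difference is 4/3.
print(mean_difference([1, 2, 3]))
print(mean_difference_deviation([1, 2, 3]))
```

This computes the quantity mechanically; whether a "MD ± 2·σ_MD"-style interval is statistically valid is exactly the part of my question I'm unsure about, since the pairwise differences are not independent of one another.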