Please correct me where I am wrong, but it seems to me that you could get a very high sigma value (e.g. 6-sigma accuracy) from a very small sample. How, then, is sigma on its own reliable?

Let me check my understanding of sigma. To compute the standard deviation, I first compute the mean. Then, for each data point, I take its difference from the mean, square it, average those squares, and take the square root of that average. Is that result one sigma?

If it is, then with a mean of 20 and a standard deviation of 2, values between 18 and 22 fall within one sigma of the mean, and values below 8 or above 32 lie more than six sigma from the mean. Is that right?

What additional checks or minimum sample size must accompany a sigma value to make it reliable?
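To make sure I have the computation itself right, here is a minimal Python sketch of the steps I described (the data values are made up for illustration; dividing by n gives the population standard deviation, while dividing by n-1 gives the sample version, which matters for small samples):

```python
import math
import statistics

def population_sigma(data):
    """Standard deviation exactly as described: compute the mean,
    square each deviation from it, average the squares, take the root."""
    mean = sum(data) / len(data)
    avg_sq_dev = sum((x - mean) ** 2 for x in data) / len(data)
    return math.sqrt(avg_sq_dev)

data = [18, 19, 20, 21, 22]          # hypothetical sample
mu = sum(data) / len(data)           # 20.0
sigma = population_sigma(data)       # matches statistics.pstdev(data)

# "How many sigmas away" is the z-score of a value:
z = (24 - mu) / sigma
```

Is this the computation people mean when they quote "n-sigma" accuracy, or is the sample (n-1) form expected?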