So I took a series of measurements, where the reading error for each measurement is either ±0.01 mm, ±0.02 mm, ±0.03 mm, or ±0.04 mm. I can calculate the estimated mean of the measurements as x_est = (Σ x_i) / N, where x_i is the i-th measurement. Then I can propagate the errors in the x_i through the sum: Δx = √((Δx_1)² + ... + (Δx_N)²), so the error in the estimated mean would be Δ(x_est) = Δx / N.

If I did everything correctly, would Δ(x_est) be the standard error? And since the same device (a ruler) was used to measure them, each individual measurement x_i would have the same reading error Δx_i, so would the above equation then reduce to Δ(x_est) = Δx_i / √N?
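To check the arithmetic, here is a minimal sketch of the propagation described above; the measurement values and reading errors are made up for illustration, not taken from my actual data:

```python
import math

# Hypothetical measurements (mm) and their individual reading errors (mm).
x = [10.02, 10.05, 9.98, 10.01, 10.04]
dx = [0.01, 0.02, 0.03, 0.04, 0.02]

N = len(x)
x_est = sum(x) / N                          # estimated mean: (Σ x_i) / N

# Propagated error of the sum, Δx = √(Σ (Δx_i)²), then of the mean: Δx / N
dx_sum = math.sqrt(sum(d**2 for d in dx))
dx_mean = dx_sum / N

# If every reading error is the same value Δx_i, the propagation
# reduces to Δ(x_est) = Δx_i / √N
dxi = 0.02
dx_mean_equal = dxi / math.sqrt(N)

print(f"x_est = {x_est:.3f} mm, Δ(x_est) = {dx_mean:.4f} mm")
print(f"equal-error case: Δ(x_est) = {dx_mean_equal:.4f} mm")
```

Note this Δ(x_est) is the propagated reading error, which is a separate quantity from the standard error of the mean computed from the scatter of the x_i themselves.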