I was wondering if someone could please help me understand a simple error-propagation problem: going from multiple measurements, each with its own error, to an average that incorporates those errors. I have looked at several error-propagation web pages (e.g. UC physics or UMaryland physics) but have yet to find exactly what I am looking for.

Let me illustrate my question with some example data. Suppose we want to know the mean ± standard deviation (mean ± SD) of the mass of 3 rocks. We weigh these rocks on a balance and get:

Rock 1: 50 g
Rock 2: 10 g
Rock 3: 5 g

So we would say that the mean ± SD of these rocks is 21.7 ± 24.7 g.

But now let's say we weigh each rock 3 times, so there is some error associated with the mass of each rock. Say the mean ± SD of each rock's mass is now:

Rock 1: 50 ± 2 g
Rock 2: 10 ± 1 g
Rock 3: 5 ± 1 g

How would we describe the mean ± SD of the three rocks now that there is some uncertainty in their masses? Would it still be 21.7 ± 24.7 g? Some error-propagation websites suggest taking the square root of the sum of the squared absolute errors, divided by N (N = 3 here). But in this case the mean ± SD would only be 21.7 ± 0.82 g, which is clearly too low: it captures only the measurement uncertainty, not the spread between the rocks themselves.

I think this should be a simple problem to analyze, but I have yet to find a clear description of the appropriate equations to use. If my question is not clear, please let me know. Any insight would be very much appreciated.
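For reference, here is a short sketch (in Python, assuming numpy) of how I computed the numbers above, including the propagated uncertainty from the formula those websites give:

```python
import numpy as np

masses = np.array([50.0, 10.0, 5.0])  # best-estimate mass of each rock, g
errors = np.array([2.0, 1.0, 1.0])    # per-rock measurement SDs, g

# plain mean and sample SD of the three rock masses (ignoring measurement error)
mean = masses.mean()        # 65/3 ≈ 21.7 g
sd = masses.std(ddof=1)     # ≈ 24.7 g

# uncertainty of the mean propagated from the per-rock errors:
# sqrt(sum of squared errors) / N, as suggested by the websites
prop = np.sqrt(np.sum(errors**2)) / len(masses)  # sqrt(6)/3 ≈ 0.82 g
```

The propagated value (≈0.82 g) is tiny compared with the spread of the rock masses (≈24.7 g), which is exactly the discrepancy my question is about.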