
Error propagation with averages and standard deviation

  1. May 25, 2012 #1
    I was wondering if someone could please help me understand a simple problem of error propagation going from multiple measurements with errors to an average incorporating these errors. I have looked on several error propagation webpages (e.g. UC physics or UMaryland physics) but have yet to find exactly what I am looking for.

    I would like to illustrate my question with some example data. Suppose we want to know the mean ± standard deviation (mean ± SD) of the mass of 3 rocks. We weigh these rocks on a balance and get:

    Rock 1: 50 g
    Rock 2: 10 g
    Rock 3: 5 g

    So we would say that the mean ± SD of these rocks is: 21.6 ± 24.6 g.
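    (As a quick check of the arithmetic, a short Python sketch using the sample standard deviation, i.e. the n-1 divisor, reproduces these numbers:)

```python
import statistics

# Masses of the three rocks, in grams
masses = [50, 10, 5]

mean = statistics.mean(masses)   # 65 / 3, about 21.67
sd = statistics.stdev(masses)    # sample SD with n-1 divisor, about 24.66

print(f"{mean:.1f} ± {sd:.1f} g")
```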

    But now let's say we weigh each rock 3 times each and now there is some error associated with the mass of each rock. Let's say that the mean ± SD of each rock mass is now:

    Rock 1: 50 ± 2 g
    Rock 2: 10 ± 1 g
    Rock 3: 5 ± 1 g

    How would we describe the mean ± SD of the three rocks now that there is some uncertainty in their masses? Would it still be 21.6 ± 24.6 g? Some error propagation websites suggest that it would be the square root of the sum of the absolute errors squared, divided by N (N=3 here). But in this case the mean ± SD would only be 21.6 ± 2.45 g, which is clearly too low.

    I think this should be a simple problem to analyze, but I have yet to find a clear description of the appropriate equations to use. If my question is not clear please let me know. Any insight would be very appreciated.
  3. May 25, 2012 #2
    Hi rano,

    You are comparing different things. In the first case you calculate the standard error for the rock-mass distribution; this error gives you an idea of how far from the average the weight of the next rock you sample will be.

    In the second case you calculate the standard error due to the measurements; this time you get an idea of how far the measured weight is from the real weight of the rock.
  4. May 25, 2012 #3



    Yes and no.
    If Rano had wanted to know the variance within the sample (the three rocks selected) I would agree. But I note that the value quoted, 24.66, is as though what's wanted is the variance of weights of rocks in general. (The variance within the sample is only 20.1.)
    That being so, I think the question is valid. The variance of the population is amplified by the uncertainty in the measurements.
    What further confuses the issue is that Rano has presented three different standard deviations for the measurements of the three rocks. In assessing the variation of rocks in general, that's unusable. We have to make some assumption about errors of measurement in general. We can assume the same variance in measurement, regardless of rock size, or some relationship between rock size and error range.
  5. May 25, 2012 #4
    I'm not sure where you get a variance of 20.1, but the standard error for the sample is definitely 24.66.
  6. May 25, 2012 #5



    Sorry, a bit loose in terminology. The st dev of the sample is 20.1. The variance (average of the squares minus the square of the average) is 405.56.
    But for the st dev of the population the sample of n represents, we multiply by sqrt(n/(n-1)) to get 24.66. Since Rano quotes the larger number, it seems that it's the s.d. of the population that's wanted.
  7. May 25, 2012 #6
    Ah, OK, I see what's going on... it's a naming thing. The standard deviation definition/estimation is unfortunately a bit messy, since I see it change from book to book, but anyway, I should have said standard deviation myself instead of standard error, since the data do not represent sample means.

    But whether standard error or standard deviation, the only thing we can do is estimate the values, and when it comes to estimators everyone has their favorites and their reasons for choosing them.

    So 20.1 would be the maximum likelihood estimate, 24.66 would be the unbiased estimate, 17.4 would be the minimum quadratic error estimate... and you could actually go on.
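    For the record, these three numbers are just different divisors applied to the same sum of squared deviations; 17.4 matches the n+1 divisor, which minimizes the mean squared error of the variance estimate for normal data. A quick Python check on the thread's data reproduces all three:

```python
import math

masses = [50, 10, 5]
n = len(masses)
mean = sum(masses) / n
ss = sum((m - mean) ** 2 for m in masses)  # sum of squared deviations, ~1216.67

sd_mle      = math.sqrt(ss / n)        # divisor n     -> ~20.1
sd_unbiased = math.sqrt(ss / (n - 1))  # divisor n - 1 -> ~24.66
sd_min_mse  = math.sqrt(ss / (n + 1))  # divisor n + 1 -> ~17.4
```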

    So which estimate is the right one? All of them.

    You're right, rano is mixing up different things (he should explain how he measures the errors, etc.), but my point was to make him see that the numbers are different because they measure different things.
  8. May 26, 2012 #7



    Hey rano and welcome to the forums.

    In general this problem can be thought of as going from values that have no variance to values that have variance.

    What this means mathematically is that you introduce a variance term for each data element, which is now a random variable given by X(i) = x(i) + E, where E is a random variable. In this example x(i) is your mean of the measurements (the thing before the ±).

    A good choice for E would be a Normal random variable with mean 0 and a standard deviation of, say, 1/2, which means that 95% of all values would fall within 2 standard deviations (i.e. 1 unit either side of the mean). If instead you had ±2, you would adjust the variance accordingly.

    Then to get the mean and variance you simply take the mean and variance of the sum of all the X(i)'s; this will give you a mean and variance for the sample mean, where you define the sample mean of your new data to be Sum(X(i))/n, with i = 1 to n and n the number of measurements you have.
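    A minimal simulation sketch of this idea, assuming (for illustration only) a common Normal(0, 2 g) measurement error E added to the thread's three means; the seed and trial count are arbitrary:

```python
import random
import statistics

random.seed(0)

# Measured means x(i) from the thread, plus an assumed common
# Normal(0, 2 g) measurement error E for each weighing.
x = [50, 10, 5]
meas_sd = 2.0
n_trials = 100_000

sample_means = []
for _ in range(n_trials):
    xs = [xi + random.gauss(0, meas_sd) for xi in x]  # X(i) = x(i) + E
    sample_means.append(sum(xs) / len(xs))

mean_est = statistics.mean(sample_means)  # ~21.67, the mean of the three rocks
sd_est = statistics.stdev(sample_means)   # ~meas_sd / sqrt(3), about 1.15
```

The spread of the simulated sample mean comes only from the measurement error, which is why it is much smaller than the 24.66 g rock-to-rock spread.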
  9. May 27, 2012 #8
    Hi viraltux and haruspex,

    Thank you for considering my question. I apologize for any confusion; I am in fact interested in the standard deviation of the population as haruspex deduced. I think a different way to phrase my question might be, "how does the standard deviation of a population change when the samples of that population have uncertainty"?

    From your responses I gathered two things. First, this analysis requires that we assume equal measurement error on all 3 rocks. I'm not clear, though, whether this is an absolute or relative error; i.e., is it OK to set the SD of each rock to 2 g despite the fact that their means differ (and thus their relative errors differ)? The second thing I gathered is that I'm not sure this is even a valid question, since it appears I am comparing two different measures. But to me it seems reasonable that the SD in the sample measurement should be propagated to the population SD somehow. Thank you again for your consideration.

    Hi chiro,

    Thank you for your response. I think it makes sense to represent each sample as a function with error (e.g. 1 SD) as a random variable. What I am struggling with is the last part of your response where you calculate the population mean and variance. Let's say our rocks all have the same standard deviation on their measurement:

    Rock 1: 50 ± 2 g
    Rock 2: 10 ± 2 g
    Rock 3: 5 ± 2 g

    My interpretation of your instruction would be to add these 3 together and divide by N (3 in this case):

    Sum: (65 ± 6 g) / 3 = 21.6 ± 2 g.

    But to me this doesn't make sense because the standard deviation of the population should be at least 24.6 g as calculated earlier. If you could clarify for me how you would calculate the population mean ± SD in this case I would appreciate it. Thank you again for your consideration.
  10. May 27, 2012 #9
    But of course! OK, let's call X the random variable with the real weights, and ε the random error in the measurement. Then Y = X + ε will be the actual measurements you have; in this case Y = {50, 10, 5}.

    You want to know how the SD of ε affects the SD of Y, right? Then we go:

    Y=X+ε → V(Y) = V(X+ε) → V(Y) = V(X) + V(ε) → V(X) = V(Y) - V(ε)

    And therefore we can say that the SD for the real weights considering the measurement errors is

    [tex]σ_X = \sqrt{σ_Y^2 - σ_ε^2}[/tex]

    What you were doing before was comparing the estimates of σY and σε.
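    Plugging in the thread's numbers (σY ≈ 24.66 g, and an assumed common measurement SD of 2 g) shows the correction is small in this example:

```python
import math

sd_Y = 24.66    # estimated SD of the measured weights Y
sd_eps = 2.0    # assumed common measurement-error SD

# V(X) = V(Y) - V(eps), so the SD of the real weights is
sd_X = math.sqrt(sd_Y**2 - sd_eps**2)
print(sd_X)   # a little below 24.66
```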
  11. May 27, 2012 #10
    Hi viraltux,

    Thank you very much for your explanation. That was exactly what I was looking for. I really appreciate your help.
  12. May 27, 2012 #11
    How did you get 21.6 ± 24.6 g, and 21.6 ± 2.45 g, respectively?!
  13. May 27, 2012 #12
    You're welcome :smile:
  14. May 27, 2012 #13



    Both can be valid, but you would need more data to justify the choice.
    Taking the error variance to be a function of the actual weight makes it "heteroscedastic". It would also mean the answer to the question would be a function of the observed weight - i.e. you would not get just one number for the s.d. I think you should avoid this complication if you can.
  15. May 27, 2012 #14



    viraltux, there must be something wrong with that argument. The uncertainty in the weighings cannot reduce the s.d. I would believe [tex]σ_X = \sqrt{σ_Y^2 + σ_ε^2}[/tex]
  16. May 28, 2012 #15
    There is nothing wrong.

    σX is the uncertainty of the real weights; the measured weights' uncertainty will always be higher due to the error. Probably what you mean is this: [tex]σ_Y = \sqrt{σ_X^2 + σ_ε^2}[/tex], which is also true.
  17. May 28, 2012 #16



    OK viraltux, I see what you've done.
    For clarity, let me express the problem like this:
    - We have N sets of measurements of each of M objects, which are samples from a population.
    - We want to know the s.d., Su, of the sampled population.
    An obvious approach is to obtain the average measurement of each object, then compute an s.d. for the population in the usual way from those M values. Call this result Sm (s.d. of means). Clearly this will underestimate that s.d. because it ignores the uncertainty in the M values.
    There is another thing to be clarified. I'm sure you're familiar with the fact that there are two formulae for s.d. These correspond to SDEV and SDEVP in spreadsheets. SDEVP gives the s.d. of the dataset, whereas SDEV estimates the s.d. of the population of which the dataset is a (small) sample. (Strictly speaking, it gives the sq root of the unbiased estimate of its variance.) Numerically, SDEV = SDEVP * √(n/(n-1)).
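    The SDEV/SDEVP relationship is easy to verify numerically; in Python's statistics module, stdev and pstdev play the roles of SDEV and SDEVP:

```python
import math
import statistics

data = [50, 10, 5]
n = len(data)

sdev = statistics.stdev(data)     # divisor n - 1, like spreadsheet SDEV
sdevp = statistics.pstdev(data)   # divisor n, like spreadsheet SDEVP

# SDEV = SDEVP * sqrt(n / (n - 1))
assert math.isclose(sdev, sdevp * math.sqrt(n / (n - 1)))
```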
    As I understand your formula, it only works for the SDEVP interpretation, and all it does is provide another way of calculating Sm, namely, by taking the s.d. of the entire N * M dataset then adjusting it using the s.d. of the measurement error.
    So your formula is correct, but not actually useful. What's needed is a less biased estimate of the SDEV of the population.
    I'll give this some more thought...
  18. May 28, 2012 #17
    Hi everyone,
    I am having a similar problem, except that mine involves repeated measurements of the same constant quantity. Suppose I'm measuring the brightness of a star, a few times with a good telescope that gives small errors (generally of different sizes), and many times with a less sensitive instrument that gives larger errors (also generally of different sizes).

    Clearly I can get a brightness for the star by calculating an average weighted by the inverse squares of the errors on the individual measurements, but how can I get the uncertainty on the final measurement?
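    For concreteness, the weighted average described here can be sketched as follows. The brightness values and errors are made up for illustration; the uncertainty formula is the standard inverse-variance result, 1 / sqrt(sum of the weights):

```python
# Hypothetical brightness measurements (value, error) -- made-up numbers.
measurements = [(10.2, 0.1), (10.4, 0.1), (9.8, 0.5), (10.9, 0.6)]

# Weighted average with weights 1 / s_i^2, as described above.
weights = [1 / s**2 for _, s in measurements]
wmean = sum(w * b for (b, _), w in zip(measurements, weights)) / sum(weights)

# Standard inverse-variance result: the uncertainty of the weighted mean
# is 1 / sqrt(sum of the weights), so every extra measurement -- however
# noisy -- can only shrink it.
werr = 1 / sum(weights) ** 0.5
```

Note that werr is smaller than the smallest individual error, and adding another measurement can only increase the sum of the weights.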

    I don't think the above method for propagating the errors is applicable to my problem, because incorporating more data should generally reduce the uncertainty rather than increase it, even if the data are of poor quality. I should not have to throw away measurements to get a more precise result.

    Can anyone help?
  19. May 29, 2012 #18
    Hi haruspex... :smile:

    OK, let's go. Given a random variable X, you will never be able to calculate its σ (standard deviation) from a sample, ever, no matter what. The best you can do is to estimate that σ. Usually the estimate of a statistic is written with a hat on it, in this case [itex]\hat{σ}[/itex].

    Now, though the formula I wrote is for σ, it works with any of the infinitely many ways to estimate σ with a [itex]\hat{σ}[/itex]. In this case, since you don't have the whole population of rocks, SDEV and SDEVP only give you two of those infinitely many ways to get a [itex]\hat{σ}[/itex], each under its own mathematical assumptions, and, by the way, you can find situations where any of those ways is the best one.

    The formula

    [tex]σ_X = \sqrt{σ_Y^2 - σ_ε^2}[/tex]

    is not only useful, but the one that is going to work with whatever estimate [itex]\hat{σ}[/itex] you end up using for σ.

    Sooooo... yeah, that is basically it... :smile:
  20. May 29, 2012 #19
    Hi TheBigH,

    You are absolutely right!

    A way to do so is by using a Kalman filter: http://en.wikipedia.org/wiki/Kalman_filter

    In your case, for your two measurements a and b (and assuming they both have the same size), you would have a reduced uncertainty given by the expression:

    [tex]σ_{ab}^2 = σ_a^2 \left(1-\frac{σ_a^2}{σ_a^2+σ_b^2}\right)[/tex]
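    A quick numeric check (with made-up σa and σb) confirms that this expression is algebraically identical to the usual inverse-variance combination, and that the combined uncertainty is smaller than either individual one:

```python
import math

sigma_a, sigma_b = 0.3, 0.5   # made-up uncertainties of two measurements

# The expression above
var_ab = sigma_a**2 * (1 - sigma_a**2 / (sigma_a**2 + sigma_b**2))

# Algebraically identical to the usual inverse-variance combination
var_iv = 1 / (1 / sigma_a**2 + 1 / sigma_b**2)

assert math.isclose(var_ab, var_iv)
print(math.sqrt(var_ab))  # smaller than both sigma_a and sigma_b
```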
  21. May 29, 2012 #20



    As I said, the obvious approach to the OP, and most likely the one used by rano, is to take the average of measurements for each rock sample, then find the s.d. of those averages. It seems to me that your formula does the following to get exactly the same answer:
    - finds the s.d. of all the measurements as one large dataset
    - adjusts by removing the s.d. contribution from the measurement errors
    This is why I said it's not useful.
    But I was wrong to say it requires SDEVP; it works with SDEV, and shows one needs to be careful about the sample sizes. If SDEV is used in the 'obvious' method then in the final step, finding the s.d. of the means, the sample size to use is m * n, i.e. the total number of measurements.
    Working with variances (i.e. sigma-squareds) for convenience and using Vx, Vy, Ve, VPx, VPy, VPe with what I hope are the obvious meanings, your equation reads:
    VPx = VPy - VPe
    If there are m rocks and n weighings each, the relationship between SDEV and SDEVP yields:
    Vy = VPy*mn/(mn-1)
    Ve = VPe*mn/(mn-1)
    So if we derive VPx by your formula we get:
    VPx = VPy*mn/(mn-1) - VPe*mn/(mn-1)
    Hence we must use a sample size of mn to get Vx.

    I'm still not sure whether Vx is the unbiased estimate of the population variance... working on it.