
Total standard deviation of a measured function.

  1. Mar 15, 2012 #1
    Hello folks,

    I've got a bit of a problem estimating the overall error in the measurement of a function that I have sampled at sixteen independent points, with an error bar calculated for each point. The problem is I'm not sure how to combine all sixteen errors into a single overall error for my measurements in one go.

    At the moment I am simply using the standard error of the weighted mean: [itex]\sqrt{1/\sum_i w_i}[/itex], where the weights are [itex]w_i = 1/\sigma_i^2[/itex]. The problem is this does not give the value for the total standard deviation that I expect, so I don't trust it. Is this the correct method? Are there any others?

    It might be important to point out that the errors on f(x) vary approximately as x^2 + C: they are large both as x approaches 0 and as x grows large. I therefore needed a weighted method to estimate the total standard deviation, since an unweighted average changes depending on the range of x I sample over, e.g. the error goes to infinity if I average over an infinite range.
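    The inverse-variance weighting described above can be sketched as follows. This is a minimal illustration, not the OP's actual data: the arrays `f` and `sigma` are made-up placeholders for the sixteen measured values and their error bars.

    ```python
    import numpy as np

    # Hypothetical measurements f(x_i) and their per-point error bars sigma_i
    # (illustrative numbers only; substitute your own sampled values).
    f = np.array([1.2, 0.9, 1.1, 1.0])
    sigma = np.array([0.3, 0.1, 0.2, 0.15])

    w = 1.0 / sigma**2                     # inverse-variance weights w_i = 1/sigma_i^2
    f_mean = np.sum(w * f) / np.sum(w)     # weighted mean of the measurements
    sigma_mean = np.sqrt(1.0 / np.sum(w))  # standard error of the weighted mean
    ```

    Note that `sigma_mean` is the uncertainty on the *mean*, so it comes out smaller than any individual error bar; that may be why it differs from the "total standard deviation" you were expecting.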

    Thanks for any help physics forum!
  3. Mar 15, 2012 #2
    Science Advisor

    Hey ElementalArea and welcome to the forums.

    If the errors really are independent, then the total variance will be the sum of the variances at each point.

    If the errors are not independent, then you will need to take into account what is known as a covariance matrix. Instead of having non-zero terms only on the diagonal, the covariance matrix will have non-zero off-diagonal entries, meaning that pairs of points interact and those interactions affect the total variance of the sum of all the random variables.

    If they really are independent then, as I said above, you just sum all the variances. If your variances change depending on x, and the points are independent, you still add up the variances; the variances will simply be some function of the x-value.

    Just for clarification it might help us if you post some kind of simple graph or plot of the function so we get a better idea of the function and how the errors change.

    So basically the formula to use is Var[T] = Var[A + B + C + ... ] = Var[A] + Var[B] + Var[C] + ..., but you can only use this if all of the random variables are independent (very important!)
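    The rule above is easy to check numerically. Here is a quick simulation, with made-up noise levels, showing that the empirical variance of a sum of independent Gaussian terms matches the sum of the individual variances:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Independent Gaussian noise at four points, each with a different spread
    sigmas = np.array([0.5, 1.0, 2.0, 0.25])
    samples = rng.normal(0.0, sigmas, size=(100_000, 4))
    total = samples.sum(axis=1)  # sum the four random variables per trial

    empirical = total.var()            # Var[T] measured from the simulation
    predicted = np.sum(sigmas**2)      # Var[A] + Var[B] + ... for independent terms
    ```

    With correlated terms the two numbers would disagree, and the off-diagonal covariance terms would be needed to account for the difference.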
  4. Mar 15, 2012 #3
    Hi chiro! Thanks for the reply. As requested, I've included two plots showing what the function is (the bold line in the first graph) and how the errors vary as a function of (in this case) l (the second graph).

    The error bars in the first graph are on equally sized log bins (chosen specifically to ensure the errors are independent!), but hopefully now you can see why simply adding the variances gives me problems. I can, for example, extend the first graph to any value of l, from the 4000 displayed there all the way to 10000 if I wanted, but the errors shoot off to infinity as l goes large, so my total variance becomes infinite too!

    For example: if I sum the variances in the first graph, I get 0.16, perhaps not too unreasonable but still larger than any of the individual variances on that graph. However, if I extend the plot to, say, l = 6000, the sum of the variances suddenly shoots up to ~100000!

    If it makes it any clearer, I'm hoping to calculate a signal-to-noise ratio from graph one, where the bold line represents the signal and the error bars the noise.
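    For independent bins, one common convention is to compute a signal-to-noise ratio per bin and combine them in quadrature; a rough sketch with placeholder numbers (not the values from the plots):

    ```python
    import numpy as np

    # Illustrative binned signal values and per-bin error bars (made-up data)
    signal = np.array([0.8, 1.0, 0.6, 0.3])
    sigma = np.array([0.1, 0.2, 0.3, 0.6])

    snr_per_bin = signal / sigma                   # S/N in each log bin
    snr_total = np.sqrt(np.sum(snr_per_bin**2))    # independent bins add in quadrature
    ```

    Combining this way has the nice property that high-l bins with huge error bars contribute almost nothing, so extending the range does not blow up the result the way a raw sum of variances does.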

    [Image: knoxplot by ElementalArea, on Flickr]

    [Image: Knoxerrors by ElementalArea, on Flickr]
  5. Mar 15, 2012 #4
    Science Advisor

    Do you want to get your signal/noise information for a continuous signal?

    In other words, are you assuming that the signal is continuous (or close enough to it), and that you want to find the signal/noise ratio given some distribution for the noise that is added to the uncorrupted signal?
  6. Mar 15, 2012 #5
    I am assuming the original signal is continuous, yes, and it has errors arising from a sum of random noise, limited sampling (low l), and resolution (high l). However, the signal and errors in those two graphs are binned into log bins before calculating things like the signal/noise (to ensure they are independent measurements, which is related to the experimental set-up).
  7. Mar 15, 2012 #6
    Science Advisor

    It's nearly midnight here, but if someone doesn't chime in before I do, I'll take a look at it tomorrow.
  8. Mar 15, 2012 #7
    Any help will be very welcome Chiro, thanks!