Defining SD from a set of errors

  1. Apr 17, 2012 #1
    Hi, I never took stats, so maybe this doesn't make sense or is really simple; either way, any help is appreciated.

    I have a series of data, say Xi (i = 1, ..., 1000), and a series of true values for that data, say Ti. Each true value is different and independent. Is it possible to get some kind of standard deviation for the whole data set? For example, if I plot the errors (Ti - Xi), can I use a histogram of them to find some value that means something along the lines of a standard deviation, even though the samples are all independent? I want to get a single, comparable error for the entire set of data.
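    One common way to summarize such a set of residuals, assuming they come from a common error distribution, is the root-mean-square (RMS) error or the sample standard deviation of Ti - Xi. A minimal sketch in Python (the data here is made up for illustration, with measurement noise of SD 2.5 added to hypothetical true values):

    ```python
    import math
    import random

    # Hypothetical data: true values T_i and noisy measurements X_i.
    random.seed(0)
    T = [random.uniform(0, 100) for _ in range(1000)]
    X = [t + random.gauss(0, 2.5) for t in T]  # noise with SD 2.5

    residuals = [t - x for t, x in zip(T, X)]
    n = len(residuals)

    # RMS error: uses the true values directly as the reference.
    rms = math.sqrt(sum(r * r for r in residuals) / n)

    # Sample SD of the residuals: subtracts any systematic bias first.
    mean_r = sum(residuals) / n
    sd = math.sqrt(sum((r - mean_r) ** 2 for r in residuals) / (n - 1))

    print(rms, sd)
    ```

    If the measurements have no systematic offset, the RMS error and the sample SD of the residuals will be nearly equal; a large gap between them indicates a bias in the data.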