How to Convert Error to Standard Deviation?


Discussion Overview

The discussion revolves around the conversion of error to standard deviation in the context of physical measurements. Participants explore the definitions and implications of error and standard deviation, as well as the conditions under which these concepts apply.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant asks how to convert an error percentage to standard deviation for a specific measurement.
  • Another participant suggests that the standard deviation can be calculated by multiplying the error percentage by the measurement value.
  • A different participant challenges this by stating that an error does not provide enough information to directly equate it to standard deviation without additional context, such as the number of measurements taken.
  • It is proposed that if data is normally distributed, standard deviation can be interpreted in terms of confidence intervals, with specific percentages associated with certain multiples of standard deviation.
  • One participant mentions a rule of thumb that equates error to three standard deviations, implying a high level of certainty about the measurement's accuracy.
  • Another participant clarifies that while an error indicates a range, it does not specify the certainty of that range, which is crucial for determining standard deviation.
  • It is noted that the term "standard error" is often used in contexts where error is defined as one standard deviation, highlighting the importance of definitions in this discussion.

Areas of Agreement / Disagreement

Participants express differing views on the relationship between error and standard deviation, with no consensus reached on how to convert one to the other without additional information about measurement certainty and context.

Contextual Notes

Participants highlight the dependence on definitions of "error" and the assumptions regarding the distribution of measurements, which remain unresolved in the discussion.

intervoxel
How to convert error to standard deviation?

Let me explain my simple question:

I have a program that requests the standard deviation of a physical measurement.

But I only have the error, let's say v = -3.445643 ± 1.5%. How do I make the conversion?

Please help.
 
Usually the standard deviation has the same dimension as the average. In your case it would be 0.015 × 3.445643 ≈ 0.0517.
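
A minimal sketch of that arithmetic in Python (the variable names are illustrative, not from the original question):

```python
v = -3.445643      # measured value
rel_error = 0.015  # the quoted 1.5%, as a fraction

# Absolute spread in the same units as v (the sign of v is irrelevant here)
sigma = rel_error * abs(v)
print(f"{sigma:.6f}")  # 0.051685
```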
 
You don't have enough information to make that statement.

An error only needs a single measurement to apply and says that "the actual value is somewhere in an interval from a-b to a+b".

If, however, you take N measurements, you can talk about standard deviations. If the data are normally distributed (usually a good assumption), then a standard deviation s says: "68.27% of the time, my measurement is within a-s to a+s; 95.45% of the time, it is within a-2s to a+2s; 99.73% of the time, it is within a-3s to a+3s", and so on. These percentages are tabulated, and a graphical representation can be seen here: http://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg
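
Those coverage fractions are easy to reproduce; here is a short sketch using scipy.stats (it simply evaluates the standard normal CDF):

```python
from scipy.stats import norm

# Fraction of a normal distribution falling within k standard deviations
for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} sigma: {coverage:.4%}")

# within 1 sigma: 68.2689%
# within 2 sigma: 95.4500%
# within 3 sigma: 99.7300%
```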

Now if you're not too familiar with where the error is actually coming from, I think a good rule of thumb would be to say your error is equal to 3 standard deviations. In other words, you would be saying that your measurement is within +/- 1.5% 99.73% of the time.
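
Applied to the numbers above, that rule of thumb is a one-liner (again just a sketch, with the 3-sigma assumption made explicit):

```python
v = -3.445643
rel_error = 0.015  # quoted as ±1.5%

# Assumption: the quoted error band covers 3 standard deviations
sigma = rel_error * abs(v) / 3
print(f"{sigma:.6f}")  # 0.017228
```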

Edit: And like mathman said, your standard deviation should be in the same units as your measurement. You don't want to say your standard deviation is 0.5%.
 
Thank you for the answers.
 
Actually let me clarify one point a bit:

An error says "the actual value is somewhere in an interval from a-b to a+b", but it doesn't say with what certainty. That certainty has to exist, but you just don't know it right off the bat. It may be 99% or 99.999%, and it's that certainty that dictates what the standard deviation is. That's where it goes back to making a lot of measurements.
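
To make that quantitative: if you assume normality and assign a confidence c to the interval a ± b, the implied standard deviation follows from the corresponding normal quantile. A sketch (scipy again; the confidence values below are only examples):

```python
from scipy.stats import norm

b = 0.015 * 3.445643  # absolute half-width of the quoted interval

for c in (0.6827, 0.99, 0.99999):
    k = norm.ppf((1 + c) / 2)  # a ± k*sigma covers fraction c
    print(f"confidence {c}: sigma = {b / k:.6f}")
```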
 
Usually when an error is specified, it is assumed to be the "standard error", meaning the error as given by one standard deviation. It all boils down to the definition of "error" in the given context.
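
Put differently, the same quoted ±1.5% yields different standard deviations under the two conventions discussed above (a sketch of the comparison, not a recommendation):

```python
v = -3.445643
abs_error = 0.015 * abs(v)  # absolute half-width of the quoted ±1.5%

sigma_standard_error = abs_error    # "error" means one standard deviation
sigma_three_sigma = abs_error / 3   # "error" means a 3-sigma bound
print(sigma_standard_error, sigma_three_sigma)
```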
 
