How to Convert Error to Standard Deviation?

AI Thread Summary
To convert an error to a standard deviation, first recognize that a quoted error gives a range within which the actual value lies, but does not by itself specify with what certainty. One estimate treats the error as three standard deviations, which means the measurement falls within ±1.5% approximately 99.73% of the time. The standard deviation should be expressed in the same units as the measurement itself, not as a percentage. How an error relates to a standard deviation is context-dependent, often depending on the number of measurements taken and their distribution. Ultimately, the definition of "error" in the specific context dictates how it relates to the standard deviation.
intervoxel
How to convert error to standard deviation?

Let me explain my simple question:

I have a program that requests the standard deviation of a physical measurement.

But I only have the error, let's say v = -3.445643 ± 1.5%. How do I make the conversion?

Please
 
Usually the standard deviation has the same dimension as the average. In your case it would be 0.015 × 3.445643.
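As a minimal sketch of that arithmetic, assuming (for the moment) that the quoted ±1.5% is to be read directly as one standard deviation, the conversion to the measurement's own units is just a multiplication:

```python
# Convert a relative (percentage) error into an absolute spread in the
# measurement's own units. Variable names are illustrative; this assumes
# the quoted 1.5% represents one standard deviation, which the rest of
# the thread questions.
value = -3.445643          # measured value
relative_error = 0.015     # the quoted +/- 1.5%

absolute_sigma = abs(value) * relative_error
print(absolute_sigma)      # about 0.0517, in the same units as value
```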
 
You don't have enough information to make that statement.

An error only needs a single measurement to apply and says that "the actual value is somewhere in an interval from a-b to a+b".

If, however, you take N measurements, you can talk about standard deviations. If the data are normally distributed (usually a good assumption) then what a standard deviation s is saying is "68.27% of the time, my measurement is within a-s to a+s. 95.45% of the time, my measurement is within a-2s to a+2s. 99.73% of the time, my measurement is within a-3s to a+3s" and so on. These percentages are tabulated and a graphical representation can be seen here: http://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg
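Those tabulated percentages follow from the normal distribution itself: the probability of landing within k standard deviations of the mean is erf(k/√2). A short check, using only the standard library:

```python
import math

# Coverage probability of a normal distribution within k standard
# deviations of the mean: P(|X - mu| < k*sigma) = erf(k / sqrt(2)).
def coverage(k):
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} sigma: {coverage(k):.4%}")
```

Running this reproduces the 68.27%, 95.45%, and 99.73% figures quoted above.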

Now if you're not too familiar with where the error is actually coming from, I think a good rule of thumb would be to say your error is equal to 3 standard deviations. In other words, you would be saying that your measurement is within +/- 1.5% 99.73% of the time.
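The 3-sigma rule of thumb from the paragraph above can be sketched the same way; note this is one possible convention, not a universal definition of "error":

```python
# Rule of thumb: treat the quoted error bar as a 3-sigma bound,
# so sigma = (absolute error) / 3. Values are the ones from the thread.
value = -3.445643
relative_error = 0.015                     # the quoted +/- 1.5%

absolute_error = abs(value) * relative_error
sigma = absolute_error / 3                 # error taken as 3 standard deviations
print(sigma)                               # about 0.0172
```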

Edit: And like mathman said, your standard deviation should be in the same units as your measurement. You don't want to say your standard deviation is 0.5%.
 
I thank you for the answers.
 
Actually let me clarify one point a bit:

An error says "the actual value is somewhere in an interval from a-b to a+b" but it doesn't say with what certainty. That certainty does exist, though; you just don't know it right off the bat. It may be 99% or 99.999%, but it's that certainty that will dictate what the standard deviation is. That's where it goes back to making a lot of measurements.
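If repeating the measurement is an option, you can estimate the standard deviation directly from the sample rather than guessing what the quoted error means. A sketch with made-up readings (the data here are purely illustrative):

```python
import statistics

# Estimate the standard deviation from repeated measurements.
# These readings are invented for illustration only.
readings = [-3.41, -3.47, -3.44, -3.50, -3.39, -3.46]

mean = statistics.mean(readings)
sigma = statistics.stdev(readings)   # sample standard deviation (n-1 denominator)
print(mean, sigma)
```

Here `statistics.stdev` uses the sample (n-1) formula, which is the usual choice when the readings are a sample rather than the whole population.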
 
Usually when an error is specified it is assumed to be the "standard error", meaning the error as given by one standard deviation. It all boils down to the definition of "error" in the given context.
 
