How is percentage uncertainty different from standard deviation


Discussion Overview

The discussion centers on the differences between percentage uncertainty and standard deviation, particularly in the context of measurements and their analysis. Participants explore the definitions, applications, and implications of these statistical concepts, as well as their relevance in reporting measurement accuracy and conducting follow-up calculations.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants question the necessity of calculating percentage uncertainty in addition to standard deviation and variance, seeking clarity on the additional information it may provide.
  • One participant defines percentage uncertainty in relation to bounded distributions and suggests that it could be derived from standard deviations, though they note potential biases in this approach.
  • Another participant emphasizes that variance is essential for follow-up calculations and explains the rules for combining variances versus standard deviations.
  • It is noted that a percentage uncertainty of less than 2% is generally considered negligible, which could influence reporting practices.
  • Some participants argue that variance has advantageous properties in probability and statistics, particularly when dealing with independent random variables and normal distributions.
  • There is a suggestion that calculating the variance may be sufficient for certain applications, since it leads directly to the standard deviation, which can then be quoted as the equipment error.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and utility of percentage uncertainty compared to standard deviation and variance. While some see value in calculating both, others suggest that variance alone may suffice for specific contexts. The discussion remains unresolved regarding the definitive advantages of one measure over the other.

Contextual Notes

Participants acknowledge that the definitions and applications of percentage uncertainty and standard deviation may depend on the specific context of measurements and the nature of the data being analyzed. There is also mention of potential biases when deriving percentage uncertainty from standard deviations.

Nyasha
How is percentage uncertainty different from standard deviation? I have five measurements and I calculated the average, standard deviation and variance. Do I need to calculate the percentage uncertainty? Does percentage uncertainty give me any more information that the other values I calculated above don't give me?
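
For concreteness, here is a minimal sketch (with made-up numbers, and taking "percentage uncertainty" to mean the sample standard deviation as a fraction of the mean, which is one common convention but not the only one) of how the quantities relate:

```python
import numpy as np

# Five hypothetical measurements (made-up values, just for illustration)
x = np.array([10.2, 9.8, 10.1, 10.4, 9.9])

mean = x.mean()
var = x.var(ddof=1)      # sample variance (n - 1 in the denominator)
std = np.sqrt(var)       # sample standard deviation

# One common convention: percentage uncertainty = standard deviation / mean * 100
pct_uncertainty = 100 * std / abs(mean)

print(f"mean = {mean:.3f}")
print(f"variance = {var:.4f}")
print(f"standard deviation = {std:.4f}")
print(f"percentage uncertainty = {pct_uncertainty:.2f} %")
```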
 
How would you define percentage uncertainty? In some cases you might know that a distribution is bounded. E.g. a measurement error might have a uniform distribution over a known range, so in that case you could quote an error range, as a percentage or otherwise.
Or if you define uncertainty as some number of standard deviations, I suppose you could divide that by the mean. However, the mean you divide by is itself uncertain, so this might not be the least biased estimate.
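
To illustrate the bounded case mentioned above: if the error is assumed uniform over a known range ±a, its standard deviation is a/√3, so quoting the half-range and quoting one standard deviation give different percentages. A minimal sketch with made-up numbers:

```python
import numpy as np

reading = 25.0      # hypothetical measured value
half_range = 0.5    # instrument quoted as +/- 0.5, uniform error assumed

# For a uniform distribution on [-a, a], the standard deviation is a / sqrt(3)
sigma = half_range / np.sqrt(3)

pct_from_half_range = 100 * half_range / reading   # error range as a percentage
pct_from_sigma = 100 * sigma / reading             # one standard deviation as a percentage

print(f"half-range as %: {pct_from_half_range:.2f} %")
print(f"1-sigma as %:    {pct_from_sigma:.2f} %")
```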
 
Usually you would measure and report an average and a standard deviation.
From these you can derive all the others.

The reason to calculate a variance is to use it in follow-up calculations.
When you add measured quantities, you add their variances to find the variance of the sum.
You should not add standard deviations.

The reason to calculate a percentage uncertainty is:
1. To get a sense of how accurate the measurement is.
As a rule of thumb an uncertainty of less than 2% is deemed negligible.
2. To use in follow-up calculations.
When you multiply by an exact number, the absolute uncertainty scales by that number while the percentage uncertainty stays the same; when you multiply measured quantities together, their percentage uncertainties combine to give the percentage uncertainty of the product.
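
A minimal sketch of these combination rules under the usual assumption of independent errors (first-order propagation with quadrature combination; the numbers are made up and the exact convention may differ from the one intended above):

```python
import numpy as np

# Two hypothetical measured quantities with their standard deviations
a, sigma_a = 12.0, 0.3
b, sigma_b = 7.5, 0.2

# Sum: variances add (for independent errors); standard deviations do not
s = a + b
sigma_s = np.sqrt(sigma_a**2 + sigma_b**2)

# Product: relative (percentage) uncertainties combine, here in quadrature
p = a * b
rel_p = np.sqrt((sigma_a / a)**2 + (sigma_b / b)**2)
sigma_p = p * rel_p

print(f"sum     = {s:.2f} +/- {sigma_s:.2f}")
print(f"product = {p:.2f} +/- {sigma_p:.2f}  ({100 * rel_p:.1f} %)")

# Multiplying by an exact constant c scales the absolute uncertainty by c
# but leaves the percentage uncertainty unchanged
c = 3.0
print(f"c*a     = {c * a:.2f} +/- {c * sigma_a:.2f}  ({100 * sigma_a / a:.1f} %)")
```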
 
Following on from I Like Serena's reply, variance is a natural measure in probability and statistics, and it turns out that you get nice properties with respect to general forms of analyzing uncertainty by using variances in the way they are defined.

One example is adding variances for independent random variables: Var[X + Y] = Var[X] + Var[Y], together with Cov(X, X) = Var[X].

Plus you get all kinds of nice things, especially with normal distributions (where the variance/standard deviation is a natural parameter).

There are other things but this gives you an idea of why it is useful.
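
A quick Monte Carlo check of the additivity of variances for independent variables (the distributions here are arbitrary choices, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Independent X and Y with arbitrary, made-up distributions
x = rng.normal(loc=2.0, scale=1.5, size=n)    # Var[X] = 2.25
y = rng.uniform(low=-3.0, high=3.0, size=n)   # Var[Y] = 6**2 / 12 = 3.0

print("Var[X] + Var[Y] =", x.var() + y.var())
print("Var[X + Y]      =", (x + y).var())     # close to 2.25 + 3.0 = 5.25
```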
 
I like Serena said:
Usually you would measure and report an average and a standard deviation.
From these you can derive all the others.

The reason to calculate a variance is to use it in follow-up calculations.
When you add measured quantities, you add their variances to find the variance of the sum.
You should not add standard deviations.

The reason to calculate a percentage uncertainty is:
1. To get a sense of how accurate the measurement is.
As a rule of thumb an uncertainty of less than 2% is deemed negligible.
2. To use in follow-up calculations.
When you multiply by an exact number, the absolute uncertainty scales by that number while the percentage uncertainty stays the same; when you multiply measured quantities together, their percentage uncertainties combine to give the percentage uncertainty of the product.

So I guess the variance is good enough. After adding up the variances I just end up calculating the standard deviation and I will use that as the error in my equipment.
 
Nyasha said:
So I guess the variance is good enough. After adding up the variances I just end up calculating the standard deviation and I will use that as the error in my equipment.

Sounds good. :)
 
