Derivation of uncertainty formula

songoku
Homework Statement
This is not homework.

I want to know how to derive the formulas used to calculate uncertainty.
Relevant Equations
Uncertainty
Let's say we have two quantities, A and B.

A = a ± Δa and B = b ± Δb, where a and b are the values of A and B, and Δa and Δb are their absolute uncertainties, respectively.

Now we have a formula for C, where C = A + B. The absolute uncertainty is Δc = Δa + Δb. How can this formula be derived? Is the derivation within the scope of high school physics?

I also want to know about multiplication; let's say D = A × B. The formula for the uncertainty of D is ##\frac{\Delta d}{d}=\frac{\Delta a}{a}+\frac{\Delta b}{b}##. How can this formula be derived?

Thanks
 
songoku said:
How can this formula be derived? Is the derivation within the scope of high school physics?
Are derivatives within that scope? If so, the lemma on error propagation is useful.

If not, more haphazard reasoning is in order....

##\ ##
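For reference, here is a sketch of the derivative-based argument that the error-propagation lemma formalizes, assuming the uncertainties are small enough for a first-order Taylor expansion:

```latex
% First-order error propagation for C = f(A, B), expanded about (a, b),
% taking worst-case signs of the two error terms:
\Delta c \;\approx\; \left|\frac{\partial f}{\partial a}\right|\Delta a
              \;+\; \left|\frac{\partial f}{\partial b}\right|\Delta b
% For f = a + b, both partial derivatives are 1, so
%     \Delta c = \Delta a + \Delta b.
% For f = ab, the partials are b and a, so
%     \Delta d \approx b\,\Delta a + a\,\Delta b,
% and dividing through by d = ab gives
%     \frac{\Delta d}{d} = \frac{\Delta a}{a} + \frac{\Delta b}{b}.
```

Both formulas in the opening post drop out of the same one-line expansion; only the partial derivatives change.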
 
songoku said:
Now we have a formula for C, where C = A + B. The absolute uncertainty is Δc = Δa + Δb. How can this formula be derived? Is the derivation within the scope of high school physics?
That is one formula for the A+B case, the one which gives the maximum uncertainty in C. The derivation is somewhat obvious, I would have thought.
But in many contexts a more statistical formula is used. It treats the given uncertainties as standard deviations of normal distributions, which means the s.d. of C's distribution is the root-sum-square, ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2##.
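A quick numerical check of the root-sum-square rule. This is a sketch in Python; the normal distributions, the particular standard deviations, and the sample size are illustrative assumptions, not from the thread:

```python
import math
import random

# Monte Carlo check: for independent, normally distributed errors on A and B,
# the spread of C = A + B follows the root-sum-square rule.
def sampled_sd_of_sum(sd_a, sd_b, n=200_000, seed=0):
    rng = random.Random(seed)
    sums = [rng.gauss(0.0, sd_a) + rng.gauss(0.0, sd_b) for _ in range(n)]
    mean = sum(sums) / n
    return math.sqrt(sum((x - mean) ** 2 for x in sums) / n)

sd_a, sd_b = 0.3, 0.4
measured = sampled_sd_of_sum(sd_a, sd_b)
predicted = math.sqrt(sd_a**2 + sd_b**2)  # root-sum-square prediction: 0.5
print(measured, predicted)
```

The sampled standard deviation lands very close to the 0.5 predicted by ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2##, and well below the 0.7 that the maximum-uncertainty rule would give.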
 
BvU said:
Are derivatives within that scope? If so, the lemma on error propagation is useful.

If not, more haphazard reasoning is in order....

##\ ##
Yes, derivatives are within the scope.

haruspex said:
That is one formula for the A+B case, the one which gives the maximum uncertainty in C. The derivation is somewhat obvious, I would have thought.
But in many contexts a more statistical formula is used. It treats the given uncertainties as standard deviations of normal distributions, which means the s.d. of C's distribution is the root-sum-square, ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2##.
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?

In my note, the formula is ##(\Delta C)=(\Delta A)+(\Delta B)##

Thanks
 
songoku said:
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?
Because there is more than one way to characterize an error distribution with a single number. I believe that this was pointed out by @haruspex in #3.

For instance...

1. You can characterize an error distribution by "the largest possible value minus the smallest possible value", divided by two.

2. You can characterize an error distribution by "the average [unsigned] deviation of the measured value from the mean". This version suffers when you try to do algebra with it so...

3. You can characterize an error distribution by "the square root of the average squared deviation of the measured value from the mean". This version is nicer when you do math with it.

If you characterize your error distribution in terms of extreme values, as in (1) above, then ##\Delta C = \Delta A + \Delta B##.

If you characterize your error distribution in terms of the root mean square error, as in (3) above, then ##\Delta C^2 \approx \Delta A^2 + \Delta B^2##.

Look at it another way...

Let us say that you flip a coin. Tails = 0, heads = 1. If you flip it once, you get an average result of ##0.5## with an error bound (max - min)/2 of 0.5. If you flip it 100 times, you get an average of 50 with an error bound (max - min)/2 of 50. That is the extreme-values approach, (1) above.

Let us switch to the root mean square method. For a single flip, the average squared error is ##0.5^2 = 0.25##; this figure is also known as the "variance". Its square root is the root mean square error, ##0.5##.

If we flip the coin 100 times, the variance of the sum turns out to be the sum of the variances. The variance is ##0.25 \times 100 = 25##. The square root of that is ##5##. As a rough estimate, if you flip a coin 100 times you'll get a result between 45 and 55 about half the time.

It is a general rule of coin flipping, dice rolling, political polling, or almost any random process: with ##n## trials, the typical variation in the sum grows as ##\sqrt{n}## times the variation in a single trial.
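The coin-flip argument above can be checked directly. A Python sketch; the number of simulated runs is an arbitrary choice:

```python
import math
import random

# Simulate many runs of 100 coin flips and compare the spread of the sum
# against the sqrt(n) prediction: sd = sqrt(100) * 0.5 = 5.
def flip_run_stats(n_flips=100, runs=20_000, seed=1):
    rng = random.Random(seed)
    sums = [sum(rng.randint(0, 1) for _ in range(n_flips)) for _ in range(runs)]
    mean = sum(sums) / runs
    sd = math.sqrt(sum((s - mean) ** 2 for s in sums) / runs)
    return mean, sd

mean, sd = flip_run_stats()
print(mean, sd)  # mean near 50, sd near 5
```

The simulated spread comes out near 5, matching the root-sum-of-variances argument, and far below the worst-case bound of 50.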
 
To add to @jbriggs444's post #5, different ways of combining error ranges may be appropriate in different contexts.
An engineer designing a production process may need to ensure that component A fits inside component B, so the maximum width of the one must be less than the minimum width of the hole in the other. If we call the remaining gap C, ##\Delta C=\Delta A+\Delta B##, because the cost of getting it wrong is very high.
At one time, British banks stopped counting coins manually and weighed them instead. All silver coins had masses proportional to their value (and likewise copper). The cost of error was small compared with the savings, so the likely error was computed using root-sum-square. It turned out that it was more accurate than manual counting.
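The contrast between the two combination rules shows up directly in tolerance stack-up. A sketch with made-up part tolerances; the numbers are illustrative, not from the post:

```python
import math

# Hypothetical tolerances (mm) of parts stacked together.
tolerances = [0.10, 0.05, 0.08, 0.12]

# Extreme-value (worst-case) stack: errors simply add.
worst_case = sum(tolerances)

# Statistical (root-sum-square) stack: independent errors add in quadrature.
rss = math.sqrt(sum(t * t for t in tolerances))

print(worst_case, rss)
```

The worst-case figure is what the fit-inside-the-hole engineer must design to; the much smaller root-sum-square figure is the spread actually observed most of the time, which is why the banks' weighing scheme worked.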
 
songoku said:
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?

In my note, the formula is ##(\Delta C)=(\Delta A)+(\Delta B)##.

Thanks
Errors of observation and their treatment are as much art as science. All methods involve estimates, and in the end the choice is usually driven by how "conservative" you wish your estimate to be.
That being said, I have found these statistical methods to be remarkably powerful, useful, and surprisingly accurate. As a physicist-turned-engineer, my (mostly self-taught) practical understanding of these concepts has mattered more to practical success than any other body of knowledge. It lets you recognize the important things and ignore the rest. Well worth the effort to understand.
In some ways I am never happier than when I have a pile of data and a person (often also with piles of dollars) asking why her process is not behaving. Good fun.
 
Thank you very much for the help and explanation, BvU, haruspex, jbriggs444, hutchphd.
 