Derivation of uncertainty formula

  • #1
songoku
Homework Statement
This is not homework.

I want to know how to derive the formulas used to calculate uncertainty.
Relevant Equations
Uncertainty
Let's say we have two quantities, A and B.

A = a ± Δa and B = b ± Δb, where a and b are the values of A and B, and Δa and Δb are their absolute uncertainties, respectively.

Now we have a formula for C, where C = A + B. The absolute uncertainty is Δc = Δa + Δb. How do I derive this formula? Is the derivation still within the scope of high school?

I also want to know about multiplication; let's say D = A × B. The formula for the uncertainty of D is ##\frac{\Delta d}{d}=\frac{\Delta a}{a}+\frac{\Delta b}{b}##. How do I derive this formula too?

Thanks
 
  • #2
songoku said:
How do I derive this formula? Is the derivation still within the scope of high school?
Are derivatives within that scope? If so, the lemma on error propagation is useful.

If not, more haphazard reasoning is in order....

##\ ##
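A minimal sketch of that derivative-based rule, assuming the uncertainties are small and the error terms are added in the worst-case (linear) sense rather than in quadrature: for a function ##f(a,b)##,
$$\Delta f \approx \left|\frac{\partial f}{\partial a}\right|\Delta a + \left|\frac{\partial f}{\partial b}\right|\Delta b.$$
For ##f = a + b## both partial derivatives are 1, giving ##\Delta f = \Delta a + \Delta b##. For ##f = ab##, ##\Delta f \approx |b|\,\Delta a + |a|\,\Delta b##, and dividing through by ##|f| = |ab|## gives ##\frac{\Delta f}{|f|} \approx \frac{\Delta a}{|a|} + \frac{\Delta b}{|b|}##.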
 
  • Like
Likes songoku and PhDeezNutz
  • #3
songoku said:
Now we have a formula for C, where C = A + B. The absolute uncertainty is Δc = Δa + Δb. How do I derive this formula? Is the derivation still within the scope of high school?
That is one formula for the A+B case, the one which gives the maximum uncertainty in C. The derivation is somewhat obvious, I would have thought.
But in many contexts a more statistical formula is used. It treats the given uncertainties as standard deviations of normal distributions, which means the s.d. of C's distribution is the root-sum-square, ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2##.
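A minimal sketch of that worst-case derivation, assuming A and B can independently lie anywhere in their ranges: the largest possible value of ##C## is ##(a+\Delta a)+(b+\Delta b)=(a+b)+(\Delta a+\Delta b)## and the smallest is ##(a+b)-(\Delta a+\Delta b)##, so the half-width of the interval for ##C## is ##\Delta c = \Delta a + \Delta b##.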
 
  • Like
Likes songoku
  • #4
BvU said:
Are derivatives within that scope? If so, the lemma on error propagation is useful.

If not, more haphazard reasoning is in order....

##\ ##
Yes, derivatives are within the scope.

haruspex said:
That is one formula for the A+B case, the one which gives the maximum uncertainty in C. The derivation is somewhat obvious, I would have thought.
But in many contexts a more statistical formula is used. It treats the given uncertainties as standard deviations of normal distributions, which means the s.d. of C's distribution is the root-sum-square, ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2##.
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?

In my notes, the formula is ##(\Delta C)=(\Delta A)+(\Delta B)##.

Thanks
 
  • #5
songoku said:
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?
Because there is more than one way to characterize an error distribution with a single number. I believe that this was pointed out by @haruspex in #3.

For instance...

1. You can characterize an error distribution by "the largest possible value minus the smallest possible value", divided by two.

2. You can characterize an error distribution by "the average [unsigned] deviation of the measured value from the mean". This version suffers when you try to do algebra with it so...

3. You can characterize an error distribution by "the square root of the average squared deviation of the measured value from the mean". This version is nicer when you do math with it.

If you characterize your error distribution in terms of extreme values as in (1) above, then ##\Delta C = \Delta A + \Delta B##.

If you characterize your error distribution in terms of the root mean square error as in (3) above, then ##\Delta C^2 \approx \Delta A^2 + \Delta B^2##.

Look at it another way...

Let us say that you flip a coin. Tails = 0, Heads = 1. If you flip it once you get an average result of ##0.5## with an error bound, (max - min)/2, of 0.5. If you flip it 100 times you get an average of 50 with an error bound, (max - min)/2, of 50. That is with the extreme values approach, (1) above.

Let us switch to using the root mean square method. The average squared error is ##0.5^2 = 0.25##. This figure is also known as the "variance". The square root of that is the root mean square error, ##0.5##.

If we flip the coin 100 times, the variance of the sum turns out to be the sum of the variances. The variance is ##0.25 \times 100 = 25##. The square root of that is ##5##. As a rough estimate, if you flip a coin 100 times you'll get a result between 45 and 55 about half the time.

It is a general rule of coin flipping, dice rolling, political polling or almost any random process. If you have ##n## trials, the average variation in the sum goes as ##\sqrt{n}## times the variation in a single trial.
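A quick numerical sketch of this coin-flip example in Python (the trial counts are arbitrary), as a way to check the numbers above:

```python
import random
import statistics

# One experiment: flip a fair coin (tails = 0, heads = 1) n times and sum the results.
def flip_sum(n):
    return sum(random.randint(0, 1) for _ in range(n))

n_flips = 100
trials = 10_000  # number of repeated 100-flip experiments

sums = [flip_sum(n_flips) for _ in range(trials)]

# A single flip has variance 0.25, so the sum of 100 independent flips should have
# variance about 25, i.e. a standard deviation of about 5.
print("mean of sums   :", statistics.mean(sums))    # ~ 50
print("std dev of sums:", statistics.pstdev(sums))  # ~ 5

# Fraction of experiments landing between 45 and 55.
print("fraction in 45..55:", sum(45 <= s <= 55 for s in sums) / trials)
```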
 
  • Like
Likes songoku, hutchphd and topsquark
  • #6
To add to @jbriggs444's post #5, different ways of combining error ranges may be appropriate in different contexts.
An engineer designing a production process may need to ensure that component A fits inside component B, so the maximum width of the one must be less than the minimum width of the hole in the other. If we call the remaining gap C, the worst-case rule ##\Delta C=\Delta A+\Delta B## is used, because the cost of getting it wrong is very high.
At one time, British banks stopped counting coins manually and weighed them instead. All silver coins had masses proportional to their value (and likewise copper). The cost of error was small compared with the savings, so the likely error was computed using root-sum-square. It turned out that it was more accurate than manual counting.
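As a small illustration with made-up numbers: if ##\Delta A = 0.3\,##mm and ##\Delta B = 0.4\,##mm, the worst-case rule gives ##\Delta C = 0.7\,##mm, while the root-sum-square rule gives ##\Delta C = \sqrt{0.3^2+0.4^2} = 0.5\,##mm; the engineer designing for fit would use the larger, more conservative figure.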
 
  • Like
Likes songoku and jbriggs444
  • #7
songoku said:
I also see a similar expression in the link given by @BvU, but how can ##(\Delta C)^2=(\Delta A)^2+(\Delta B)^2## change into ##(\Delta C)=(\Delta A)+(\Delta B)##?

In my notes, the formula is ##(\Delta C)=(\Delta A)+(\Delta B)##.

Thanks
Errors of observation and their treatment are as much art as science. All methods involve estimates, and in the end the choice is usually driven by how "conservative" you wish your estimate to be.
That being said, I have found these statistical methods to be remarkably powerful, useful, and surprisingly accurate. As a physicist-turned-engineer, my (mostly self-taught) practical understanding of these concepts has been more important to practical success than any other body of knowledge. It allows you to recognize the important things and ignore the rest. Well worth the effort to understand.
In some ways I am never happier than when I have a pile of data and a person (often also with piles of dollars) asking why her process is not behaving. Good fun.
 
  • Like
Likes songoku
  • #8
Thank you very much for the help and explanation, BvU, haruspex, jbriggs444, hutchphd.
 

FAQ: Derivation of uncertainty formula

What is the purpose of deriving an uncertainty formula?

The purpose of deriving an uncertainty formula is to quantify the uncertainty or possible error in a measurement or calculation. This helps in understanding the reliability and precision of the results obtained from experiments or mathematical models.

How do you derive the uncertainty formula for a function of multiple variables?

To derive the uncertainty formula for a function of multiple variables, you use partial derivatives. If you have a function \( f(x, y, z, \ldots) \), the uncertainty in \( f \), denoted as \( \Delta f \), can be approximated using the formula
\[ \Delta f = \sqrt{\left( \frac{\partial f}{\partial x} \Delta x \right)^2 + \left( \frac{\partial f}{\partial y} \Delta y \right)^2 + \left( \frac{\partial f}{\partial z} \Delta z \right)^2 + \ldots} \]
where \( \Delta x, \Delta y, \Delta z, \ldots \) are the uncertainties in the variables \( x, y, z, \ldots \).
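For example, applying this to a simple product \( f = xy \) gives
\[ \Delta f = \sqrt{\left( y \Delta x \right)^2 + \left( x \Delta y \right)^2}, \qquad \frac{\Delta f}{|f|} = \sqrt{\left( \frac{\Delta x}{x} \right)^2 + \left( \frac{\Delta y}{y} \right)^2}, \]
which is the quadrature (root-sum-square) counterpart of the relative-uncertainty rule asked about in the thread.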

What assumptions are made when deriving an uncertainty formula?

When deriving an uncertainty formula, several assumptions are typically made:
1. The uncertainties in the variables are independent.
2. The uncertainties are small relative to the values of the variables.
3. The function can be approximated linearly within the range of the uncertainties.
These assumptions allow the use of partial derivatives and the propagation of uncertainties through addition in quadrature.

Why is the square root used in the uncertainty formula?

The square root is used in the uncertainty formula to combine the individual contributions of the uncertainties in a way that reflects their combined effect. Since uncertainties are considered to be independent and random, their variances (squares of the standard deviations) add up. Taking the square root of the sum of these variances gives the combined standard uncertainty.

Can the derived uncertainty formula be applied to any type of function?

While the derived uncertainty formula can be applied to many functions, it works best for functions that are smooth and can be approximated linearly within the range of the uncertainties. For highly non-linear functions or functions with large uncertainties, more sophisticated methods such as Monte Carlo simulations might be needed to accurately estimate the uncertainty.
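A minimal Python sketch of that Monte Carlo approach, assuming normally distributed inputs; the example function and the numbers are purely illustrative:

```python
import math
import random
import statistics

# Illustrative nonlinear function of two measured quantities.
def f(x, y):
    return x * math.exp(y)

# Assumed measurements: x = 2.0 +/- 0.1 and y = 0.50 +/- 0.05 (standard uncertainties).
x0, dx = 2.0, 0.1
y0, dy = 0.50, 0.05

# Propagate by sampling the inputs and looking at the spread of the outputs.
samples = [f(random.gauss(x0, dx), random.gauss(y0, dy)) for _ in range(100_000)]

print("f(x0, y0)        :", f(x0, y0))
print("Monte Carlo mean :", statistics.mean(samples))
print("Monte Carlo sigma:", statistics.pstdev(samples))

# For comparison, the linearised estimate from the partial-derivative formula above.
df_dx = math.exp(y0)       # partial derivative of f with respect to x
df_dy = x0 * math.exp(y0)  # partial derivative of f with respect to y
print("Linearised sigma :", math.hypot(df_dx * dx, df_dy * dy))
```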
