Involving Errors In Calculations

In summary, when calculating the uncertainty on a function of two measured variables, you can use calculus (partial derivatives) or the standard rules for error propagation to determine the error on the final result. In either case, the final error is found by adding the effect of each variable's error in quadrature. Alternatively, you can shift each variable by its error in turn, recompute the result, and combine the resulting changes in quadrature; when the errors are small, this gives essentially the same answer.
  • #1
richnfg
Say I wanted to do a calculation with numbers that had an error attached...what would the error be on the final answer?

For example, say I was trying to work out the distance v for a lens setup using this formula:

[tex]
\frac{1}{v}=\frac{1}{u}+\frac{1}{f}

[/tex]

say u was 50 cm plus or minus (can't find the symbol) 5 mm
and f was 10 cm plus or minus 5 mm

what would the error be on the final answer?

Thanks!
 
  • #2
Error propagation can be expressed most easily with calculus. If you have a function of one variable, you can estimate the uncertainty on that function by seeing how much it changes when the variable changes by its uncertainty. That is:

[tex]f=f(x)[/tex]
[tex]\Delta f=\frac{\partial f}{\partial x}\vert_{x_0}\Delta x[/tex]

where [tex]\Delta[/tex] indicates the uncertainty (or error) and [itex]x_0[/itex] is the measured value of the variable. However, your function has two variables, so you have to combine the two contributions (derivative times error) in quadrature to get the total error on f:

[tex]f=f(x,y)[/tex]
[tex]\Delta f=\sqrt{(\frac{\partial f}{\partial x}\vert_{x_0}\Delta x)^2+(\frac{\partial f}{\partial y}\vert_{y_0}\Delta y)^2}[/tex]

Can you calculate what this gives for your case?
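
For anyone who wants to see the formula in action numerically, here is a minimal Python sketch of the quadrature formula applied to the equation in the question (the function and variable names are mine, and the derivatives are taken numerically rather than by hand):

[code]
import math

def v_of(u, f):
    """v from the equation in the question: 1/v = 1/u + 1/f."""
    return 1.0 / (1.0 / u + 1.0 / f)

def propagate(u, f, du, df, h=1e-6):
    """Quadrature sum of the partial-derivative terms, using numerical derivatives."""
    dv_du = (v_of(u + h, f) - v_of(u - h, f)) / (2 * h)   # dv/du at (u, f)
    dv_df = (v_of(u, f + h) - v_of(u, f - h)) / (2 * h)   # dv/df at (u, f)
    return math.sqrt((dv_du * du) ** 2 + (dv_df * df) ** 2)

# Values from the question: u = 50 cm ± 0.5 cm, f = 10 cm ± 0.5 cm (5 mm = 0.5 cm)
u, f, du, df = 50.0, 10.0, 0.5, 0.5
print(v_of(u, f), propagate(u, f, du, df))   # v ≈ 8.33 cm, Δv ≈ 0.35 cm
[/code]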
 
  • #3
wow, sorry but that really confused me. :(

Isn't there a simpler way?
 
  • #4
You can also do it without calculus. Take the error in each quantity one by one, calculate the effect that it alone has on the final result, then add those effects in quadrature. Using richnfg's equation:

First calculate [itex]v[/itex] without using the errors on [itex]u[/itex] and [itex]f[/itex] at all.

[tex]\frac {1}{v} = \frac {1}{u} + \frac {1}{f}[/tex]

Next, change [itex]u[/itex] by its error and recalculate [itex]v[/itex]. Let's call this [itex]v_u[/itex]:

[tex]\frac {1}{v_u} = \frac {1}{u + \Delta u} + \frac {1}{f}[/tex]

Next, go back to the original [itex]u[/itex], change [itex]f[/itex] by its error, and recalculate [itex]v[/itex]. Let's call this [itex]v_f[/itex]:

[tex]\frac {1}{v_f} = \frac {1}{u} + \frac {1}{f + \Delta f}[/tex]

Calculate the differences:

[tex]\Delta v_u = v_u - v[/tex]

[tex]\Delta v_f = v_f - v[/tex]

and add them in quadrature to get the total error in [itex]v[/itex]:

[tex]\Delta v = \sqrt {(\Delta v_u)^2 + (\Delta v_f)^2}[/tex]

This method gives the same result as SpaceTiger's method with the derivatives, in the limit as the errors approach zero.
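
As a concrete check on the numbers from the original question, here is a minimal Python sketch of this shift-and-recompute recipe (the variable names are mine, not part of the thread):

[code]
import math

def v_of(u, f):
    """v from 1/v = 1/u + 1/f."""
    return 1.0 / (1.0 / u + 1.0 / f)

# Values from the question: u = 50 cm ± 0.5 cm, f = 10 cm ± 0.5 cm
u, f, du, df = 50.0, 10.0, 0.5, 0.5

v    = v_of(u, f)            # central value, ≈ 8.33 cm
dv_u = v_of(u + du, f) - v   # effect of shifting u alone
dv_f = v_of(u, f + df) - v   # effect of shifting f alone

dv = math.sqrt(dv_u ** 2 + dv_f ** 2)
print(f"v = {v:.2f} cm, error ≈ {dv:.2f} cm")   # ≈ 8.33 cm ± 0.34 cm
[/code]

The small difference from the derivative-based answer (about 0.35 cm) is expected, since the two approaches only agree exactly in the limit of vanishingly small errors.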
 
  • #5
richnfg said:
wow, sorry but that really confused me. :(

Isn't there a simpler way?

Well, if you don't know calculus, then the simplest way is to use the standard rules for error propagation. For example:

[tex]f(x,y)=x+y[/tex]
[tex]\Delta f=\sqrt{(\Delta x)^2+(\Delta y)^2}[/tex]

[tex]f(y)=1/y[/tex]
[tex]\frac{\Delta f}{f}=\frac{\Delta y}{y}[/tex]
[tex]\Delta f = \frac{\Delta y}{y^2}[/tex]

You can combine those two to get the error on your equation.

EDIT: If your errors are small relative to the value of the measurements, jtbell's suggestion is good too...and perhaps a bit easier to think about.

EDIT 2: Also note that the "f" that I'm using is not the focal length in your equation. It's an arbitrary function.
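
Putting those two rules together for the equation in the question, a minimal Python sketch (the variable names are mine) might look like this:

[code]
import math

# Values from the question: u = 50 cm ± 0.5 cm, f = 10 cm ± 0.5 cm
u, f, du, df = 50.0, 10.0, 0.5, 0.5

# Reciprocal rule: the relative error of 1/y equals the relative error of y,
# so the absolute error of 1/u is du/u**2 (and likewise for 1/f).
d_inv_u = du / u ** 2
d_inv_f = df / f ** 2

# Sum rule: the errors of added terms combine in quadrature.
inv_v   = 1.0 / u + 1.0 / f
d_inv_v = math.sqrt(d_inv_u ** 2 + d_inv_f ** 2)

# Reciprocal rule again to go from 1/v back to v.
v  = 1.0 / inv_v
dv = d_inv_v * v ** 2

print(f"v = {v:.2f} cm ± {dv:.2f} cm")   # ≈ 8.33 cm ± 0.35 cm
[/code]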
 

FAQ: Involving Errors In Calculations

1. What are the types of errors that can occur in calculations?

There are three main types of errors that can occur in calculations: random errors, systematic errors, and blunders. Random errors are due to factors such as human error or equipment limitations and can be reduced through repeated measurements. Systematic errors are consistent and can be caused by faulty equipment or incorrect calibration. Blunders are large, obvious mistakes that can be easily corrected.

2. How can I minimize errors in my calculations?

To minimize errors in calculations, it is important to use accurate and calibrated equipment, double-check all measurements and calculations, and repeat experiments multiple times to reduce random errors. Additionally, it is important to be aware of common sources of errors and try to avoid them, such as parallax error or calculation mistakes.

3. What is the difference between precision and accuracy in calculations?

Precision refers to how close multiple measurements are to each other, while accuracy refers to how close a measurement is to the true or accepted value. A calculation can be precise but not accurate if there are consistent errors, or it can be accurate but not precise if there are random errors. Both precision and accuracy are important in ensuring the reliability of calculations.
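
For instance, a short Python sketch (the sample readings and the "true value" below are made up purely for illustration) shows how the two ideas differ for a set of repeated measurements:

[code]
import statistics

true_value   = 9.81                       # accepted value, e.g. g in m/s^2
measurements = [9.52, 9.55, 9.53, 9.54]   # tightly clustered but offset from 9.81

spread = statistics.stdev(measurements)              # precision: scatter of the readings
offset = statistics.mean(measurements) - true_value  # accuracy: distance from the true value

print(f"precision (std dev): {spread:.3f}")   # small spread -> precise
print(f"accuracy (offset):   {offset:.3f}")   # large offset -> not accurate (systematic error)
[/code]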

4. How do significant figures play a role in calculations?

Significant figures are used to express the precision of a measurement or calculation. The number of significant figures in a calculation should be based on the least precise measurement used. When performing calculations, it is important to follow the rules for significant figures to ensure the final result is not overestimated or underestimated.
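
As a small illustration (the rounding helper below is my own, not a standard library routine), rounding a computed result to the significant figures of the least precise input might look like this in Python:

[code]
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - int(floor(log10(abs(x)))) - 1)

# 12.5 cm (3 sig figs) divided by 3.0 s (2 sig figs):
# report the result to 2 sig figs, matching the least precise input.
speed = 12.5 / 3.0              # 4.1666... cm/s
print(round_sig(speed, 2))      # 4.2
[/code]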

5. What should I do if I encounter errors in my calculations?

If you encounter errors in your calculations, first check for any mistakes or blunders that may have been made. If the error is due to a systematic error, try to identify and correct the source of the error. If the error is due to random errors, you may need to repeat the experiment or take more measurements to reduce the impact of the errors. If necessary, consult with colleagues or experts for further assistance.
