
Error propagation

  1. Sep 28, 2004 #1
    Always the easy things we forget...
    I know how errors propogate through multiplication or division when every term has an error, but how do I propagate errors in equations when only one term has an uncertainty? I want to say just multiply and divide the uncertainty value by the constants, i.e plug my value in the equation, then plug the uncertainty. This is the same as if I just found the % uncertainty, and multiplied the final product by that, correct? Is this the right way to go about this? And what if two (or more) terms have uncertainties? Would I find the uncertainty between those terms and then apply that % to the final number? Thanks.
    Last edited: Sep 28, 2004
  3. Sep 28, 2004 #2

    If you mean something like y = ax + b, where a and b are exactly defined constants and x is a measurement, x = m +/- e, then the largest possible value is a(m+e) + b = am + ae + b = (am+b) + ae, and the smallest possible is a(m-e) + b = am - ae + b = (am+b) - ae.

    That is: (am+b) +/- ae. Any added constant you can ignore: it shifts the value but not the absolute error. A constant multiplying x multiplies the absolute error by the same factor, but leaves the percentage error unchanged, since ae/(am) = e/m.
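The rule above is short enough to check numerically. This is just an illustrative sketch (the function name and values are my own, not from the thread): for y = ax + b with exact constants a and b, the propagated uncertainty is |a|·e.

```python
def linear_propagate(a, b, m, e):
    """Return (value, uncertainty) for y = a*x + b, with x = m +/- e
    and a, b exact constants."""
    value = a * m + b          # central value
    uncertainty = abs(a) * e   # b drops out; a scales the absolute error
    return value, uncertainty

# Example: y = 2x + 5 with x = 10 +/- 0.3
print(linear_propagate(2.0, 5.0, 10.0, 0.3))  # -> (25.0, 0.6)
```

Note that the percentage error of 2x (0.6/20 = 3%) equals that of x (0.3/10 = 3%), while the percentage error of the full result (0.6/25) is smaller because of the added constant.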

    With more than one "uncertain" number you can get the exact error by calculating the maximum and minimum possible values. A "rule of thumb" (a good approximation, but not exact) is: when you add or subtract measurements, the absolute errors add; when you multiply or divide measurements, the percentage errors add.