
Adding accumulated error

  1. Aug 1, 2008 #1
    I've assumed I could simply add circuit errors, at least to a good approximation. For instance, two 5%, 100 Ohm resistors in series would have a combined value of 200 +/- 10 Ohms. But is that good enough for large errors from many contributing factors?

    There are cases where you might want to know the expected error from many contributions in series, in parallel, and in error products, such as a current source acting through a resistor.

    Say we can model the distribution of component values as a Gaussian probability distribution about a nominal value. A 5% quoted spec might mean that 95%, or two standard deviations worth of parts, will fall within +/- 5% of nominal.

    Would the expected distribution of these two series resistors simply be 5% at two standard deviations, or does the error combine differently?

    Noise, as you might recall, combines as the square root of the sum of squares rather than directly, so I wonder how error combines.

    Edit: I don't mean to nit-pick, but I know a Gaussian distribution itself is somewhat non-physical, since it implies that some non-vanishing fraction of resistors would have negative resistance, but I think a Gaussian should be sufficient for the usual error values encountered.
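    A quick Monte Carlo sketch of the two-resistor case (in Python, assuming the 5% spec really does mean two standard deviations) shows what the question is getting at:

[code]
import numpy as np

# Assumption: the quoted 5% tolerance covers two standard deviations,
# so sigma = 2.5 Ohms on each nominally 100 Ohm resistor.
rng = np.random.default_rng(0)
r1 = rng.normal(100.0, 2.5, 1_000_000)
r2 = rng.normal(100.0, 2.5, 1_000_000)
total = r1 + r2

# Express the spread of the series value the same way: 2 sigma over the mean.
print("2-sigma spread of the sum: %.2f%%" % (100 * 2 * total.std() / total.mean()))
# Prints about 3.54%, not 5% -- the individual spreads combine in quadrature.
[/code]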
     
    Last edited: Aug 1, 2008
  3. Aug 1, 2008 #2

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    E(X + Y) = E(X) + E(Y)
    V(X + Y) = V(X) + V(Y)

    So if you are modelling each resistor as having a random resistance with a known mean and variance, then the mean and variance of their sum is the sum of the individual means and variances.

    And if you are assuming both resistances are normally distributed, then their sum is also normally distributed.
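    A minimal worked example of those two rules in Python, treating the 5% spec from the first post as two standard deviations (an assumption, not part of the spec):

[code]
import math

# 5% tolerance taken as 2 sigma, so sigma = 2.5 Ohms on each 100 Ohm part.
mean1, var1 = 100.0, 2.5 ** 2
mean2, var2 = 100.0, 2.5 ** 2

mean_sum = mean1 + mean2        # E(X + Y) = E(X) + E(Y)  -> 200 Ohms
var_sum = var1 + var2           # V(X + Y) = V(X) + V(Y)  -> 12.5 Ohms^2
sigma_sum = math.sqrt(var_sum)  # about 3.54 Ohms

print("2-sigma tolerance of the sum: %.2f%%" % (100 * 2 * sigma_sum / mean_sum))
# About 3.54%, i.e. 5% / sqrt(2).
[/code]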
     
  4. Aug 1, 2008 #3

    NoTime

    Science Advisor
    Homework Helper

    I don't know if different manufacturing techniques have changed things,
    or if it was just limited sample size, since I didn't do a lot of component-level testing.
    But I have noticed that, for a particular batch run of resistors, the value error was fairly consistent.
    That would make the distribution non-random.
     
  5. Aug 1, 2008 #4

    dlgoff

    Science Advisor
    Gold Member

    The thing is, with say a 5% resistor, you'll probably never find one that is right on. My understanding is that resistors get separated into tolerance groups during manufacturing. So, for example, a 1000 ohm 5% resistor will have a resistance somewhere in the range of (950 to 990) or (1010 to 1050), since the 1% and better resistors have been removed from the lot.
     
  6. Aug 1, 2008 #5

    dlgoff

    Science Advisor
    Gold Member

    You beat me to the punch, NoTime.
     
  7. Aug 1, 2008 #6

    NoTime

    Science Advisor
    Homework Helper

    :smile: Get lucky sometimes.
    I don't know if they select them or if a batch just comes out with a consistent error.
    My experience was that if the first one was, say, 975 ohms, the rest would be the same value within a few ohms in a particular lot.
     
  8. Aug 1, 2008 #7

    dlgoff

    Science Advisor
    Gold Member

  9. Aug 1, 2008 #8
    Some time ago I took a 101 physics lab that stressed error analysis, in this case, measurement error. On the other hand, I took a 101 physics lab that stressed measurement error some time ago. :wink:

    --something involving dx/x, relative error, and logarithms, I think. There's a raft of rules for handling various algebraic combinations, for instance E(sin(x) ln(y)) and so forth, aren't there?
     
  10. Aug 1, 2008 #9
    NoTime, dlgoff-

    I've noticed that too, in taking an instrument down a reel of components, be they resistors or zeners. However, another possibility, keeping in mind that the specified tolerance is over temperature, is that the grouping (high or low) may also account for a nonlinear temperature coefficient, or that ambient is not in the middle of the temperature range, or even expected aging drift--I dunno.
     
  11. Aug 2, 2008 #10
    As these problems go, it's often a matter of finding the right keywords to search.
    The subject is known as Error Analysis, or Propagation of Error.

    Hurkyl-

    I found that your formulas are not necessarily correct, but apply to correlated error rather than independent error. Check out equations 12 and 13:

    http://teacher.nsrl.rochester.edu/Phy_labs/AppendixB/AppendixB.html
     
  12. Aug 2, 2008 #11

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Phrak- you misread me. The formula I stated is for the variance of the sum of two (independent) random variables. Equation 13 says the same thing as I did, but in terms of standard deviations. (at least... I'm assuming the delta-variables are supposed to be proportional to standard deviation)
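    In symbols, the two statements are the same thing for independent X and Y, writing [tex]\sigma[/tex] for standard deviation:

    [tex]\sigma_{X+Y} = \sqrt{V(X+Y)} = \sqrt{V(X) + V(Y)} = \sqrt{\sigma_X^2 + \sigma_Y^2}[/tex]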
     
  13. Aug 2, 2008 #12
    Sorry about that, Hurkyl. Now you've got me wondering. I've been treating error as standard deviation. What unit label attaches to variance? Resistance? I guess I have some reading to do.

    Wikipedia also discusses this stuff under Propagation of Uncertainties.
    http://en.wikipedia.org/wiki/Error_propagation
     
  14. Aug 3, 2008 #13

    dlgoff

    Science Advisor
    Gold Member

    When I was going to school, there was a required course in measurement theory. I still have the text. I don't know if one can still get a copy; there are probably a lot of other good ones out there.

    Experimentation: An Introduction to Measurement Theory and Experiment Design by D.C. Baird
     
  15. Aug 3, 2008 #14
    Measurement Theory could be another good keyword.

    I think I've gleaned what is required out of http://teacher.nsrl.rochester.edu/Phy_labs/AppendixB/AppendixB.html, though it's never made clear exactly what is meant by "error".

    For non-correlated errors the governing equation is this:

    [tex] \Delta Z^2 = \sum_{i} \left( \frac{\partial F} {\partial A_i} \right)^2 \Delta A_i^2[/tex],

    where [tex]Z = F(A_1, A_2, \ldots, A_n)[/tex], and [tex]\Delta Z[/tex] is the error in [tex]Z[/tex].

    The Deltas are the absolute errors. For instance, a 100 ohm, 5% tolerance resistor will have a Delta of 5 ohms.
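    As a check on the arithmetic below, here is a small Python helper (a sketch of my own, not from the Rochester appendix) that evaluates the sum of squared partials numerically:

[code]
import numpy as np

def propagate_error(f, nominals, abs_errors, h=1e-6):
    """Uncorrelated absolute error of F(A_1, ..., A_n), using central
    finite differences to estimate each partial derivative."""
    nominals = np.asarray(nominals, dtype=float)
    abs_errors = np.asarray(abs_errors, dtype=float)
    var = 0.0
    for i in range(len(nominals)):
        step = h * abs(nominals[i]) if nominals[i] != 0 else h
        hi, lo = nominals.copy(), nominals.copy()
        hi[i] += step
        lo[i] -= step
        dF_dAi = (f(*hi) - f(*lo)) / (2 * step)
        var += (dF_dAi * abs_errors[i]) ** 2
    return np.sqrt(var)

# Two 100 ohm, 5% resistors in series: Delta = 5 ohms each.
print(propagate_error(lambda r1, r2: r1 + r2, [100.0, 100.0], [5.0, 5.0]))  # ~7.07 ohms
[/code]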

    For two resistors in series, [tex]R_T=R_1 + R_2[/tex], then

    [tex]\Delta R_T = \sqrt{\Delta R_1^2 + \Delta R_2^2} [/tex]

    Two 100 ohm, 10% resistors in series give a 200 ohm, 7.07% resistor, when they are not from the same batch. If they are from the same batch, we could assume 100% correlation, and the tolerance is once again the presumptive 10%.
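    A quick simulation of both cases (treating the 10% figure as a one-sigma spread purely for illustration) bears this out:

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
sigma = 10.0  # 10% of 100 ohms, used here as a 1-sigma spread for illustration

# Independent parts: deviations add in quadrature -> ~14.1 ohms on 200 ohms, ~7.07%
r1 = rng.normal(100.0, sigma, n)
r2 = rng.normal(100.0, sigma, n)
print("independent: %.2f%%" % (100 * np.std(r1 + r2) / 200))

# Same-batch, fully correlated parts: both share one deviation -> 10% again
common = rng.normal(0.0, sigma, n)
print("correlated:  %.2f%%" % (100 * np.std((100 + common) + (100 + common)) / 200))
[/code]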

    For a current source operating into a resistance, [tex]E=IR[/tex]

    [tex]\delta E = \sqrt{\delta I\,^2+\delta R\, ^2}[/tex]

    where I've used the lower-case delta to indicate relative error (percent tolerance); [tex]\delta A = \Delta A / A [/tex].

    A 5% current source through a 5% resistor gives a 7.07% error in the voltage. Likewise, a 5% current source into a 5% cap yields a 7.07% error in ramp time.
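    The same quadrature rule can be checked numerically for the product, using a hypothetical 1 mA source into a 1 kOhm resistor and treating each 5% figure as a one-sigma relative spread:

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

I = 1e-3 * (1.0 + rng.normal(0.0, 0.05, n))  # 1 mA source, 5% relative spread
R = 1e3 * (1.0 + rng.normal(0.0, 0.05, n))   # 1 kOhm resistor, 5% relative spread
E = I * R

print("relative spread of E: %.2f%%" % (100 * np.std(E) / np.mean(E)))
# About 7.07%, i.e. sqrt(0.05^2 + 0.05^2), as the formula above predicts.
[/code]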

    Of more practical use is the resistor divider,

    [tex]V/V_{CC} = \frac{R_2}{R_1+R_2}[/tex]

    This gives

    [tex] \Delta V / V_{CC} = \sqrt{2}\; \delta R \, \frac {R_1 R_2} { (R_1 + R_2)^2 }[/tex]

    where I've assumed both resistors have the same relative error.

    It's easy to see why engineers who don't take tolerance into much consideration manage to get away with it (and apparently I hadn't either!). For two equal 1% resistors out of the same batch, the uncorrelated error between any two resistors may be only 0.5%, and the correlated error factors out for all practical purposes:

    [tex]\delta V = \frac{1}{2}\sqrt{2}\delta R[/tex]

    The relative error is only 0.35%.
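    A simulation sketch of the same-batch divider makes the cancellation explicit; the 1% lot error and 0.5% part-to-part spread here are assumed values for illustration:

[code]
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
delta_lot = 0.010    # 1% lot error, shared by both resistors (correlated)
delta_part = 0.005   # 0.5% part-to-part spread within the lot (uncorrelated)

lot = rng.normal(0.0, delta_lot, n)
r1 = 1e3 * (1.0 + lot + rng.normal(0.0, delta_part, n))
r2 = 1e3 * (1.0 + lot + rng.normal(0.0, delta_part, n))

ratio = r2 / (r1 + r2)  # V / Vcc for equal nominal resistors
print("relative error of V/Vcc: %.2f%%" % (100 * np.std(ratio) / np.mean(ratio)))
# About 0.35% = (1/2) * sqrt(2) * 0.5%: the shared lot error drops out of the ratio.
[/code]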
     