Systematic Error Problem

  1. Oct 9, 2011 #1
    I have a coworker who is very set in his ways; he has been causing problems in the department in many ways and thinks everything he does is correct. I'm currently in a debate with him over error analysis (this includes a lot of small issues and some larger ones).


    Firstly, he continues to place what I call intrinsic uncertainties (those inherent to a given measuring tool such as a meter stick, micrometer, caliper, etc.) under the category of systematic errors.

    The intrinsic uncertainty of a measuring tool can be taken to be on the order of the least count. These uncertainties are not solely systematic; I believe they actually obey random statistics more often than not.

    When a manufacturer states that the intrinsic uncertainty of their digital caliper is 0.002 cm, this means that any measurement made correctly is within that value of the true value. The systematic error is somewhere between 0 and 0.002 cm, and the rest of the spread is random.
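
    To illustrate what I mean, here is a toy sketch (made-up numbers, not the caliper's actual behaviour): give the tool a fixed bias of 0.0008 cm and a random read-to-read scatter of 0.0003 cm; repeated readings then scatter around the bias, and every reading stays inside the quoted 0.002 cm.

```python
import numpy as np

# Toy model with hypothetical numbers: a caliper whose quoted tolerance is
# 0.002 cm. Suppose its true (unknown) bias is 0.0008 cm and the read-to-read
# scatter (alignment, pressure, etc.) has a standard deviation of 0.0003 cm.
rng = np.random.default_rng(0)
true_length = 2.5000          # cm
bias = 0.0008                 # cm, fixed for this one tool (systematic part)
sigma = 0.0003                # cm, random part

readings = true_length + bias + rng.normal(0.0, sigma, size=1000)

print("mean error   :", readings.mean() - true_length)        # ~ the bias
print("std deviation:", readings.std(ddof=1))                  # ~ the random scatter
print("max |error|  :", np.abs(readings - true_length).max())  # stays under 0.002 cm
```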

    Secondly, other contributions (how the user aligns the device, how much pressure is applied, temperature variations that change elongation) have a random component that will most likely dwarf the systematic component inherent in the tool.
    ----

    The reason this bothers me is that, because of the way he has written the lab manual, my students are all calling the least-count errors systematic.

    Systematic errors are very hard to detect; examples would be not zeroing a balance, parallax, etc.


    Secondly, I learned that true systematic errors propagate differently (not in quadrature).


    So my question is: shouldn't the inherent or intrinsic error of a measuring tool such as a meter stick, stopwatch, or digital balance be treated as random and not defined as a systematic error?

    I'm not sure it should be defined as either.
     
  3. Oct 9, 2011 #2

    xts

    I am a grumpy old man who is always right, too!

    For many tools your approach (random only) is OK, but for many others the error may be systematic. Many apparatuses lose their calibration with time (so they should be recalibrated) or with environmental conditions.

    Environmental influences are not statistical noise! Your tool was probably calibrated at 20°C. But if your lab air conditioning is set to 18°C, you will have a systematic error on a whole series of measurements. The same goes if the tool is used improperly (e.g. too much pressure): the same student tends to apply the same force while using a tool, so all of his measurements will be biased the same way.
     
  4. Oct 9, 2011 #3

    Agreed, and I did not say that all uncertainties introduce a random error; air resistance, for instance, can be systematic, and so can the temperature variations you speak of.

    But if the manufacturer states a 0.002 cm uncertainty for a digital caliper at 20°C, that is not solely a systematic error, and it is not the situation you're speaking of. We are not changing temperature, and even if we were, I have no valid reason to assume the systematic error is 0.002 cm, nor can I assume it is purely systematic. I would have to do some other experiment, or ask the manufacturer, to find out what it is.

    We're talking about the context of an introductory physics lab: the uncertainties due to the measuring tool, on the order of the least count, should not be defined as systematic. If you're talking about an additional (systematic) error due to temperature variation, that has nothing directly to do with the inherent uncertainty quoted by the manufacturer (it depends on the sample being measured and the material of the tool).

    These uncertainties might have systematic parts, but they are not completely systematic. That is where my issue lies.
     
    Last edited: Oct 9, 2011
  5. Oct 9, 2011 #4

    AlephZero

    Science Advisor
    Homework Helper

    It depends whether you are talking about repeated measurements with different tools of the same type (e.g. different micrometers all made by the same manufacturer), or several measurements using the same tool each time.

    If you use different tools, you could reasonably expect the errors caused by the tool to be random. If you always use the same tool, the errors will be systematic if the tool is incorrectly calibrated or has been damaged in some way.
     
  6. Oct 9, 2011 #5

    I do not agree that it will be purely systematic. For instance, let's say that I manufacture digital calipers.

    If the tool hasn't been zeroed, that is a definite source of systematic error. But that has nothing to do with the intrinsic uncertainty quoted by the manufacturer.

    Let's say I quote that any measurement properly made with the caliper can be taken to be good to within 0.002 cm, even though the precision of the tool is 0.001 cm.

    What does this mean? Maybe I was too lazy, or it costs too much, to actually narrow down the true systematic error by calibrating against a standard, so I say the uncertainty is 0.002 cm. It may be that the true uncertainty is well below that limit!


    Additionally, if these are 100% systematic errors, then every physics laboratory manual I have ever come across that propagates uncertainties in quadrature is technically wrong, since systematic errors should not be propagated that way. See any error analysis book.
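
    As a concrete illustration (my own sketch; 0.002 cm is just the quoted tolerance used as an example), compare how the uncertainty on the sum of two caliper readings comes out if the tool errors are treated as independent and random versus fully systematic (the same bias affecting both readings):

```python
import math

# Two lengths measured with the same caliper; each reading carries an
# uncertainty of 0.002 cm (the quoted tolerance, used here as an example).
dx = 0.002   # cm, uncertainty on the first reading
dy = 0.002   # cm, uncertainty on the second reading

# Independent random errors: combine in quadrature.
dz_random = math.sqrt(dx**2 + dy**2)   # about 0.0028 cm

# Fully systematic (the same bias shifts both readings): add linearly.
dz_systematic = dx + dy                # 0.004 cm

print(f"quadrature sum: {dz_random:.4f} cm")
print(f"linear sum    : {dz_systematic:.4f} cm")
```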


    Here is my second argument: I quote the same 0.002 cm uncertainty for every caliper that I produce, but they are all made slightly differently; there is inevitable randomness in their construction and calibration. So how could they all have the exact same 0.002 cm systematic error?


    My point is that 0.002 cm is a combination of systematic and random error; all we know is that the systematic error is less than 0.002 cm.
     
    Last edited: Oct 9, 2011
  7. Oct 9, 2011 #6

    xts

    You may always settle the dispute experimentally: take a steel cube, measure its length with a high-precision tool, then measure it with all your calipers (already damaged by first-year students) and compare the results.
    Another issue is that many calipers read low if squeezed too hard, and first-year students very often use excessive force, so all measurements made by the same student, even with identical tools, may be biased.
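
    A minimal sketch of how that check could be analysed (my own illustrative numbers): the mean deviation from the reference block estimates a shared bias, the scatter estimates the random part.

```python
import statistics

# Illustrative numbers only: readings of a reference block of known length,
# one reading per caliper.
reference = 2.0000                                    # cm, from the high-precision tool
readings = [2.0013, 1.9998, 2.0007, 2.0011, 2.0002]   # cm, from the lab calipers

deviations = [r - reference for r in readings]
bias = statistics.mean(deviations)       # estimate of a shared systematic offset
scatter = statistics.stdev(deviations)   # estimate of the random spread

print(f"estimated bias   : {bias:+.4f} cm")
print(f"estimated scatter: {scatter:.4f} cm")
```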
     
  8. Oct 9, 2011 #7
    Yes, you make good points, but this isn't getting to the question at hand.

    I'm stating that the manufacturer's stated uncertainty or tolerance, or even a meter stick's intrinsic uncertainty given (approximately) by the least count, does not imply a wholly systematic error. A systematic error means that the value is off by that amount. I'm stating that the true systematic error (if any) lies within the value quoted by the manufacturer.


    I could make a meter stick for sale and quote that any measurement you make with it is good to 2 cm. Does that mean every measurement will be wrong by that amount? No! It could mean I was too lazy to check the accuracy and don't want to get sued; the stick might be much more accurate than 2 cm.


    It seems completely logical to me. If it were the true systematic error (which is incredibly hard to determine), then why not just correct for it? It is not the systematic error but a bound on it.
     
  9. Oct 9, 2011 #8

    xts

    Every manufacturer states only statistical errors. If he discovers that the tool is biased, he just shifts the scale or compensates for it somehow during calibration.

    A systematic error is something occurring against the will of the manufacturer. It may be some kind of ageing, decalibration, something bent or loosened due to improper use, etc. For many apparatuses such issues are recognised by the manufacturer, who recommends recalibrating the tool once a year or after 1000 uses, etc.

    Of course, a properly manufactured caliper operated without excessive force keeps its initial calibration for a hundred years. I have a Swiss-made (Oerlikon) caliper manufactured in 1928, and it is still more accurate than the cheap digital calipers you can buy in DIY shops. I believe it does not introduce any systematic error...
     
  10. Oct 9, 2011 #9

    The manufacturer uses statistics and calibrates against a standard, and they state the tolerance or uncertainty conservatively (over-reporting it).

    How else do you think every single caliper (of that type) made by the manufacturer has the same quoted uncertainty? It's because the true uncertainty is less than that value. It would be impossible for them all to have exactly the same systematic error; it would be impossible for any two tools to have EXACTLY the same systematic error.

    Once again, the value they quote is a bound; they are making a bet that they will not lose. They are claiming that any measurement is good to within that value.

    We are not talking about old calipers; all these other pieces of information have no bearing on the point I'm trying to make. The out-of-the-box uncertainty or tolerance is a bound, not the true systematic error. The true systematic error is somewhere within that bound.
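
    As a toy illustration of that point (my own sketch, made-up numbers): give every caliper leaving the factory its own fixed bias, all different, all smaller than the quoted 0.002 cm. One conservative spec then covers every unit even though no two units share the same systematic error.

```python
import numpy as np

# Hypothetical illustration: each caliper gets its own small, fixed bias
# (different for every unit), and every bias falls inside the single quoted
# tolerance of 0.002 cm.
rng = np.random.default_rng(1)
n_calipers = 10_000
biases = rng.uniform(-0.0015, 0.0015, size=n_calipers)   # cm, one bias per unit

print("largest |bias|  :", np.abs(biases).max())   # below 0.002 cm for every unit
print("spread of biases:", biases.std())           # the biases themselves differ
```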
     
  11. Oct 9, 2011 #10

    AlephZero

    Science Advisor
    Homework Helper

    If I am going to use a tool to measure something important, actually I don't care what the manufacturer says about accuracy. I want to see a calibration certificate for the specific measuring tool I am going to use. And I won't accept a certificate that's 20 years old either - that just shows that nobody has bothered to KEEP the tool properly calibrated!
     
  12. Oct 9, 2011 #11
    What are you talking about? This is a side issue that has nothing to do with the concept here.

    I'm talking about the classification of systematic error; it is irrelevant how old your tool is. Obviously, if it hasn't been calibrated correctly, that is a different issue altogether. My point is that the accuracy or tolerance the manufacturer quotes is conservative and larger than the true systematic error of the tool (in its properly working state).
     
  13. Oct 9, 2011 #12
    I have found something to back my claim, although Wikipedia is not necessarily always a legitimate reference.

    "The smallest value that can be measured by the measuring instrument is called its least count. All the readings or measured values are good only up to this value. The least count error is the error associated with the resolution of the instrument.

    For example, a vernier callipers has the least count as 0.01 cm; a spherometer may have a least count of 0.001 cm. Least count error belongs to the category of random errors but within a limited size; it occurs with both systematic and random errors."


    http://en.wikipedia.org/wiki/Least_count
     
  14. Oct 10, 2011 #13

    AlephZero

    Science Advisor
    Homework Helper

    Sheesh, even that wiki page itself says it "has multiple issues"!!!!!

    The author of that wiki page doesn't seem to know the difference between accuracy and resolution.

    Least count is about the resolution of an instrument. It says NOTHING about its accuracy. Some practical measuring instruments have errors that are orders of magnitude different from the least count, either bigger or smaller.

    I'm getting the message that you don't really want to learn anything about practical metrology here. You just want somebody to say that you are right and the other guy in your lab is wrong.
     
  15. Oct 10, 2011 #14
    I just want someone to get to the point and not bring in things about accuracy and precision. I know the difference. What that wiki page writes is not wrong. It states:

    "All the readings or measured values are good only up to this value" That doesn't necessarily mean that the measured values are good to the least count, it means that they can be no better!. Yes there are several devices that have uncertainties that are either 2, 3 or 4 times the least count depending on the circumstances and situation.

    The question I'm looking to answer has been the same all along; I don't need a lecture on measurement, as I'm well versed in error analysis. I've asked many of my past professors, as well as some at the institution where I teach, and they have sided with me.

    Is the intrinsic uncertainty (which is most often on the order of the least count) a systematic error? The answer is no; it is a bound that contains a systematic error.

    If it were a true systematic error, we could just correct for it.
    If it were a true systematic error, then every instrument of that type would have to be made exactly the same, down to the atomic level.
    If it were a true systematic error, it should not be propagated in quadrature.
     
  16. Oct 10, 2011 #15

    xts

    I see you are not seeking an explanation, but a yes/no answer.
    So the answer I found for you in my good old textbook on experimental methodology is:
    "It is a systematic error and should not be combined in quadrature, unless in a particular case we may justify its statistical nature."

    K. Kozłowski, Metodologia doświadczalna, PWN, Warsaw, 1976 (translation to English mine).
     
  17. Oct 10, 2011 #16
    That is vague: "what" is a systematic error? I would like to see the entire paragraph.



    Also, if what you're saying is true, then the entire undergraduate physics community is wrong, because every institution propagates these in quadrature, which means they are not entirely systematic but have a random component.
     
    Last edited: Oct 10, 2011
  18. Oct 10, 2011 #17
    Could you please explain what the bolded term means? I see you mention it in several places in your OP, but I'm afraid I am not familiar with that terminology.
     
  19. Oct 10, 2011 #18
    The least count is the precision of the tool; it's the smallest difference that the tool can discern.


    For instance, we might have an electronic balance with a precision of 0.01 g; that means it can tell the difference between (resolve) a 1.15 g mass and a 1.16 g mass. However, the manufacturer may state that the accuracy of the balance is 0.03 g. This means we don't know for sure whether our 1.15 g mass is really 1.15 g (it could be off by up to 0.03 g), but we can tell it is different from a 1.16 g mass.

    That is the difference between accuracy and precision. This has very much to do with the latest CERN neutrino experiment: they have done a good job of minimizing their random error (apparently), so they have very good precision, but there might be some systematic error shifting their results.

    The intrinsic uncertainty of the instrument is almost always on the order of the least count: sometimes half, sometimes double, and sometimes it is up to the individual measurer to estimate.
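
    Here is a toy model of that balance (my own sketch; the 0.02 g bias is just an example sitting inside the 0.03 g accuracy spec), showing how the 0.01 g resolution and the accuracy limit act differently on a reading:

```python
def balance_reading(true_mass_g, bias_g=0.02, resolution_g=0.01):
    """Toy digital balance: a fixed bias (unknown to the user, within the
    0.03 g accuracy spec) plus rounding to the 0.01 g least count."""
    counts = round((true_mass_g + bias_g) / resolution_g)
    return counts * resolution_g

print(f"{balance_reading(1.15):.2f} g")   # 1.17 g: shifted by the bias (accuracy limit)
print(f"{balance_reading(1.16):.2f} g")   # 1.18 g: still resolved from 1.15 g (precision)
```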


    Here is one of many references that support my argument.

    http://www.owlnet.rice.edu/~labgroup/pdf/Error_analysis.htm

    Notice how the least count error (or intrinsic uncertainty on the order of the least count) is discussed under RANDOM errors, not systematic!

    "you often can estimate the error by taking account of the least count or smallest division of the measuring device"

    Or how about this one:

    http://www.mjburns.net/SPH3UW/SPH3UW Unit 1.2.pdf

    Look at the bottom under random errors.


    This is what every university does! And it makes sense. The intrinsic uncertainties CONTAIN systematic errors, but they are not purely systematic. They are a bound on the systematic error.


    Can someone else please respond with some insight?
     
  20. Oct 10, 2011 #19

    xts

    My translation was a bit shortened. But if you want the full paragraph, here we go:

    An error introduced by the measurement apparatus, or by some self-contained component of it, should be considered systematic, unless in a particular case it may be satisfactorily justified that the error has a statistical nature: it is caused by some known physical mechanism adding a random component (of mean zero and some standard deviation) to the result. In general it should be assumed that errors caused by various parts of the measurement system are correlated, and thus they must be taken as a linear sum. In those cases where the physical mechanisms leading to the individual errors are known and may be shown to be independent (having no common cause), they may be combined in quadrature.
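
    In symbols, the two rules in that paragraph are (my shorthand, for two error contributions δ1 and δ2):

```latex
% correlated contributions (common cause): linear sum
\delta_{\text{total}} = \delta_1 + \delta_2
% independent contributions: combine in quadrature
\delta_{\text{total}} = \sqrt{\delta_1^{2} + \delta_2^{2}}
```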
     
  21. Oct 10, 2011 #20

    But how do you know that this is referring to the intrinsic error (or the approximate error given by the least count) quoted by the manufacturer?

    Every device has a systematic error, that is true, and every error introduced by a device will be systematic. But that does not mean that the uncertainty implied by the smallest division on a meter stick, or the tolerance given by a manufacturer, is the actual systematic error. It is impossible for them to know what it is; they give a bound on it. They say that every measurement made with this device is no worse than 1%, or within ±0.002 cm, and that bound is set conservatively, using statistics.

    Your reading implies that every digital caliper made by xxxxxx has a systematic error of 0.002 cm if that is what the manufacturer quotes as the intrinsic uncertainty. Each of them will be made slightly differently; it is not possible for them all to have the same systematic error.
     