Is Intrinsic Uncertainty in Measurement Tools Systematic or Random?

In summary, the conversation includes a debate over error analysis and whether intrinsic uncertainties in measuring tools should be classified as systematic or random errors. The person speaking argues that these uncertainties should be considered random in most cases, unless the tool is consistently used incorrectly or has been damaged. They also question the validity of manufacturers' stated uncertainties and point out that if these uncertainties are truly systematic, they should not be propagated in error analysis using quadrature. The other person argues that systematic errors can occur if tools are not properly calibrated or are affected by environmental conditions. They also mention the possibility of manufacturers giving a wide uncertainty range to cover potential systematic errors.
  • #1
azaharak
I have a coworker who is very old and set in his ways; he has been causing problems in the department in many ways and thinks everything he does is correct. I'm currently in a debate with him over error analysis (this includes a lot of small issues and some larger ones).


Firstly, he continues to place what I call intrinsic uncertainties, inherent in a given measuring tool such as a meter stick, micrometer, caliper, etc., under the category of systematic errors.

The intrinsic uncertainties in a measuring tool can be taken to be on the order of the least count. They are not solely systematic; I believe they actually obey random statistics more often than not.

When a manufacturer states that the intrinsic uncertainty in their digital caliper is 0.002 cm, this means that any measurement made (correctly) is within that value. In fact the systematic error is somewhere between 0 and 0.002 cm, and the distribution within that range is random.

Secondly, other contributions (how the user aligns the device, how much pressure is applied, temperature variations that change the elongation) have a random component that will most likely dwarf the systematic component inherent in the tool.
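A minimal numerical sketch of that picture (the offset and noise distributions below are invented for illustration; only the 0.002 cm bound comes from the example above):

[code]
import numpy as np

rng = np.random.default_rng(0)
TOL = 0.002           # manufacturer's quoted bound, cm (from the example above)
true_length = 1.2345  # cm, an arbitrary "true" value

n_tools, n_meas = 50, 20
# Each tool gets a fixed offset (its own small systematic error), somewhere inside the bound.
tool_offsets = rng.uniform(-0.5 * TOL, 0.5 * TOL, size=n_tools)
# Each individual measurement adds random scatter (alignment, pressure, reading).
readings = (true_length
            + tool_offsets[:, None]
            + rng.normal(0.0, 0.3 * TOL, size=(n_tools, n_meas)))

errors = readings - true_length
print("largest error seen:        ", np.abs(errors).max())              # compare with TOL
print("scatter within one tool:   ", errors[0].std(ddof=1))             # the random part
print("scatter of per-tool means: ", errors.mean(axis=1).std(ddof=1))   # the systematic part
[/code]

The quoted 0.002 cm then acts as a bound that both contributions have to fit under, which is the distinction being argued here.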
----

The reason this bothers me is that, because of the way he has written the lab manual, my students are all calling the least-count errors systematic.

Systematic errors are very hard to detect; examples would be not zeroing a balance, parallax, etc.


Secondly, I learned that true systematic errors propagate slightly differently (not in quadrature).


So my question is: shouldn't the inherent or intrinsic error from a measuring tool such as a meter stick, stopwatch, or digital balance be treated as random and not defined as systematic error?

I'm not sure it should be defined as either.
 
  • #2
I am a grumpy old man always being right too!

For many tools your approach (only random) is OK, but for many others the error may be systematic. Many apparatuses lose their calibration with time (so they should be recalibrated) or with environmental conditions.

The influence of environmental conditions is not statistical noise! Your tool was probably calibrated at 20°C. But if your lab air conditioning is set to 18°C, you will have a systematic error on a whole series of measurements. The same applies if the tool is used improperly (e.g. too much pressure): the same student tends to apply the same force while using a tool, so all his measurements will be biased the same way.
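As a rough worked number (taking α ≈ 1.2×10⁻⁵ K⁻¹ as a typical handbook value for steel, which is my own assumption here): a 2 °C offset on a 100 mm steel part shifts it by

[tex]
\Delta L = \alpha L \Delta T \approx 1.2\times10^{-5}\,\mathrm{K^{-1}} \times 100\,\mathrm{mm} \times 2\,\mathrm{K} \approx 2.4\,\mathrm{\mu m},
[/tex]

and that shift is common to every reading taken under those conditions, i.e. it is a bias, not scatter.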
 
  • #3
xts said:
I am a grumpy old man always being right too!

The influence of environmental conditions is not statistical noise! Your tool was probably calibrated at 20°C. But if your lab air conditioning is set to 18°C, you will have a systematic error on a whole series of measurements.
Agreed; nor did I say that all uncertainties introduce a random error. Air resistance, for instance, can be systematic, and so can the temperature variations you speak of.

But if the manufacturer states a 0.002 cm uncertainty in a digital caliper at 20°C, this is not solely systematic error, and it is not the situation you're speaking of. We are not changing temperature, and even if we were, I have no valid reason to assume the systematic error is 0.002 cm, nor can I assume it is purely systematic. I would have to do some other experiment, or ask the manufacturer, to find out what it is.

We're talking about the context of an introductory physics lab: the uncertainties due to the measuring tool, which are on the order of the least count, should not be defined as systematic. You're talking about an additional (systematic) error due to temperature variation, which has nothing directly to do with the inherent uncertainty quoted by the manufacturer (it depends on the sample being measured and the material of the tool).

They might have systematic parts that make them up, but they are not completely systematic. That is where my issue lies.
 
Last edited:
  • #4
azaharak said:
Firstly, he continues to place what I call intrinsic uncertainties, inherent in a given measuring tool such as a meter stick, micrometer, caliper, etc., under the category of systematic errors.

The intrinsic uncertainties in a measuring tool can be taken to be on the order of the least count. They are not solely systematic; I believe they actually obey random statistics more often than not.

It depends whether you are talking about repeating measurements with different tools of the same type (e.g. different micrometers all made by the same manufacturer), or several measurements using the same tool each time.

If you use different tools, you could reasonably expect the errors caused by the tool to be random. If you always use the same tool, the errors will be systematic if the tool is incorrectly calibrated or has been damaged in some way.
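A quick illustrative sketch of that distinction (all the numbers are made up; a fixed miscalibration is modelled as a constant offset):

[code]
import numpy as np

rng = np.random.default_rng(1)
true_value, noise, offset = 10.000, 0.005, 0.008   # mm, invented values

# Same (miscalibrated) tool used many times: averaging removes the noise, not the offset.
same_tool = true_value + offset + rng.normal(0.0, noise, 1000)
# A different, randomly offset tool each time: the per-tool offsets average out as well.
many_tools = true_value + rng.uniform(-offset, offset, 1000) + rng.normal(0.0, noise, 1000)

print(same_tool.mean() - true_value)    # ~0.008: the bias survives averaging
print(many_tools.mean() - true_value)   # ~0: tool-to-tool offsets behave like random error
[/code]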
 
  • #5
AlephZero said:
If you always use the same tool, the errors will be systematic if the tool is incorrectly calibrated or has been damaged in some way.
I do not agree that it will be purely systematic. For instance, let's say that I manufacture digital calipers.

If the tool hasn't been zeroed, that is a definite sign of a systematic error. But that has nothing to do with the intrinsic uncertainty quoted by the manufacturer.

Let's say I quote that any measurement properly made with the caliper can be taken to be good to within 0.002 cm, even though the precision of the tool is 0.001 cm.

What does this mean? Maybe I was too lazy, or it costs too much, to actually narrow down the true systematic error by calibrating against a standard, so I say the uncertainty is 0.002 cm. It may be that the true uncertainty is well below that limit!

Additionally, if these are 100% systematic errors, then every physics laboratory manual I have ever come across that propagates uncertainties using quadrature is technically wrong, as systematic errors should not be propagated that way. See any error analysis book.

Here is my second argument: I quote the same 0.002 cm uncertainty for every caliper that I produce, but they are all made slightly differently; there is inevitable randomization in their construction and calibration. So how could they all have the same exact 0.002 cm systematic error?

My point is that 0.002 cm is a combination of systematic and random error; the systematic error is less than 0.002 cm, and that is all we know.
 
Last edited:
  • #6
You may always solve the dispute experimentally: take a steel cube, measure its length with a high-precision tool, then measure it with all your callipers (already damaged by 1st-year students) and compare the results.
Another issue is that many callipers show too low a value if pressed too hard, and 1st-year students very often use excessive force, so all measurements done by the same student with identical tools may be biased.
 
  • #7
xts said:
You may always solve the dispute experimentally: take a steel cube, measure its length with a high-precision tool, then measure it with all your callipers (already damaged by 1st-year students) and compare the results.
Another issue is that many callipers show too low a value if pressed too hard, and 1st-year students very often use excessive force, so all measurements done by the same student with identical tools may be biased.

Yes you make good points, but this isn't getting to the point in question.

I'm stating that the manufacturer's stated uncertainty or tolerance, or even a meter stick's intrinsic uncertainty given (approximately) by the least count, does not imply a completely systematic error. A systematic error means that the value is off by that amount. I'm stating that the true systematic error (if any) is within the values quoted by the manufacturer.

I can make a meter stick for sale and quote that any measurement you make with it is good to 2 cm. Does that mean that every measurement will be wrong by that amount? No! It could mean I was too lazy to actually check the accuracy and I don't want to get sued; it might be much more accurate than 2 cm.

It seems completely logical to me. If it were the complete, true systematic error (which is incredibly hard to determine), then why not just correct for it? It's not the systematic error but a bound for it.
 
  • #8
Every manufacturer always states only statistical errors. If he discovers that the tool is biased, he just shifts the scale or compensates for it somehow during calibration.

The systematic error is something occurring against the will of the manufacturer. It may be some kind of ageing, decalibration, something that got bent or loosened due to improper use, etc. For many apparatuses such issues are recognised by the manufacturer, who recommends recalibrating the tool once a year, or after 1000 uses, etc.

Of course, a properly manufactured calliper operated without excessive force keeps its initial calibration for a hundred years. I have a Swiss-made (Oerlikon) calliper manufactured in 1928, and it is still more accurate than the cheap digital callipers you may buy in DIY shops. I believe it does not introduce any systematic error...
 
  • #9
xts said:
Every manufacturer always states only statistical errors.
The manufacturer uses statistics and calibrates against a standard, and they state the tolerance or uncertainty conservatively (over-reporting).

How else do you think that every single caliper (of that type) made by the manufacturer has the same uncertainty value? It's because the true uncertainty is less than that value. It would be impossible for them all to have the same exact systematic error. It would be impossible for any two tools to have the same EXACT systematic error.

Once again, the value that they are quoting is a bound; they are making a bet that they will not lose. They are claiming any measurement is good to within that value.

We are not talking about old calipers; all these other pieces of information have no effect on the point I'm trying to make. The out-of-the-box uncertainty or tolerance is a bound, not the true systematic error. The true systematic error is somewhere within that bound.
 
  • #10
azaharak said:
Let's say I quote that any measurement properly made with the caliper can be taken to be good to within 0.002 cm, even though the precision of the tool is 0.001 cm.

If I am going to use a tool to measure something important, actually I don't care what the manufacturer says about accuracy. I want to see a calibration certificate for the specific measuring tool I am going to use. And I won't accept a certificate that's 20 years old either - that just shows that nobody has bothered to KEEP the tool properly calibrated!
 
  • #11
AlephZero said:
If I am going to use a tool to measure something important, actually I don't care what the manufacturer says about accuracy.

What are you talking about? This is a side issue that has nothing to do with the concept here.

I'm talking about the classification of systematic error; it is irrelevant how old your tool is. Obviously if it hasn't been calibrated correctly, that is a different issue altogether. My point is that the accuracy or tolerance the manufacturer quotes is conservative and larger than the true systematic error of the tool (in its properly working state).
 
  • #12
I have found something to back my claim, although Wikipedia is not necessarily always a legitimate reference.

"The smallest value that can be measured by the measuring instrument is called its least count. All the readings or measured values are good only up to this value. The least count error is the error associated with the resolution of the instrument.

For example, a vernier callipers has the least count as 0.01 cm; a spherometer may have a least count of 0.001 cm. Least count error belongs to the category of random errors but within a limited size; it occurs with both systematic and random errors."


http://en.wikipedia.org/wiki/Least_count
 
  • #13
Sheesh, even that wiki page itself says it "has multiple issues"!

The author of that wiki page doesn't seem to know the difference between accuracy and resolution.

Least count is about the resolution of an instrument. It says NOTHING about its accuracy. Some practical measuring instruments have errors that are orders of magnitude different from the least count - either bigger or smaller.

I'm getting the message that you don't really want to learn anything about practical metrology here. You just want somebody to say that you are right and the other guy in your lab is wrong.
 
  • #14
AlephZero said:
You just want somebody to say that you are right and the other guy in your lab is wrong.

I just want someone to get to the point and not bring in things about accuracy and precision. I know the difference. What that wiki page writes is not wrong. It states

"All the readings or measured values are good only up to this value" That doesn't necessarily mean that the measured values are good to the least count, it means that they can be no better!. Yes there are several devices that have uncertainties that are either 2, 3 or 4 times the least count depending on the circumstances and situation.

The question I'm looking to answer has been the same all along; I don't need a lecture on measurements, as I'm well versed in error analysis. I've asked many of my past professors, who have sided with me, as have some at the institution where I teach.

Is the intrinsic uncertainty (which is most often on the order of the least count) a systematic error? The answer is no; it is a bound that contains a systematic error.

If it were a true systematic error, we could just correct for it.
If it were a true systematic error, then every instrument of that type would be made exactly 100% the same, down to the atomic level.
If it were a true systematic error, these should not propagate in quadrature.
 
  • #15
azaharak said:
Is the intrinsic uncertainty (which is most often on the order of the least count) a systematic error?
I see you are not seeking an explanation, but a yes/no answer.
So the answer I found for you in my good old textbook on experimental methodology is:
"It is a systematic error and should not be combined in quadrature, unless in a particular case we may justify its statistical nature."

K.Kozłowski, Metodologia doświadczalna, PWN, Warsaw, 1976 - translation to English - mine.
 
  • #16
xts said:
I see you are not seeking an explanation, but a yes/no answer.
So the answer I found for you in my good old textbook on experimental methodology is:
"It is a systematic error and should not be combined in quadrature, unless in a particular case we may justify its statistical nature."

K.Kozłowski, Metodologia doświadczalna, PWN, Warsaw, 1976 - translation to English - mine.

That is vague. What is the "it" that is a systematic error? I would like to see the entire paragraph.



Also, if what you're saying is true, then the entire undergraduate physics community is wrong, because every institution propagates these in quadrature, which means they're not entirely systematic but have a random component.
 
Last edited:
  • #17
azaharak said:
The intrinsic uncertainties in a measuring tool can be taken to be on the order of the least count.

Could you please explain what the bolded term means? I see you mention it in several places of your OP, but I'm afraid I am not familiar with that terminology.
 
  • #18
The least count is the precision of the tool; it's the smallest difference that the tool can discern. For instance, we might have an electronic balance with a precision of 0.01 g; that means it can tell the difference (resolve) between a mass that is 1.15 g and one that is 1.16 g. However, the manufacturer may state that the accuracy of the electronic balance is 0.03 g. This means that we don't know for sure whether our 1.15 g mass is really 1.15 g (it could be off by as much as 0.03 g), but we can tell it is different from a 1.16 g mass.

That is the difference between accuracy and precision. This has very much to do with the latest CERN neutrino experiment: they have done a good job of minimizing their random error (apparently), so they have very good precision. However, there might be some systematic error that is shifting their results.
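Back to the balance numbers, a toy model of that distinction (the reading model below is invented; only the 0.01 g resolution and 0.03 g accuracy come from the example):

[code]
import numpy as np

rng = np.random.default_rng(2)
true_mass = 1.153    # g
bias = 0.02          # g, an unknown calibration offset within the stated 0.03 g accuracy
resolution = 0.01    # g, the least count

raw = true_mass + bias + rng.normal(0.0, 0.002, 10)    # small random scatter per weighing
displayed = np.round(raw / resolution) * resolution     # the display quantises to the least count

print(displayed)   # readings cluster at 1.17 g: repeatable (precise) but biased (inaccurate)
[/code]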

The intrinsic uncertainty of the instrument is almost always on the order of the least count: sometimes half, sometimes double, sometimes it is up to the individual measurer. I have found this as one of many references to support my argument.

http://www.owlnet.rice.edu/~labgroup/pdf/Error_analysis.htm

Notice how the least count error (or intrinsic uncertainty on the order of the least count) is discussed under RANDOM errors, not systematic!

"you often can estimate the error by taking account of the least count or smallest division of the measuring device"

Or how about this one

http://www.mjburns.net/SPH3UW/SPH3UW Unit 1.2.pdf

Look at the bottom under random errors... This is what every university does! And it makes sense. The intrinsic uncertainties CONTAIN systematic errors, but they are not purely systematic. They are a bound for the systematic error.

Can someone else please respond with some insight?
 
  • #19
My translation was a bit shortened. But if you want the full paragraph - here we go:

An error introduced by measurement apparatus or some self-contained component of it should be considered systematic, unless in a particular case it may be satisfactorily justified that the error has a statistical nature: that it is caused by some known physical mechanism adding a random component (of mean 0 and some standard deviation) to the result. In general it should be assumed that errors caused by various parts of the measurement system are correlated, thus they must be taken as a linear sum. In those cases where the physical mechanisms leading to the individual errors are known and may be shown to be independent (having no common cause), they may be combined in quadrature.
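In formulas (my notation, not the book's), the two rules that paragraph describes are: for fully correlated contributions
[tex]
\delta f = \sum_i \left| \frac{\partial f}{\partial x_i} \right| \delta x_i ,
[/tex]
and for independent contributions
[tex]
\delta f = \sqrt{ \sum_i \left( \frac{\partial f}{\partial x_i} \right)^{2} \delta x_i^{2} } .
[/tex]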
 
  • #20
xts said:
My translation was a bit shortened. But if you want the full paragraph - here we go:

An error introduced by measurement apparatus or some self-contained component of it should be considered systematic.
But how do you know that this is referring to the intrinsic error (or the approximate error given by the least count) quoted by the manufacturer?

Every device has a systematic error; that is true. Every error introduced by a device will be systematic. But that does not mean that the uncertainty implied by the smallest division on a meter stick, or the tolerance given by a manufacturer, is the actual systematic error. It is impossible for them to know what it is; they give a bound for it. They say that every measurement made with this device is no worse than 1%, or within ±0.002 cm. And that is done with a large bound and statistics.

Your method implies that every digital caliper made by xxxxxx has a systematic error of 0.002 cm, if that's what the manufacturer is quoting as the intrinsic uncertainty. Each of them will be made slightly differently; it is not possible for them to have the same systematic error.
 
  • #21
Your example of the balance with an accuracy of 0.01 g is a very good one.

Take an electronic balance with both accuracy and resolution of 0.01 g.
Now prepare a sample of some substance weighing 1 g (±0.01 g). Prepare 100 such samples using the same balance, the same vessel, etc.
Combine the samples together. The final sample must be considered 100 ± 1 g (combined linearly) rather than 100 ± 0.1 g (as you'd get in quadrature).
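For what it's worth, the arithmetic in that example, plus a toy simulation of the "independent, zero-mean errors" case (the uniform error model is my assumption):

[code]
import numpy as np

rng = np.random.default_rng(3)
n, per_sample = 100, 0.01                             # 100 samples, ±0.01 g each

print("linear sum:     ±", n * per_sample)            # ±1.0 g  (fully correlated errors)
print("quadrature sum: ±", np.sqrt(n) * per_sample)   # ±0.1 g  (independent errors)

# If each weighing error really were independent and zero-mean, the total would scatter
# on the 0.1 g scale, nowhere near 1 g:
totals = rng.uniform(-per_sample, per_sample, size=(10000, n)).sum(axis=1)
print("simulated std of total:", totals.std(ddof=1))  # ≈ 0.06 g
[/code]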
 
  • #22
azaharak said:
But how do you know that this is referring to the intrinsic error (or the approximate error given by the least count) quoted by the manufacturer?
I don't know. I just must assume that. It refers to any error associated with the tool, unless the opposite can be proven.
 
  • #23
xts said:
Your example of the balance with an accuracy of 0.01 g is a very good one.

Take an electronic balance with both accuracy and resolution of 0.01 g.
Now prepare a sample of some substance weighing 1 g (±0.01 g). Prepare 100 such samples using the same balance, the same vessel, etc.
Combine the samples together. The final sample must be considered 100 ± 1 g (combined linearly) rather than 100 ± 0.1 g (as you'd get in quadrature).

I disagree...

"Repeatability is very similar to Linearity. Repeatability refers to a scale's ability to consistently deliver the same weight reading for a given mass, and to return to a zero reading after each weighing cycle. You can test this by repeatedly weighing the same object. Repeatability is sometimes referred to as "Standard Deviation" of a set of similar weight readings. Repeatability is often an important specification for expensive commercial balances and is not a specification published by manufactures for the scales you will find here at RightOnScales."Every time you take your 1g mass off you have slightly changed how the scale will continue to operate. THIS IS A RANDOM EFFECT!. That is why we add the uncertainties in quadrature! Same thing with a digital caliper. It is never placed exactly back at the same spot, it is very close but there is some randomization there!Have you ever measured something, taken it off and then noticed that the scale reads negative now?
 
  • #24
Sorry - I am unable to convince you.

You must solve this dispute experimentally: make 100 such samples and check the final weight.

Try both a digital balance with a 0.01 g last digit and a classical pharmacy balance with a set of stainless steel weights down to 10 mg.
 
Last edited:
  • #25
And I'm sorry you cannot refute the facts I've presented.

And if I'm wrong, I'll have to carry that message to all the colleges that I teach for, for not one of them follows the prescription you are describing.
 
  • #26
xts said:
Your example of the balance with an accuracy of 0.01 g is a very good one.

Take an electronic balance with both accuracy and resolution of 0.01 g.
Now prepare a sample of some substance weighing 1 g (±0.01 g). Prepare 100 such samples using the same balance, the same vessel, etc.
Combine the samples together. The final sample must be considered 100 ± 1 g (combined linearly) rather than 100 ± 0.1 g (as you'd get in quadrature).

This is the essence of classifying the error of the instrument. You would need to repeat the procedure many times in order to find the standard deviation of the sums of 100 samples each!

EDIT:

I think this test is relevant for the standard deviation:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda358.htm
 
  • #27
I think that the finite precision of the instrument ought to be accounted for among the systematic errors. Let me give an example. Suppose we measure the area of a rectangle by measuring its sides with a ruler with a graduation [itex]\Delta L[/itex]. Suppose each side is measured to be in the interval:
[tex]
a_{0} \le a < a_{0} + \Delta L
[/tex]
and
[tex]
b_{0} \le b < b_{0} + \Delta L
[/tex]

If we repeat the measurement of each side [itex]N = 10,000[/itex] times, we will always see that it falls within this interval. This, however, would not increase our knowledge of the uncertainty of each side, i.e. we cannot say that the uncertainty in:
[tex]
\bar{a} \equiv \frac{1}{N} \, \sum_{k = 1}^{N}{a_{k}}
[/tex]
is
[tex]
\sigma(\bar{a}) = \left( \sum_{k = 1}^{N}{\frac{\sigma^{2}(a_{k})}{N^{2}}}\right)^{ \frac{1}{2} } = \frac{\Delta L}{2 \sqrt{N}}
[/tex]
but it is still [itex]\Delta L/2[/itex]!

To see this, look at what limits we have for the average:
[tex]
a_{0} \le a_{k} < a_{0} + \Delta L, (k = 1, 2, \ldots, N)
[/tex]

[tex]
N a_{0} \le \sum_{k = 1}^{N}{a_{k}} < N (a_{0} + \Delta L)
[/tex]

[tex]
a_{0} \le \frac{1}{N} \sum_{k = 1}^{N}{a_{k}} \equiv \bar{a} < a_{0} + \Delta L
[/tex]

Ok, now back to our area problem. Now, the area of the rectangle is definitely in the interval:
[tex]
a_{0} b_{0} \le A \equiv a b < (a_{0} + \Delta L) (b_{0} + \Delta L)
[/tex]

with a half-width:
[tex]
\frac{1}{2} \left[(a_{0} + \Delta L) (b_{0} + \Delta L) - a_{0} b_{0} \right] = \frac{(a_{0} + b_{0} + \Delta L) \Delta L}{2} = \frac{(\bar{a} + \bar{b}) \Delta L}{2}
[/tex]
This is quite different from:
[tex]
\sigma(A) = \sqrt{\bar{b}^{2} \sigma^{2}(a) + \bar{a}^{2} \sigma^{2}(b)} = \frac{\Delta L}{2} \sqrt{\bar{a}^{2} + \bar{b}^{2}}
[/tex]
because, for nonnegative [itex]\bar{a}, \bar{b}[/itex]:
[tex]
\sqrt{\bar{a}^{2} + \bar{b}^{2}} \le \bar{a} + \bar{b} \le \sqrt{2}\,\sqrt{\bar{a}^{2} + \bar{b}^{2}},
[/tex]
so the quadrature result underestimates the half-width of the guaranteed interval, by up to a factor of [itex]\sqrt{2}[/itex] (the worst case being [itex]\bar{a} = \bar{b}[/itex]).
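A quick numerical check of that comparison (arbitrary example values for [itex]a_0, b_0, \Delta L[/itex]):

[code]
import numpy as np

a0, b0, dL = 3.0, 4.0, 0.1                            # cm, arbitrary illustration

half_width = ((a0 + dL) * (b0 + dL) - a0 * b0) / 2    # half-width of the guaranteed interval
a_bar, b_bar = a0 + dL / 2, b0 + dL / 2               # interval midpoints stand in for the means
sigma_quad = (dL / 2) * np.sqrt(a_bar**2 + b_bar**2)  # quadrature propagation

print(half_width)   # 0.355
print(sigma_quad)   # ≈ 0.2535, i.e. smaller, but by less than a factor of sqrt(2)
[/code]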
 
  • #28
I see your point and I believe that it is fully explained when I say that the intrinsic uncertainty is a bound for the systematic error of the device.


For instance, a particular digital caliper's manufacturer uncertainty is given as 0.002 cm.

This means that 0.002 cm = x + y, where x is the systematic error and y is some random component. It is random because, depending on how the tool is used, the particular exact systematic error may change.

You are partly correct that you shouldn't be able to resolve indefinitely more by making multiple measurements. In fact, even in a truly random scenario there is some limit where fluctuations from the measurer or other effects become problematic.

My point is that it's not truly systematic; if it were, we could just correct for it and that would be the end of the argument. There is some unknown systematic component.

In the context of undergraduate physics labs, one does not take 100 or 1000 measurements of a 0.01 g sample; there are too many other factors that will affect that ideal experiment.

For someone who makes 3 measurements and takes an average, not much harm is done by dividing the standard deviation by the square root of three and adding this in quadrature to the intrinsic uncertainty, as in the sketch below. Worst case, the result only gets larger!
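A concrete version of that procedure (three readings, standard error of the mean, then quadrature with the instrument's quoted uncertainty; the readings themselves are invented):

[code]
import numpy as np

readings = np.array([2.345, 2.347, 2.344])   # cm, three student measurements (made up)
instrument = 0.002                            # cm, manufacturer's quoted uncertainty

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(len(readings))   # standard error of the mean
total = np.sqrt(sem**2 + instrument**2)                # combined in quadrature

print(f"{mean:.4f} ± {total:.4f} cm")
[/code]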



Who knows the exact systematic error? It's impossible to find exactly, and the more you try to narrow it down, the more expensive it gets. There is some random component in there.

So it's not fully random, it's not fully systematic, and my main point is that it's not purely systematic. In the context of the particular experiments students will do, it is reasonable to add in quadrature, since the manufacturers quote these uncertainties conservatively.

But you are right that I cannot get unlimited precision by taking more samples, with respect to this example or even a purely random one in practice.
 

