Instrument accuracy and comparison measurements

In summary, the instrument accuracy affects how much repeated temperature readings of a sample will differ. However, when the same instrument is used for all samples, the offset part of the error cancels in a difference, so a temperature difference between samples can still be meaningful even when it is smaller than the stated instrument accuracy; it is limited mainly by the instrument's random error.
  • #1
fog37
TL;DR Summary: Instrument accuracy and comparison measurements
Hello,

All measurement instruments have finite accuracy. For example, a thermometer may report a temperature that is within ±2°F of the "actual" true temperature: if the reported measurement is 70°F, the actual temperature may be anywhere between 68°F and 72°F.

That said, let's assume we have two different materials and measure their temperatures ##T_1## and ##T_2##. Both ##T_1## and ##T_2## are affected by the finite accuracy of ±2°F, but their difference ##|T_1 - T_2|## is not. Is that correct? Even if ##T_1## and ##T_2## are not the actual temperatures, their difference would be constant (assuming the same environmental conditions). If we repeated both measurements an hour later, under the same conditions, ##T_1## and ##T_2## might take different values, but their difference would be the same...

However, for the same single material, the finite accuracy implies that multiple measurements would produce different temperature values even under the same conditions...
 
  • #2
fog37 said:
Summary:: Instrument accuracy and comparison measurements

That said, let's assume we have 2 different materials and measure their temperatures T1 and T2. Both T1 and T2 are affected by the finite accuracy of ±2°F but their difference |T1−T2| is not.
It will depend on how many different thermometers you use to make the measurements.
 
  • #3
It also depends on whether your 2°F is really 2°F, or whether it's really 3%, or something else.
 
  • #4
Not strictly correct. Your reasoning assumes that the error is a constant offset; that may be true, but it cannot be assumed from a simple ±2% accuracy claim.
 
  • #5
Hello, thanks everyone.

Yes, I am assuming that the same thermometer is being used to measure both samples. The rated thermal accuracy is:
Thermal Accuracy: ±2°C or ±2%

So when I measure the temperature of each sample, each temperature value will be off. But does this matter if all I care about is the relative temperature difference? The same reasoning applies if there are more than two samples and we compare their temperatures to determine the hottest and the coolest.

Would any temperature difference of less than 2°C between the samples be meaningless?
 
  • #6
Measurement error can be random, offset, or both. Searching for "precision vs accuracy" brings up a number of sites that discuss this. The difference is summarized in the following graphic:
[Image: target diagram contrasting accuracy (low offset) with precision (low scatter)]

Averaging a number of readings reduces the effects of random error (precision in the above graphic), but does not reduce the effect of offset error (accuracy in the above graphic).
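This can be illustrated with a quick simulation (the numbers here are hypothetical: a true temperature of 50°C, a fixed +1.5°C offset, and 0.5°C random noise):

```python
import random

random.seed(0)

TRUE_TEMP = 50.0   # hypothetical true temperature, deg C
OFFSET = 1.5       # hypothetical fixed calibration offset (accuracy error)
NOISE_SD = 0.5     # hypothetical random-error std dev (precision error)

def reading():
    # each reading = truth + constant offset + random noise
    return TRUE_TEMP + OFFSET + random.gauss(0, NOISE_SD)

readings = [reading() for _ in range(1000)]
avg = sum(readings) / len(readings)
# averaging shrinks the random scatter, but the result converges to
# TRUE_TEMP + OFFSET, not to TRUE_TEMP
print(round(avg, 2))
```

With many readings the average settles very close to 51.5°C (truth plus offset), illustrating that no amount of averaging removes the offset component.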
 
  • #7
Thanks jrmichler.

I see. So would you agree that, regardless of what the offset error may be, comparing the temperatures of two different samples is meaningful even if the temperature difference is lower than the stated instrument accuracy, in my example 2°C? For example, suppose the two samples differ in temperature by 1.3°C. Would that difference make scientific sense, or would it be nonsense since it is lower than the 2°C accuracy?
 
  • #8
Measurement error is normally both offset and random. If you are comparing differences using the same instrument, the offset error should drop out, while averaging multiple readings will reduce the random error.

So yes, the comparison of two samples should be valid.

You test this by taking multiple temperature readings of the same sample, then plotting the temperatures vs time. A smooth curve has low random error.
 
  • #9
I agree with these responses but this brings up one of the most useful (IMHO) results in all of physics: the Central Limit Theorem.
Loosely, it says that if enough variable inputs affect the value of the measurement (temperature, phase of the moon, time of day, a passing truck, a shaky hand, line voltage, humidity, etc.), then the result will be Gaussian distributed. In my real-world experience this result is both remarkably true and fabulously useful: you can characterize the system and anticipate results.
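A rough numerical illustration of this (the setup is hypothetical: each measurement error is the sum of 12 small independent uniform disturbances):

```python
import random

random.seed(0)

def one_error():
    # hypothetical: 12 small independent disturbances, each uniform in +/-0.5
    return sum(random.uniform(-0.5, 0.5) for _ in range(12))

errors = [one_error() for _ in range(50_000)]
mean = sum(errors) / len(errors)
var = sum((e - mean) ** 2 for e in errors) / len(errors)
# Central Limit Theorem: mean ~ 0, variance ~ 12 * (1/12) = 1,
# and a histogram of `errors` would look close to a Gaussian bell curve
print(round(mean, 3), round(var, 3))
```

Each uniform disturbance has variance 1/12, so the sum of 12 of them has variance close to 1, and the distribution of the summed error is already nearly Gaussian.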
 
  • #10
jrmichler said:
If you are comparing differences using the same instrument, the offset error should drop out, while averaging multiple readings will reduce the random error.
So the stated error due to the limited accuracy is an interval, and I thought the offset would vary from measurement to measurement, taking values within the stated ±2°C or ±2% of the measurement. But the conclusion is that, for comparison measurements done one after the other, the offset error washes out and the comparison difference is valid...
 
  • #11
But it behooves the investigator to vigorously investigate such an offset error. It is unlikely to be a designed feature of the system, so the prudent course of action is to calibrate exhaustively until one understands the reason.
 
  • #12
All the replies above are well worth taking as reference.

But I think, for users: if the thermometer's specification says the accuracy is ±2°C or 2% under certain environmental conditions, and the measured temperature is 50°C, the real temperature should lie in the range 48°C to 52°C (per the ±2°C figure) or 49°C to 51°C (per the 2% figure). Of course we have to use the worse range, 48°C to 52°C, for the measurement result, since that is all we can reasonably expect the manufacturer to guarantee.

On the other hand, before obtaining further information from the manufacturer, it is not certain that using one thermometer for multiple tests, or multiple thermometers for multiple tests, and then averaging, can improve accuracy. I think assuming the worse range is safer because it reduces the possibility of disappointment caused by overly optimistic expectations.

One thing should be more certain: referring to the example above, if two thermometers of the same brand and model are used to measure the temperature of the same thing under the same environmental conditions, and the temperature difference obtained is more than 4°C, then people will indeed lose confidence in this thermometer.
 
  • #13
alan123hk said:
...of course we have to use the worse range (48°C to 52°C) for the measurement result, which is all we can reasonably expect the manufacturer to guarantee.
Thanks alan123hk.

I guess I am still hung up on the fact that, for comparison measurements between two different samples with the same thermometer, the finite accuracy of ±2°C can be taken to be a constant offset, making the comparison measurement valid.

Example: temperature ##T_1= 60C## of sample 1 is measured at time ##t_1##. Temperature ##T_2=58C## of sample 2 is measured at a later time ##t_2##. The temperature difference ##(60-58)=2C## makes the first sample 2C hotter. Is that meaningful even if the accuracy is 2C?

In theory, the first sample could be lower or higher than 60C and the second sample lower or higher than 58C, making the comparison difference very different from 2C...
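One way to see the point under discussion is to model each reading as the true value plus a constant offset ##b## plus a random error ##e_i## (this decomposition is an assumption about the instrument, not something the ±2C spec guarantees):

##T_1^{meas} - T_2^{meas} = (T_1 + b + e_1) - (T_2 + b + e_2) = (T_1 - T_2) + (e_1 - e_2)##

The constant offset ##b## cancels in the difference, but the random parts ##e_1## and ##e_2## do not, so the difference is only as trustworthy as the instrument's random error is small.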
 
  • #14
@fog37 I hope I don't misunderstand what you mean.

Manufacturers usually only provide the simplest and most straightforward specification, such as ±2°C. They do not need to state the error in measuring a temperature difference, and I have not seen product specifications detailing the random error, offset error and multiplier (gain) error separately.

Of course, if you know that the thermometer has only a constant offset error, then apart from the display's resolution limit, the temperature difference it measures basically has no other error.

Under normal circumstances the user only knows that the measurement error is ±2°C, so the worst-case error of the measured temperature difference is [(+2) − (−2)] to [(−2) − (+2)], which means ±4°C.

https://www.physics.umd.edu/courses/Phys276/Hill/Information/Notes/ErrorAnalysis.html
 
  • #15
alan123hk said:
...under normal circumstances the user only knows that the measurement error is ±2°C, so the error of the measured temperature difference should be ±4°C.
Thanks again. No, I don't believe you misunderstood.

In essence, when comparing two samples, the larger their temperature difference relative to the stated accuracy, the better.
In the above example, the temp difference is 2C (sample 1 is hotter than sample 2) with a worst-case uncertainty of ±4C on that difference, which is a lot, i.e. there is a lot of possible variability. All we can say is that one sample is probably hotter than the other; by how much does not seem very meaningful when the temp difference is smaller than the uncertainty. There is a big difference between saying that the two samples differ by 6C, 4C or −2C...

The same goes if we compare 3 or 4 samples temperature-wise with the same thermometer and the same ±4C uncertainty on each difference...
 
  • #16
Considering the rule that the worst-case uncertainty of a difference is the sum of the uncertainties:

A = T1 ± 2C
B = T2 ± 2C

A - B = (T1 - T2) ± 4C
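A quick Monte Carlo sketch of this worst-case rule (assuming, purely for illustration, that the two errors are independent and uniformly distributed in ±2C) shows that ±4C is indeed the bound, but that extreme values of the difference error are rare:

```python
import random

random.seed(0)

N = 100_000
# hypothetical model: each reading's error independent, uniform in +/-2 C
diffs = [random.uniform(-2, 2) - random.uniform(-2, 2) for _ in range(N)]

worst = max(abs(d) for d in diffs)                    # approaches the 4 C bound
frac_within_2 = sum(abs(d) <= 2 for d in diffs) / N   # most differences are smaller
print(round(worst, 2), round(frac_within_2, 2))
```

Under this model the difference error follows a triangular distribution on ±4C, and about three quarters of the simulated differences land within ±2C, which is why the sum-of-uncertainties rule is usually described as a worst case.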
 
  • #17
The importance of the potential error in a measurement depends upon the circumstances. Whether a difference is "meaningful" is similarly situation-dependent. And "absolute" error bars without some specification of confidence are a fiction: if you ask the uninformed what an acceptable rate of failure is, the answer is invariably "zero".
There are conventional methods for characterizing error that are useful for general commerce, particularly where life and limb depend upon risk assessments and lawyers are involved. It behooves scientists to know these standards.
But to really characterize the errors in a system is seldom a "one size fits all" exercise. Attempts to do so lack focus and utility.

fog37 said:
Considering the rule that the uncertainty of a difference is the sum of the uncertainties:

A = T1 +/-2C
B = T2 +/-2C
Even uniform "box" uncertainties will, for larger numbers of measurements, get more and more peaked toward the average (you can work it out). Suppose ##T_1=T_2## in your example.
 
  • #18
There is a special case applicable to dial thermometers that is not present in thermometers based on the expansion of a fluid in a capillary tube, or in electronic thermometers.

The indicator needle on these dials is often driven by a mechanical rack-and-pinion gearset. Any friction, dirt, or dried lubricant in the gears leads to hysteresis, or deadband, in the readings.

Know your equipment before betting your reputation on it!

Cheers,
Tom
 

1. What is instrument accuracy?

Instrument accuracy refers to the degree to which a measurement made by an instrument reflects the true value of the quantity being measured. It is a measure of how close the instrument's measurement is to the actual value.

2. How is instrument accuracy determined?

Instrument accuracy is determined by comparing the instrument's measurement to a known standard or reference value. This can be done through calibration, where the instrument is adjusted to match the reference value, or through a comparison measurement using a more accurate instrument.

3. What factors can affect instrument accuracy?

There are several factors that can affect instrument accuracy, including environmental conditions (such as temperature and humidity), user error, and wear and tear on the instrument. It is important to regularly calibrate and maintain instruments to ensure their accuracy.

4. How can instrument accuracy be improved?

Instrument accuracy can be improved through regular calibration and maintenance, as well as using more accurate instruments for comparison measurements. It is also important to follow proper measurement techniques and minimize environmental factors that can affect accuracy.

5. What is the difference between instrument accuracy and precision?

Instrument accuracy refers to how close a measurement is to the true value, while precision refers to how consistent and reproducible the measurements are. An instrument can be precise but not accurate if it consistently measures the same value but it is not close to the actual value. Conversely, an instrument can be accurate but not precise if it consistently measures close to the true value but the measurements are not consistent.
