What is the correct definition of measurement uncertainty?

  • #1
Govind
TL;DR Summary
What would be the correct definition of measurement uncertainty that applies in every case, whether or not systematic error is involved and whether the measurements are finite or infinite?
I have some confusion regarding measurement uncertainty. In some books/articles it is defined with respect to the true value as "the uncertainty in the average of measurements is the range in which the true value is most likely to fall, when no bias or systematic component of error is involved in the measurement". But if we take infinite measurements (theoretically) with no systematic error, then we can be pretty sure that the mean of the measurements will equal the true value, and uncertainty would make no sense as defined previously!

There is another definition (based on a confidence level, generally 68%, and on the random measurements rather than the true value), which states: "There is roughly a 68% chance that any measurement of a sample taken at random will be within one standard deviation of the mean. Usually the mean is what we wish to know and each individual measurement almost certainly differs from the true value of the mean by some amount. But there is a 68% chance that any single measurement lies within one standard deviation of this true value of the mean. Thus it is reasonable to say that:". This seems correct to me to some extent (when no systematic error is involved), because it would be applicable to infinite measurements also.

Both definitions assume that no systematic component of error is involved. But if it is involved, what would the uncertainty in the uncorrected result (not corrected for systematic error) represent? Well, the second definition will still hold if we take further measurements with the same instrument, or without correcting the systematic component. But if someone else measured the measurand with no systematic component, or with corrected instruments, roughly 2/3 of their measurements would not fall in that range of uncertainty. In that case, what would be the appropriate definition of uncertainty?

And here the GUM description of uncertainty comes up:

D.5.1 Whereas the exact values of the contributions to the error of a result of a measurement are unknown and unknowable, the uncertainties associated with the random and systematic effects that give rise to the error can be evaluated. But, even if the evaluated uncertainties are small, there is still no guarantee that the error in the measurement result is small; for in the determination of a correction or in the assessment of incomplete knowledge, a systematic effect may have been overlooked because it is unrecognized. Thus the uncertainty of a result of a measurement is not necessarily an indication of the likelihood that the measurement result is near the value of the measurand; it is simply an estimate of the likelihood of nearness to the best value that is consistent with presently available knowledge.
The above description is like adding an additional point, 'systematic error', to the first definition, which is based on the true value. And the same confusion arises here: what would the uncertainty in the population mean (the mean of theoretically infinite measurements) with no systematic error represent?

So here I end up with my point of view on the previously stated definitions, and I want to ask: what would be the correct definition of measurement uncertainty that applies in every case, whether or not systematic error is involved and whether the measurements are finite or infinite?
 
  • #2
Uncertainty generally refers to real-world applications where the error in a model may be quantified; you cannot quantify what you don't know about the limits of the model.

https://en.wikipedia.org/wiki/Uncertainty

The analogy of the turkey's statistical model of the farmer is often used: there is nothing in the turkey's data set that will indicate the ultimate outcome:

[Image: Turkey-diagram.png]

https://theanalogiesproject.org/the-analogies/forecasting-like-turkey/
 
  • #3
You should say "infinitely many measurements" rather than "infinite measurements".
I wouldn't worry too much about actually having infinitely many measurements. That never happens. But you can talk about the mean of the theoretical random distribution.
I thought that your second reference had a reasonably good introduction to the systematic bias of measurements. Do you want more than that?
 
  • #4
It is important to distinguish between error and uncertainty. These are two very different concepts.

Error describes the difference between a measurement and the true value.

Uncertainty describes the difference between multiple measurements.

Error is unknowable because it depends on the true value which is unknowable. Uncertainty can be obtained directly from measurements. Even systematic uncertainty is determined by systematically varying the conditions and determining the discrepancy in the resulting measurements.
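Here is a minimal Python sketch of that distinction (all numbers are made up, and the "true value" is available only because this is a simulation): the uncertainty can be computed from the data alone, while the error cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 9.81   # known here only because we are simulating
bias = 0.05         # hypothetical systematic offset
noise_sd = 0.02     # hypothetical random scatter

measurements = true_value + bias + rng.normal(0.0, noise_sd, size=50)

mean = measurements.mean()
uncertainty = measurements.std(ddof=1)   # obtainable directly from the data
error = mean - true_value                # needs the true value, unknowable in a real experiment

print(f"mean = {mean:.4f}")
print(f"uncertainty (sample SD) = {uncertainty:.4f}")
print(f"error (computable only in simulation) = {error:+.4f}")
```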
 
  • #5
Govind said:
I have some confusion regarding measurement uncertainty.

My guess is that you are consulting books and articles that aim only to state procedures for reporting lab results, not books that explain the mathematics on which these procedures are based.

In mathematical statistics, "uncertainty" can be used to denote the standard deviation of a random variable. The standard deviation of a random variable can be defined precisely in mathematical terms. When organizations write documents attempting to convey meanings of "uncertainty" in an exclusively verbal manner, they come up with a variety of descriptions. If you are studying for a test that expects you to know the verbal descriptions, the safest bet is to quote from sources that the testers will use to judge the correctness of answers. For example, the link you gave refers to GUM and ASME, and there's also the ISO series of documents.

If you have to study such documents, you have my sympathy. I find that they have a lot of language that is, at face value, hair splitting, but actually imprecise.

If you want to study the science of measurement from the mathematical point of view, you should study mathematical statistics, in particular the theory of statistical estimation.

Perhaps you plan to spend your effort thinking about purely verbal descriptions of concepts in measurement, trying to compare and reconcile what different documents say. I'd call that a study of literature rather than an application of mathematics, but there is probably a demand for such knowledge - perhaps patent attorneys and technical writers need it.
 
  • #6
Govind said:
Both definitions assume that no systematic component of error is involved. But if it is involved, what would the uncertainty in the uncorrected result (not corrected for systematic error) represent?
Ok, let’s say that there is some systematic error, and look at what different types of systematic error do to the uncertainty.

Universal bias: if a constant systematic bias affects all measurements equally, it does not add any uncertainty.

Instrument bias: if each instrument has its own constant bias, that bias can be measured with an ensemble of instruments. It could be corrected via calibration, but if not, then its variance would be added to the other sources of uncertainty as usual.

Environmental bias: if a non-constant bias exists due to some environmental effects then that can be measured experimentally. It could be corrected by measuring the environmental effect, but if it is unmeasured then you would make a best estimate for the range and add the corresponding variance.

Other systematic errors: follow a similar approach as above
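As a rough sketch of how such contributions are usually combined (the component values below are made up, and I am assuming the common GUM-style rule of adding variances in quadrature):

```python
import math

# Hypothetical standard-uncertainty components (made-up values)
u_random = 0.012       # scatter of repeated readings
u_instrument = 0.008   # spread across an ensemble of instruments (uncorrected instrument bias)
u_environment = 0.005  # e.g. a/sqrt(3) for an assumed rectangular range of +/- a (GUM Type B)

# Combined standard uncertainty: add the variances, then take the square root
u_combined = math.sqrt(u_random**2 + u_instrument**2 + u_environment**2)
print(f"combined standard uncertainty = {u_combined:.4f}")
```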
 
  • #7
FactChecker said:
You should say "infinitely many measurements" rather than "infinite measurements".
I wouldn't worry too much about actually having infinitely many measurements. That never happens. But you can talk about the mean of the theoretical random distribution.
I thought that your second reference had a reasonably good introduction to the systematic bias of measurements. Do you want more than that?
Edit: By "infinite measurements" I meant a very large number of measurements. I know it is not possible to measure infinitely many times; that's why I wrote "theoretically", not practically. What I wanted to know is, for example, what does the population standard deviation show about the population mean when no bias component is involved? Because then we can safely take the population mean as the true value, and there is no need to report a standard deviation if it relates only to the true value and not to the random measurements taken. (Population mean: ISO 3534-2, 1.2.1.)
 
  • #8
Stephen Tashi said:
If you want to study the science of measurement from the mathematical point of view, you should study mathematical statistics, in particular the theory of statistical estimation.
Would you like to suggest some good books or articles that deal with statistics in measurements?
 
  • #9
Dale said:
Uncertainty describes the difference between multiple measurements.
If it is not related to the true or actual value of the measurement, then what is the use of uncertainty in reporting a measurement?
 
  • #10
Govind said:
If it is not related to the true or actual value of the measurement, then what is the use of uncertainty in reporting a measurement?
As far as usefulness, you have this completely backwards. The true value is useless because it is unknowable.

Only the uncertainty is useful. It is useful because it directly tells you how reproducible your measurements are. It reflects the reality of your measurement.

Error doesn’t reflect reality. It depends on the true value, which is a fiction that exists only in your mental model. That is why it is not useful.
 
  • #11
Govind said:
Edit: By "infinite measurements" I meant a very large number of measurements. I know it is not possible to measure infinitely many times; that's why I wrote "theoretically", not practically. What I wanted to know is, for example, what does the population standard deviation show about the population mean when no bias component is involved? Because then we can safely take the population mean as the true value, and there is no need to report a standard deviation if it relates only to the true value and not to the random measurements taken. (Population mean: ISO 3534-2, 1.2.1.)
Yes, in the simplest context, the population mean is a true, single number, not a random variable. So the mean has no standard deviation. But that is not the same as the population standard deviation. The population is the entire collection of all possible results of a random variable and it does have a standard deviation.
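A small simulation may help illustrate this (the distribution parameters are invented): the population mean is one fixed number, but the mean of a sample of size n varies from sample to sample, with a spread of about SD/√n.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_mean, pop_sd = 10.0, 2.0   # fixed parameters of a hypothetical distribution
n, repeats = 25, 10_000

# Each sample of size n yields a different sample mean: the sample mean is a random variable.
sample_means = rng.normal(pop_mean, pop_sd, size=(repeats, n)).mean(axis=1)

print(f"spread of sample means  = {sample_means.std(ddof=1):.3f}")
print(f"theory: pop_sd/sqrt(n)  = {pop_sd / np.sqrt(n):.3f}")
# The population mean itself (10.0) is a single fixed number with no standard deviation.
```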
 
  • #12
There can be conflicts between mathematical definitions and definitions used to describe lab work. For example, consider the terms "bias", "true", and "error".

In mathematics, it makes no sense to talk about the "bias" of a population parameter. The mathematical use of the term "bias" applies to the (arithmetical) difference between the mean of some estimator's distribution and the parameter of the population that it attempts to estimate. So to talk about "bias" you must talk about two different populations. One population is the population being sampled. The other population is the outcomes of the estimator. In that context the "true" value is the value of the population parameter for the population being sampled. The difference between one outcome of the estimator and the "true" value of population being sampled is the "error" of that outcome.
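A small numerical illustration of that mathematical sense of "bias" (the distribution and sample size are invented): the uncorrected sample variance is a biased estimator of the population variance, while the n−1 version is not.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 4.0            # variance of the population being sampled
n, repeats = 5, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(repeats, n))

# Average the estimator over many samples to approximate the mean of its distribution.
var_uncorrected = samples.var(axis=1, ddof=0).mean()   # divides by n
var_corrected = samples.var(axis=1, ddof=1).mean()     # divides by n - 1

print(f"E[uncorrected sample variance] ~ {var_uncorrected:.3f}  (bias ~ {var_uncorrected - sigma2:+.3f})")
print(f"E[corrected sample variance]   ~ {var_corrected:.3f}  (bias ~ {var_corrected - sigma2:+.3f})")
```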

However, in the context of lab work, you could have a situation where a scale reads 0 when the actual mass on the scale is 1 gram. So all your measurements (which may also have random errors) are "biased" by 1 gram, and the mean of the population of measurements is "biased" by -1 gram. In that context the "true" value may refer to the actual mass of the thing being measured. And "error" may refer to the difference between a measurement and the actual value of the thing being measured.

To further complicate matters, there are (at least) two different scenarios in lab work. In one scenario, you take a population of measurements on the same object or on actually identical objects, so the "true" value of the thing being measured is constant. In a different scenario, you take measurements on each member of a population of different objects, so the "true" value involved in an individual measurement varies. If your interest is in the mean value of the property being measured taken over the population of different objects, then you may refer to this mean value as the "true" value, and that "true" value is constant.
 
  • #13
Govind said:
If it is not related to the true or actual value of the measurement, then what is the use of uncertainty in reporting a measurement?
Here's an example. There are, on average, 465 beans in a can of baked beans. (I found that out recently!)

If you take a sample of 100 cans (say), there is some uncertainty about how many beans you'll find in each can. You might find exactly 465; you might find slightly more or fewer; and occasionally you might find considerably more or fewer. The uncertainty is part of the distribution of beans per can and is related to the production process.

Suppose you have a process to count the beans. That process may not be perfect. For example, if you had to count the beans in every can, you might get bored and lose count. The data you report may additionally contain your counting errors. There might be 462 beans in a can and you report that there were 461. That's an error.

I must admit I don't know what @Dale means by the true value is unknowable. We know lots of data precisely. For example, we know precisely how many runs each test batsman has scored in his or her career. Every time a batsman comes to the crease there is great uncertainty in how many runs they will score. But, there are generally very few errors in the scoring system.
 
  • #14
PeroK said:
I must admit I don't know what @Dale means by the true value is unknowable.
There is always the possibility that all of your measuring devices have the same systematic bias. You can never know that is not the case. For example, if Lorentz aether theory is true and the earth is moving through the aether then our clocks all run slow. Because they are all slow by the same amount averaging does not reduce the error and we cannot detect this error regardless of the precision of our measurements.
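A toy simulation of that point (values invented; the shared bias is visible here only because we put it in): averaging drives the random scatter down, but a bias common to every device survives no matter how many readings we take.

```python
import numpy as np

rng = np.random.default_rng(5)
true_value = 1.000
shared_bias = 0.010   # hypothetical bias common to every device; invisible to averaging
noise_sd = 0.050      # random scatter of individual readings

for n in (10, 1_000, 100_000):
    readings = true_value + shared_bias + rng.normal(0.0, noise_sd, size=n)
    print(f"n={n:>6}: mean - true value = {readings.mean() - true_value:+.4f}")
```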
 
  • #15
Dale said:
There is always the possibility that all of your measuring devices have the same systematic bias. You can never know that is not the case. For example, if Lorentz aether theory is true and the earth is moving through the aether then our clocks all run slow. Because they are all slow by the same amount averaging does not reduce the error and we cannot detect this error regardless of the precision of our measurements.
This is in the mathematics forum and there is plenty of data that is well-defined.
 
  • #16
Now I am done with my doubts, and I am posting what I learnt through your posts and from a book I have recently read half of: Taylor, An Introduction to Error Analysis.

FactChecker said:
Yes, in the simplest context, the population mean is a true, single number, not a random variable. So the mean has no standard deviation. But that is not the same as the population standard deviation. The population is the entire collection of all possible results of a random variable and it does have a standard deviation.
That's a concept I was missing. I didn't know the difference between the standard deviation of the mean (also known as the standard error, SD/√N) and the standard deviation. Both are deviations, but in different senses. The former describes how close our mean is to the conventional true value (the population mean), and the latter describes the dispersion of the measurements.

Taylor, in his book (chapters 4 and 5), describes that the uncertainty reported for a measurement should be the standard deviation of the mean, not the standard deviation (SD).
[Attachment: Screenshot_20230909-231943~2.png]

Further, he reports a measurement as mean ± standard deviation of the mean in an example.
________________________________________________________________________________________

Screenshot_20230909-235124~2.png
________________________________________________________________________________________

Now, according to the formula, which is inversely proportional to the square root of the number of measurements, we would have a negligible standard error in the mean if we took a large number of measurements (uncertainty ≈ 0). But that doesn't mean that the standard deviation would be negligible!
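A quick numerical check of that point (made-up "pendulum-like" data): as N grows, the standard deviation of the mean shrinks like 1/√N, while the standard deviation of the individual readings stays put.

```python
import numpy as np

rng = np.random.default_rng(2)
true_sd = 0.03   # hypothetical scatter of individual readings

for n in (10, 100, 10_000):
    x = rng.normal(1.50, true_sd, size=n)   # invented period-like readings, in seconds
    sd = x.std(ddof=1)
    sdom = sd / np.sqrt(n)                  # standard deviation of the mean (standard error)
    print(f"N={n:>6}:  SD = {sd:.4f}   SDOM = {sdom:.5f}")
```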
And with uncertainty taken as SD/√N as described above, the GUM description of uncertainty ("simply an estimate of the likelihood of nearness to the best value that is consistent with presently available knowledge") would be the best definition of uncertainty, valid for a small or a large number of measurements. Sometimes we are not aware of some bias components of the error, and in that case the range about the mean, taken as the uncertainty, definitely would not show the true value, but it does describe the best estimate that can be made under those conditions.

Stephen Tashi said:
There can be conflicts between mathematical definitions and definitions used to describe lab work.

Stephen Tashi said:
To further complicate matters, there are (at least) two different scenarios in lab work. In one scenario, you take a population of measurements on the same object or on actually identical objects, so the "true" value of the thing being measured is constant. In a different scenario, you take measurements on each member of a population of different objects, so the "true" value involved in an individual measurement varies.

@Stephen Tashi in his answer cleared up another doubt: the meaning of a term can vary between maths and physics.

Math: In a classroom of 100 students, if I report a measurement of their performance on a test out of 10, we would not use the word uncertainty in the report, because each student's mark is exact, i.e. there is no error for an uncertainty to be associated with. So in that case we would report the performance as average score ± standard deviation, not as average score ± standard error. One more point: the standard deviation here shows that about 68 of the students' marks lie within one SD of the average.

Physics: Suppose I have to measure the period of a pendulum, and I take 100 measurements under similar conditions. Unlike the students' test scores, these 100 periods are not exact; they have a random error (and possibly also a systematic component) associated with them. So I would report the period as average ± standard error, and this report will not necessarily reflect the true value (as bias may be included). The standard deviation here would simply represent that if someone under similar conditions measures the period, 68% of his/her readings will fall within one SD of the mean.

(In the classroom example we are using statistics for the different students' scores, and each individual score is known perfectly. But in the period measurement there is only one true value, and we are just trying to get close to it by repeating measurements, since there are unknown errors that could take us away from the true value.)

That's all I had to say. Correct me if I am wrong somewhere!

@BWV @FactChecker @Dale @Stephen Tashi @PeroK
In poetic words, my thanks I convey,
For your guidance, lighting up my way.
Thank you so much.
 
  • #17
I'm sorry. I should have been more careful to distinguish between the mean of a distribution and the mean of a sample from a distribution.
The mean of a distribution is a single number, not a random variable. It is a fixed parameter of the distribution and it has no standard deviation.
The mean of a sample from a distribution is a random variable. It is different for each sample and it has a standard deviation.
 
  • #18
PeroK said:
there is plenty of data that is well-defined
Not in the sense that a true value is knowable. You can always postulate some hidden agent systematically biasing your data in an undetectable manner. There is simply no way to know that there is no undetectable source of bias.
 
  • #19
Govind said:
That's all I had to say. Correct me if I am wrong somewhere!

The standard deviation here would simply represent that if someone under similar conditions measures the period, 68% of his/her readings will fall within one SD of the mean.

Some texts with a practical focus would agree with that statement, but in a technical sense, it is not mathematically correct. The probability of something is not a guarantee about how often it will happen. We can't make the guarantee implied by the phrase "will fall". For that reason, many practical texts add a few weasel words and say things like "We expect 68% of the readings to fall within plus or minus 1 SD of the mean value" or "About 68% of the readings will fall within plus or minus 1 SD of the mean value".
Whether those imprecise statements are mathematically correct depends on how you interpret them.

Given values of probabilities only imply other facts about probabilities. They don't guarantee anything about how often things must happen. You could say "There is (about) a 0.68 probability that a reading will fall within plus or minus 1 SD of the mean value".

If you have studied elementary probability, you may have studied problems where there is a coin whose probability of landing heads is 0.5 and we toss it a given number of times. This scenario involves questions like "What is the probability that a fair coin tossed 8 times will fall heads exactly 3 times?". The fact that correct answers to such problems are nonzero should remind you that the probability of 0.5 is not a guarantee that the coin "will fall" heads 50% of the time. If we could make that guarantee then the probability of getting 3 heads in 8 tosses should be zero.
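For concreteness, the exact value of that probability (just the standard binomial formula):

```python
from math import comb

# P(exactly k heads in n fair tosses) = C(n, k) * 0.5**n
n, k = 8, 3
p = comb(n, k) * 0.5**n
print(f"P(exactly {k} heads in {n} tosses) = {p:.4f}")   # 56/256 = 0.21875, clearly nonzero
```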

A student can make a certain amount of progress in statistics by thinking of probabilities as measured frequencies. However, if you study statistics to the point where it is a logically coherent set of concepts, you must eventually make the distinction between probabilities versus frequencies.
 
  • #20
Stephen Tashi said:
If you have studied elementary probability, you may have studied problems where there is a coin whose probability of landing heads is 0.5 and we toss it a given number of times. This scenario involves questions like "What is the probability that a fair coin tossed 8 times will fall heads exactly 3 times?". The fact that correct answers to such problems are nonzero should remind you that the probability of 0.5 is not a guarantee that the coin "will fall" heads 50% of the time. If we could make that guarantee then the probability of getting 3 heads in 8 tosses should be zero.

Stephen Tashi said:
There is (about) a 0.68 probability that a reading will fall within plus or minus 1 SD of the mean value.
Both your statements are absolutely correct, no doubt. But what I meant there is that a very large number of measurements reflects the mathematical probability. E.g. tossing a coin 2 times may give only one outcome (heads or tails) 100% of the time, but if you increase the number from 2 to 200 we would be close to 50%, and a further increase in tosses would get even closer to 50%.
Only with an infinite number of measurements (which is not possible) would the result surely reflect the mathematical probability.
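Purely as an empirical illustration of that tendency (one simulated run; nothing here is a guarantee for any particular sequence of tosses):

```python
import numpy as np

rng = np.random.default_rng(3)

for n in (2, 200, 20_000):
    heads = rng.integers(0, 2, size=n).sum()   # 0 = tails, 1 = heads
    print(f"n={n:>6}: observed frequency of heads = {heads / n:.3f}")
```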
So stating the SD in terms of probability would be better, and it would apply to the maths case as well: if we pick any student at random, even though we don't have any clue about his score, we would predict that his score is probably (0.68 probability) within 1 SD, and with 0.95 probability within 2 SD.
Hey, I just got a doubt: in the case of students' scores, would it be correct to say that there is a 68% chance of a score being within 1 SD, or does the percentage here show something else, as with physical measurements and coin tossing?
 
  • #21
Govind said:
Hey, I just got a doubt: in the case of students' scores, would it be correct to say that there is a 68% chance of a score being within 1 SD, or does the percentage here show something else, as with physical measurements and coin tossing?
I would prefer to say that "on average, 68% will be within 1 SD", where the implication is that the average is over an infinite number of trials. It makes no statement about how often you should expect that in any particular trial. To say "we expect" could imply that you really expect something to happen that might often be false.

CORRECTION: Where random behavior is involved, it is not correct to say that "on average, 68% will be within 1 SD". Any statement like that is too strong. In general, you can't make absolute statements like that; you can only make statements that can be reasonably counted on.
 
  • #22
Govind said:
but if you increase the number from 2 to 200 we would be close to 50%, and a further increase in tosses would get even closer to 50%.
Only with an infinite number of measurements (which is not possible) would the result surely reflect the mathematical probability.

That is not a correct statement of a mathematical theorem. It sounds like an empirical formulation of the Law of Large Numbers, but probability theory provides no guarantees about asymptotic frequencies. It only provides results about asymptotic probabilities. And probability theory says nothing about an "infinite number of trials". It only talks about limits of sequences of probabilities.

In making practical decisions, a person may assume that a probability is equivalent to the frequency with which something must happen, but this decision must be justified by practical experience or the science of the particular field where you apply probability theory. It cannot be justified by mathematical probability theory itself.
 

1. What is measurement uncertainty?

Measurement uncertainty refers to the potential error or variability in a measurement. It is the degree of doubt or lack of confidence in the results obtained from a measurement.

2. What factors contribute to measurement uncertainty?

There are several factors that can contribute to measurement uncertainty, including the precision and accuracy of the measuring instrument, the skill of the person conducting the measurement, environmental conditions, and the inherent variability of the object being measured.

3. How is measurement uncertainty calculated?

Measurement uncertainty is typically calculated using statistical methods, such as standard deviation or confidence intervals. These calculations take into account the various sources of uncertainty and provide a range of values within which the true measurement is likely to fall.
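As a rough sketch of one such calculation (the readings are invented, and the coverage factor k = 2 for an approximately 95% interval is an assumption):

```python
import numpy as np

# Hypothetical repeated readings of the same quantity
readings = np.array([10.02, 9.98, 10.05, 10.01, 9.97, 10.03, 10.00, 9.99])

mean = readings.mean()
sd = readings.std(ddof=1)          # standard deviation of individual readings
u = sd / np.sqrt(readings.size)    # standard uncertainty of the mean
U = 2 * u                          # expanded uncertainty, coverage factor k = 2 (~95%)

print(f"result: {mean:.3f} +/- {U:.3f}  (k = 2)")
```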

4. Why is measurement uncertainty important?

Measurement uncertainty is important because it helps to determine the reliability and accuracy of a measurement. It also allows scientists to assess the level of confidence in their results and make informed decisions based on the potential variability in the data.

5. How can measurement uncertainty be reduced?

Measurement uncertainty can be reduced by using more precise and accurate measuring instruments, improving measurement techniques, and controlling environmental conditions. It is also important to properly document and analyze sources of uncertainty to minimize their impact on the final measurement.
