
## Main Question or Discussion Point

Hello! I am a bit confused about systematic errors in experiments. For example, say that a mass scale was not calibrated and always reads 1 g too high, and I want to publish a paper quoting measurements made with that scale. Let's assume that the value obtained, without taking the systematic into account, is $250\pm 2$ g, where the error here is statistical. In many papers I read (most of them in experimental particle physics) the results carry both statistical and systematic uncertainties, so if I am to follow that convention, I would quote my final result (once I realize my calibration problem) as $250\pm(2)_{stat}\pm(1)_{sys}$ g.

My question is: why can't I subtract 1 g from my result and state it as $249\pm 2$ g? The calibration would affect just the mean, not the error. Also, using $\pm$ for a systematic error seems a bit weird, given that I know the direction of the bias. So overall: once you state a systematic error in your result (in a paper), it means that you are aware of it and of the direction of the bias it creates, so why not just correct the mean value and keep only the statistical error? Or, at least, why not specify the direction of the systematic error, if you want to be fair and include it in the final result? Thank you!

mfb
Mentor
If you know that your scale shows one gram too much then you subtract one gram and don't assign a systematic uncertainty. But how often do you know the exact deviation of your scale?
If you state a systematic uncertainty you don't know what is right. The manufacturer of the scale might guarantee that the scale doesn't deviate by more than one gram. But that doesn't tell you which bias your specific scale has.

> If you know that your scale shows one gram too much then you subtract one gram and don't assign a systematic uncertainty. But how often do you know the exact deviation of your scale?
> If you state a systematic uncertainty you don't know what is right. The manufacturer of the scale might guarantee that the scale doesn't deviate by more than one gram. But that doesn't tell you which bias your specific scale has.
Thank you for your reply. I see what you mean. However, if that is the case and the scale deviates by at most 1 gram, without knowing the direction of the deviation, wouldn't that become a statistical error? I thought (and I could be wrong, I am really looking for an explanation) that systematics are errors that always push in a given direction only. If the error goes both ways, how is it different from a statistical error?

mfb
Mentor
All your readings with that scale will be off by the same unknown amount in the same direction. It is a systematic error, measuring more often won't help you with the uncertainty.
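This point can be checked with a quick simulation: averaging many readings shrinks the statistical scatter but leaves the fixed offset untouched. A minimal sketch with hypothetical numbers (the true mass, bias, and noise level are all assumptions):

```python
import random

random.seed(0)

TRUE_MASS = 249.0  # hypothetical true mass in grams
BIAS = 1.0         # fixed offset of the scale, unknown to the experimenter
NOISE_SD = 2.0     # statistical spread of a single reading

def reading():
    # every reading carries the same bias plus independent random noise
    return TRUE_MASS + BIAS + random.gauss(0.0, NOISE_SD)

for n in (10, 1000, 100000):
    mean = sum(reading() for _ in range(n)) / n
    print(f"n = {n:6d}, mean = {mean:.3f} g")
```

No matter how large n gets, the mean converges to 250 g (true mass plus bias), never to the true 249 g: repetition beats down the statistical uncertainty but not the systematic one.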

There are cases where the difference between the two is a bit unclear, but the scale example is not one of them.

As an example where it is not so clear: sometimes simulations are needed to evaluate the probability of detecting particle X. You simulate a million particles and find that some fraction c is detected, which you use as your estimate. That fraction c has an associated uncertainty from statistics. Is its impact on the final measurement (in data) a statistical uncertainty? It comes from statistics, but it won't shrink if we take more data, because it is independent of the size of the experimental dataset.
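The binomial uncertainty on such a simulated efficiency is easy to write down. The counts below are invented for illustration:

```python
import math

N_SIM = 1_000_000     # simulated particles (hypothetical)
N_DETECTED = 823_400  # of which this many were detected (hypothetical)

c = N_DETECTED / N_SIM                      # efficiency estimate
sigma_c = math.sqrt(c * (1.0 - c) / N_SIM)  # binomial uncertainty on c

print(f"c = {c:.4f} +/- {sigma_c:.4f}")
```

This sigma_c is frozen once the simulation is done; collecting more experimental data leaves it unchanged, which is why it is usually bundled with the systematic uncertainties even though it originates from statistics.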

Dale
Mentor
> However, if that is the case and the scale deviates by at most 1 gram, without knowing the direction of the deviation, wouldn't that become a statistical error? I thought (and I could be wrong, I am really looking for an explanation) that systematics are errors that always push in a given direction only.
See https://www.nist.gov/sites/default/files/documents/2017/05/09/tn1297s.pdf which is a simple digest of the definitive reference on handling of uncertainty. For this question you will want to focus on sections 2.2 and 2.3.

Uncertainties are classified into two groups: those whose value is determined statistically and those whose value is not. There is no physical difference between the two; the difference is how you as a researcher determine them. The categorization of "systematic" or "random" is no longer used.

If you measure the uncertainty of a balance by repeatedly measuring the same mass and calculating the standard deviation then that is a statistical uncertainty. If the manufacturer of the balance reports the uncertainty of the balance in their documentation and you just use that number then that is a non-statistical uncertainty. Both represent the same physical phenomena, but that is not important. The classification is based on how the user obtains the estimate of the uncertainty.
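A Type A evaluation of the balance's uncertainty could look like this (the readings are invented for illustration):

```python
import statistics

# repeated readings of the same mass on the same balance (hypothetical, grams)
readings = [250.3, 249.1, 251.8, 248.7, 250.9, 249.6, 250.2, 251.1]

mean = statistics.fmean(readings)
s = statistics.stdev(readings)     # sample standard deviation (Type A)
u_mean = s / len(readings) ** 0.5  # standard uncertainty of the mean

print(f"mean = {mean:.2f} g, u = {u_mean:.2f} g")
```

A Type B evaluation would instead take the number straight from the manufacturer's specification sheet; no statistics on your own data are involved.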

Known biases do not contribute to uncertainty. The measurement is always assumed to be corrected for known biases. Unknown biases are not known to be either high or low so they contribute uncertainty. That uncertainty could be either type.
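In the original example this works out as follows: correct for the known 1 g bias, then combine whatever uncertainty remains in quadrature. The residual calibration uncertainty of 0.5 g below is an assumed number for illustration:

```python
import math

raw_mean = 250.0  # grams, as read from the scale
known_bias = 1.0  # scale is known to read 1 g high
u_stat = 2.0      # statistical uncertainty (grams)
u_cal = 0.5       # residual calibration uncertainty after correction (assumed)

corrected = raw_mean - known_bias             # apply the known-bias correction
u_combined = math.sqrt(u_stat**2 + u_cal**2)  # quadrature (root-sum-square)

print(f"{corrected:.1f} +/- {u_combined:.2f} g")
```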

WWGD
Gold Member
From what I understand, an error is systematic when a process deviates beyond a pre-set limit in a control chart; the process is then said to be out of control, and you need to pick it apart to determine the source of the error. Below that pre-set threshold, it is random variability, and trying to eliminate random variability is counterproductive.

Stephen Tashi
> In many papers I read (most of them in experimental particle physics) the errors have both statistical and systematic uncertainties,
You should give a clear example of such a situation - perhaps a link to one of the papers.

Suppose we have fit a function $y = f(x)$ to data, where $x$ represents some physical quantity (such as energy) and $f(x)$ represents a parameter of a stochastic process that takes place at that energy. For example, $f(x)$ might represent the mean value of a Poisson random variable when the energy is $x$.

Consider a datum $(x_0, y_0)$ from an experiment. Let $Y_0$ denote the actual mean value of the Poisson random variable at energy $x_0$. If we consider the deviation of a Poisson random variable from its mean to be an "error", this error is $y_0 - Y_0$. We can express $y_0 - Y_0$ in the form $y_0 - Y_0 = (y_0 - f(x_0)) + (f(x_0) - Y_0)$. The term $(f(x_0)- Y_0)$ is a systematic error in the sense that it is caused by $f(x_0)$ predicting the wrong value for $Y_0$ by the same amount on every repetition of the experiment. The term $(y_0 - f(x_0))$ is a random error in the sense that it represents a random deviation of a Poisson random variable from a constant value.
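This decomposition can be illustrated numerically. The sketch below uses an invented linear model f and an invented true mean Y0, and draws Poisson samples with Knuth's algorithm, since the Python standard library has no Poisson sampler. The random term changes on every repetition, while the systematic term is identical every time, and the two always sum to the total error $y_0 - Y_0$:

```python
import math
import random

random.seed(1)

def f(x):
    # hypothetical fitted model for the Poisson mean at energy x
    return 2.0 * x + 1.0

def poisson(lam):
    # Knuth's algorithm for sampling a Poisson random variable
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

x0 = 3.0  # energy of the datum (hypothetical)
Y0 = 7.5  # actual Poisson mean at x0; the model f(x0) = 7.0 is slightly off

reps = 50_000
samples = [poisson(Y0) for _ in range(reps)]

random_errors = [y - f(x0) for y in samples]  # varies from repetition to repetition
systematic_error = f(x0) - Y0                 # the same on every repetition

mean_random = sum(random_errors) / reps
# per repetition the two terms sum to the total error y0 - Y0,
# which vanishes on average since E[y0] = Y0
print(f"mean random term = {mean_random:+.3f}, systematic term = {systematic_error:+.1f}")
```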

Dale
Mentor
So the current approach advocated by NIST is to refer not to errors but to uncertainty. This is not just a change in wording; it is a recognition that we fundamentally never know the value of the measurand, so we cannot know the value of the error. However, based on our measurement we can say that we believe the value of the measurand lies within some (hopefully small) range of our measurement result. The measured value and our uncertainty about the measurand are what we can actually report; neither the true value of the measurand nor the error can be known.

The NIST also does not classify uncertainty in terms of "random" or "systematic" but rather according to how they are estimated. If the uncertainty is estimated by statistical methods then it is a "type A" uncertainty and if the uncertainty is estimated by any other method then it is a "type B" uncertainty.

Although there is a lot of outdated information available about uncertainty, I would recommend using the current scientific best-practices whenever possible.