RMS or RSS for Characterizing Measurement Uncertainty?

  • Thread starter: senmeis
  • Tags: RMS, Uncertainty
AI Thread Summary
The discussion centers on the differences between using Root Mean Square (RMS) and Root Sum of Squares (RSS) for characterizing measurement uncertainty. RMS is associated with the average spread of values from measurements, while RSS is used to determine the spread of a function of random variables, accounting for their individual uncertainties. The choice between RMS and RSS depends on whether the focus is on the total measurement uncertainty or the average of measurements. It is emphasized that RSS is appropriate when considering multiple independent uncertainties contributing to a final result. The conversation highlights the need for clarity in defining the type of uncertainty being measured to select the correct method.
senmeis
TL;DR Summary
RMS; RSS
Hi,

the following statement comes from a Keysight spectrum analyzer document:

The sources of uncertainty can be considered independent variables, so it is likely that some errors will be positive while others will be negative. Therefore, a common practice is to calculate the root sum of squares (RSS) error.

Question: What happens if RMS is used instead of RSS to characterize uncertainty? The only difference between RMS and RSS is a factor of ##\sqrt{1/n}## in the RMS.
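As a quick numerical sketch of that relation (the error values here are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical error contributions (illustrative values only)
errors = [0.3, 0.4, 0.2, 0.5]

rss = math.sqrt(sum(e**2 for e in errors))                 # root sum of squares
rms = math.sqrt(sum(e**2 for e in errors) / len(errors))   # root mean square

# RMS differs from RSS only by the factor sqrt(1/n)
assert abs(rms - rss / math.sqrt(len(errors))) < 1e-12
```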
 
Depends on what is calculated ...
 
The RMS is really the standard deviation: an estimate of the dispersion, or average spread, of the distribution of values of a random variable, as obtained from measurements of that variable. It is computed from specific measurements. As in any average of measurements, the more you have, the more accurate the estimate, which leads to the 1/N factor (actually 1/(N-1)) in the definition. The standard deviation is taken as the measure of uncertainty of that variable. Thus
$$ \sigma = \sqrt{\frac{1}{n-1}\sum_{i}{(x_{i}-\overline{x})^{2}} } $$ with
## \overline{x} ## being the mean value of the measurements.
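A minimal sketch of this estimator, using hypothetical repeated measurements:

```python
import math

def sample_std(xs):
    """Sample standard deviation with the 1/(n-1) (Bessel) correction."""
    n = len(xs)
    mean = sum(xs) / n
    return math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

# Hypothetical repeated measurements of the same quantity
measurements = [10.1, 9.9, 10.2, 10.0, 9.8]
sigma = sample_std(measurements)
```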

The root sum of squares is used to determine the spread of the value of a function of random variables. The components of the RSS are the estimated uncertainties of the individual random variables, each weighted by its impact on the uncertainty of the function. Thus
$$
\sigma_{F} = \sqrt{\sum_{i=1}^{n} \left ( \frac{\partial F}{\partial x_i} \right )^{2} \sigma_{i}^{2} }
$$
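For instance, propagating uncertainties through a simple product ##F(x,y)=xy## with the formula above (hypothetical values; the partial derivatives are ##\partial F/\partial x = y## and ##\partial F/\partial y = x##):

```python
import math

# Hypothetical measured values with their individual uncertainties
x, sigma_x = 2.0, 0.1
y, sigma_y = 3.0, 0.2

# F = x*y, so dF/dx = y and dF/dy = x; the RSS combines both contributions
sigma_F = math.sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)
```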

The RSS contains the factor 1/√N intrinsically in the ##\sigma_i##, since the uncertainty components in the RSS expression can be determined from individual measurements of each random variable, or in some other manner where N is not a consideration.
 
Last edited:
Shouldn’t you write sigma above instead of sigma squared?
 
Yep, corrected them.
 
Please read the MATLAB documentation at

RMS

RSS
The section "More About" has a different form of the calculation. Which form is the original definition?
 
senmeis said:
Question: What happens if RMS instead of RSS is used to characterize uncertainty?

It isn't clear what you mean by "what happens". Also, the meaning of "characterize uncertainty" varies from one field of study to another. Can you frame a more specific question?
 
Thanks for the link -- clarifies the situation considerably.
The authors correctly use the term RSS to establish an accuracy for a result where a bunch of uncertainties contribute to the final uncertainty in the result. If there are six equal contributions, the final uncertainty is the root of the sum of six squares. No argument to divide by ##\sqrt 6##.

Simple example: six equal terms of 1% each give a combined uncertainty of ##\sqrt 6## % ≈ 2.4%, not 1%.
(*)
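The arithmetic of that example, sketched in code:

```python
import math

contributions = [1.0] * 6   # six equal 1 % error contributions

rss = math.sqrt(sum(c**2 for c in contributions))   # combined uncertainty, sqrt(6) %
rms = rss / math.sqrt(len(contributions))           # average contribution per source
```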

The subject at hand is relative measurements, so it's all in dB, except the calibrator accuracy. Page 42 clearly discusses the procedure
Keysight p 42 said:
It is best to consider all known uncertainties and then determine which ones can be ignored when making a certain type of measurement.
(*) Taking the RMS, i.e. dividing by ##\sqrt 6##, would give you the average contribution per error source.
 
senmeis said:
I think uncertainty exists in every measurement so this term has a generic sense.

However, to choose between two different mathematical calculations for uncertainty requires a specific definition.

Suppose we have measurements ##X_1,X_2,X_3##. We may be concerned with the "uncertainty" in the total measurement ##X_1 + X_2 + X_3##, or we may be concerned with the "uncertainty" in the average of the measurements ##\frac{X_1 + X_2 + X_3}{3}##.

As I interpret tables 4-2 and 4-3 in your link, the concern there is with the total of the measurements. So the RSS is used since it represents the standard deviation of the sum of individual random variables based on the assumption they are mutually independent and each has mean zero.

If you did 3 independent experiments where you measured the dB of the fundamental of a signal at 10 GHz under identical conditions, and this data was ##X_1,X_2,X_3##, then you would average those measurements to obtain an estimate of the "true" or typical dB value of the signal under those conditions. The quantity of interest is ##\frac{X_1 + X_2 + X_3}{3}##. The RMS of the measurements characterizes the uncertainty in that average, again based on the assumption that the mean error in a measurement is zero.
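A sketch of that scenario with hypothetical dB readings, using the sample standard deviation divided by ##\sqrt n## as the uncertainty of the average:

```python
import math

# Three hypothetical independent dB readings of the same signal
X = [-10.2, -10.0, -10.4]

mean = sum(X) / len(X)
s = math.sqrt(sum((x - mean) ** 2 for x in X) / (len(X) - 1))  # sample std dev
sem = s / math.sqrt(len(X))   # uncertainty of the average (standard error)
```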
 
