# Rigorous Uncertainty Analysis


#### 012anonymousx

A position of a particle in linear motion is given by:
x = vt + 0.5at²

Calculate x with the error for:
t = 25.3 ± 0.5 s
v = 10.1 ± 0.4 m/s
a = 2.5 ± 0.3 m/s²

So for calculating vt:

q = (10.1) (25.3) = 255.53 (exact)

Δq = (10.1)(25.3) × √((0.4/10.1)² + (0.5/25.3)²)
= 11.31...

Therefore, vt = 255.53 ± 11.3 (the 11.3 is rounded because I heard uncertainties are always rounded to a fixed precision)
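As a quick sanity check, the quadrature sum above can be reproduced in a few lines of Python (the numbers are the ones from this post):

```python
import math

# For a product q = v*t, relative uncertainties add in quadrature:
#   dq/q = sqrt((dv/v)^2 + (dt/t)^2)
v, dv = 10.1, 0.4   # m/s
t, dt = 25.3, 0.5   # s

q = v * t
dq = q * math.sqrt((dv / v) ** 2 + (dt / t) ** 2)
print(f"q = {q:.2f} ± {dq:.2f}")  # q = 255.53 ± 11.31
```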

Now my question is:

How do you round 255.53? And how many sig figs?

Is it 260.0 ± 11.3?
Or 255.5 ± 11.3?
Or something else?
And why?

I thought 260.0

Because when a number is given to a certain decimal place with no uncertainty, like 123.4, it is implied that:
123.4 ± 0.1
So the number is rounded to the lowest precision of the uncertainty (I think), i.e. if the uncertainty were ± 0.12, the rounding would still occur at the tenths place.
In the case of 255.53 ± 11.3, that is the tens place.
Therefore 260.0

I'd appreciate it if someone would type up a full solution (it doesn't have to be pretty).

I guess a similar question is let's say you have the number
1234.5678 +/- 123.45
How should 1234.5678 be rounded?

Using a common-sense approach, 255.53 (or even 255) is preferable. If you attach the errors, you are choosing between the intervals [244.23, 266.83] and [248.7, 271.3]; quoting only the second would be quite misleading.

If you look up any measured physics data, you will see that the error term is usually at least two significant figures.

012anonymousx said:
I guess a similar question is let's say you have the number
1234.5678 +/- 123.45
How should 1234.5678 be rounded?

That number makes no sense. 123.45 means you know the uncertainty to about 0.01% of its value, since you didn't write down 123.44 or 123.46. However, 1234.5678 +/- 123.45 implies that you know the central value to about 10% of its value - that is, you know the uncertainty 100x better than what you are measuring.

This almost never happens.

The number you want is most likely 1230 +/- 120
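The convention described here (round the uncertainty to a couple of significant figures, then round the central value to the same decimal place) can be sketched as a small helper; the function name is my own, not any standard API:

```python
import math

def round_with_uncertainty(value, uncertainty, sig_figs=2):
    """Round the uncertainty to sig_figs significant figures,
    then round the value to the same decimal place."""
    if uncertainty <= 0:
        raise ValueError("uncertainty must be positive")
    # Decimal position of the last kept digit of the uncertainty
    exponent = math.floor(math.log10(uncertainty))
    decimals = sig_figs - 1 - exponent
    return round(value, decimals), round(uncertainty, decimals)

print(round_with_uncertainty(1234.5678, 123.45))  # (1230.0, 120.0)
```

With the numbers from this thread, `round_with_uncertainty(255.53, 11.31)` gives 256 ± 11, matching the "error to two significant figures" convention mentioned above.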

So what about the original problem then?

255.53 ± 11.3?

012anonymousx said:
A position of a particle in linear motion is given by:
x = vt + 0.5at2

Calculate x with the error for:
t = 25.3 ± 0.5s
v = 10.1 ± 0.4m/s
a = 2.5 ± 0.3m/s2
You will want to use the standard propagation of errors formula, explained here: http://www.foothill.edu/psme/daley/tutorials_files/10. Error Propagation.pdf

##\sigma^2_x=\sigma^2_t \left( \frac{\partial x}{\partial t} \right)^2 + \sigma^2_v \left( \frac{\partial x}{\partial v} \right)^2 + \sigma^2_a \left( \frac{\partial x}{\partial a} \right)^2 ##
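Spelled out for the numbers in the OP, with the partial derivatives taken by hand (a minimal sketch of the formula above, not the linked tutorial's code):

```python
import math

# x = v*t + 0.5*a*t**2, with analytic partial derivatives:
#   dx/dt = v + a*t,  dx/dv = t,  dx/da = 0.5*t**2
t, st = 25.3, 0.5   # s
v, sv = 10.1, 0.4   # m/s
a, sa = 2.5, 0.3    # m/s^2

x = v * t + 0.5 * a * t**2
dx_dt = v + a * t
dx_dv = t
dx_da = 0.5 * t**2

sx = math.sqrt((st * dx_dt) ** 2 + (sv * dx_dv) ** 2 + (sa * dx_da) ** 2)
print(f"x = {x:.0f} ± {sx:.0f}")  # x = 1056 ± 103
```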

DaleSpam said:
You will want to use the standard propagation of errors formula, explained here: http://www.foothill.edu/psme/daley/tutorials_files/10. Error Propagation.pdf

##\sigma^2_x=\sigma^2_t \left( \frac{\partial x}{\partial t} \right)^2 + \sigma^2_v \left( \frac{\partial x}{\partial v} \right)^2 + \sigma^2_a \left( \frac{\partial x}{\partial a} \right)^2 ##

I don't understand the reasoning behind using such a complicated method.
In the case cited in the OP shouldn't you simply calculate x given the minimum values of v, a, and t. Then calculate x again using the maximum values of v, a, and t. The value and error of x should then be simply ((Xmax+Xmin)/2)±((Xmax-Xmin)/2)

mrspeedybob said:
I don't understand the reasoning behind using such a complicated method.
In the case cited in the OP shouldn't you simply calculate x given the minimum values of v, a, and t. Then calculate x again using the maximum values of v, a, and t. The value and error of x should then be simply ((Xmax+Xmin)/2)±((Xmax-Xmin)/2)
You could do that, but it would overestimate the error as well as introduce some bias. In the case of the OP your method would get x = 1060 ± 143 whereas the full method would give x = 1056 ± 103.

Your method gives an artificially high estimate of the error because it is highly unlikely that all three inputs will sit at their maximum error at the same time. In fact, errors in one variable can partially offset errors in another.
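One way to see both effects (the overestimate and the bias) is a quick Monte Carlo run, assuming the quoted uncertainties are standard deviations of independent normal errors:

```python
import random
import statistics

# Draw t, v, a from normal distributions with the quoted standard
# deviations and look at the resulting spread of x = v*t + 0.5*a*t**2.
random.seed(0)
xs = []
for _ in range(100_000):
    t = random.gauss(25.3, 0.5)
    v = random.gauss(10.1, 0.4)
    a = random.gauss(2.5, 0.3)
    xs.append(v * t + 0.5 * a * t**2)

print(round(statistics.mean(xs)), round(statistics.stdev(xs)))
```

The mean and standard deviation come out close to 1056 and 103, matching the propagation formula and well inside the min/max method's ± 143.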


## 1. What is rigorous uncertainty analysis?

Rigorous uncertainty analysis is a scientific method used to quantify and evaluate the uncertainty associated with experimental or simulation data. It involves identifying and quantifying sources of uncertainty, propagating them through mathematical models, and assessing the impact on the final results.

## 2. Why is rigorous uncertainty analysis important?

Rigorous uncertainty analysis is important because it allows scientists to understand the reliability and accuracy of their data and results. It also helps to identify areas where further research or experimentation is needed to reduce uncertainty and improve the robustness of the findings.

## 3. What are the steps involved in rigorous uncertainty analysis?

The steps involved in rigorous uncertainty analysis include: identifying sources of uncertainty, quantifying the uncertainties, propagating them through mathematical models, and assessing the impact on the final results. It also involves sensitivity analysis to determine which uncertainties have the greatest influence on the results.

## 4. How is rigorous uncertainty analysis different from sensitivity analysis?

Rigorous uncertainty analysis and sensitivity analysis are closely related but serve different purposes. Rigorous uncertainty analysis focuses on quantifying and evaluating the overall uncertainty in the data and results, while sensitivity analysis identifies which parameters have the greatest influence on the results. In other words, sensitivity analysis is a part of the rigorous uncertainty analysis process.

## 5. What are some common methods used in rigorous uncertainty analysis?

There are several methods used in rigorous uncertainty analysis, including Monte Carlo simulation, Latin hypercube sampling, and polynomial chaos expansion. Each method has its advantages and limitations, and the choice of method depends on the specific application and data available.
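As one concrete example, Latin hypercube sampling can be implemented in a few lines of standard-library Python: each input dimension is split into n equal strata, and each stratum is sampled exactly once (a sketch under that textbook definition, not a production sampler):

```python
import random

def latin_hypercube(n, dims):
    """Return n points in [0, 1)^dims such that, in every dimension,
    each of the n equal-width strata contains exactly one point."""
    columns = []
    for _ in range(dims):
        # One sample per stratum, then shuffle so dimensions pair up randomly
        column = [(i + random.random()) / n for i in range(n)]
        random.shuffle(column)
        columns.append(column)
    return list(zip(*columns))
```

Feeding such points through the inverse CDFs of the input distributions then gives a stratified alternative to plain Monte Carlo, which typically needs fewer samples for the same accuracy.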