# Experimental Uncertainty


#### Schfra

We have been using the equation attached as an image to calculate experimental uncertainty in my class. Can somebody explain exactly how it works?

Let’s say we have a value y equal to 1/x, where x is some measured quantity with some uncertainty, and let’s say that the value of x is measured to be 5.

We can say that y = 1/5 +/- some error value determined by the equation. I don’t quite understand how this works. If the uncertainty in x were 1, the greatest value of y would be 1/4, while the smallest would be 1/6. 1/4 and 1/6 are not equally far from 1/5, so how can the value of y be expressed as 1/5 +/- any single number?

#### Attachments

• D2F9DBFC-3C22-482E-A5CF-4A6D61FC970B.jpeg
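The asymmetry described in the question can be checked numerically. A minimal sketch (x = 5 and an uncertainty of 1 come from the question; the comparison with the linearized propagated uncertainty ##\sigma_y = \sigma_x/x^2## is illustrative):

```python
# Numeric check of the asymmetry for y = 1/x with x = 5 +/- 1
x = 5.0
sigma_x = 1.0

y = 1 / x                   # 0.2
y_high = 1 / (x - sigma_x)  # 1/4 = 0.25
y_low = 1 / (x + sigma_x)   # 1/6 ≈ 0.1667

# The interval around y is not symmetric:
print(y_high - y)           # ≈ 0.05 above the central value
print(y - y_low)            # ≈ 0.0333 below it

# Linearized propagation gives a single symmetric number,
# valid only for small errors: sigma_y = |dy/dx| * sigma_x
sigma_y = sigma_x / x**2
print(sigma_y)              # 0.04
```

The 0.04 from the linear formula splits the difference between the two exact deviations, which is why it only makes sense when the error is small relative to x.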
Schfra said:
We can say that y = 1/5 +/- some error value determined by the equation.
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.

Schfra
Dale said:
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.
Doesn’t the equation give the +/- value that can be added on to the end of the value of y? And if that value is some constant doesn’t that mean that the distribution is symmetric?

If not, what does the variance in y mean?

Schfra said:
Doesn’t the equation give the +/- value that can be added on to the end of the value of y?
No, it gives the variance.

Schfra said:
If not, what does the variance in y mean?
The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
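A quick illustration of the point that variance is well-defined with or without symmetry. The sample below is hypothetical, and skewness is computed from its standard definition ##E[(y-E[y])^3]/\sigma^3##:

```python
import statistics

# Hypothetical right-skewed sample (not from the thread)
data = [1, 1, 1, 2, 2, 3, 10]

mean = statistics.fmean(data)
var = statistics.pvariance(data)  # population form of E[(y - E[y])^2]

# Standardized third moment: nonzero means the distribution is asymmetric
skew = (sum((d - mean) ** 3 for d in data) / len(data)) / var ** 1.5

print(var)    # variance is perfectly well-defined here
print(skew)   # positive: the distribution is skewed to the right
```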

Schfra
Dale said:
No, it gives the variance.

The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
Why are they then reporting the value given from the above equation as the +/- value in the attached image? Doesn’t this imply a symmetry? The value can be anywhere between the value + the uncertainty and the value - the uncertainty.

#### Attachments

• 34F4E822-338F-4246-889D-F31A8A002D8B.jpeg
Schfra said:
We have been using the equation attached as an image to calculate experimental uncertainty in my class. Can somebody explain exactly how it works?

Sure. This equation uses a linearized error propagation model. It is an approximation, like small angle approximations in trigonometry. It is only valid for "small" errors.

Consider a distribution roughly centered around f(x,y,z).

If you have a function f(x,y,z) and the function is smooth, then when you zoom into a small region its graph looks locally like a straight line in each variable. So, approximately, you can say
##f(x+\delta,y,z) \approx f(x,y,z)+\delta \frac{\partial f}{\partial x}##
If you think about it, this is a first order Taylor expansion around (x,y,z).
If f is some nonlinear function, it's not going to be exactly correct. You could base your error propagation around a second order Taylor expansion if you wanted to be more accurate, or even integrate the full distribution functions if you want to be exactly correct.
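The first-order approximation can be checked numerically. A sketch using f(x) = 1/x (the function from the original question; the step sizes are arbitrary):

```python
# Compare f(x + delta) with the first-order Taylor approximation
# f(x) + delta * f'(x), for f(x) = 1/x around x = 5
def f(x):
    return 1 / x

def df(x):
    return -1 / x ** 2  # derivative of 1/x

x = 5.0
for delta in (0.1, 1.0):
    exact = f(x + delta)
    linear = f(x) + delta * df(x)
    # The gap grows as delta grows -- the linear model is only
    # good for "small" errors
    print(delta, exact, linear, abs(exact - linear))
```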

But usually, when we are doing experimental error analysis, we don't care about exactly correct error distributions, since it's like calculating an error on an error.

Edit: adding a little more detail.
If X, Y, and Z are distributions roughly centered on x, y, and z, then f(X,Y,Z) will be roughly centered on f(x,y,z). You can write X as ##X = x + \delta_x##, where ##\delta_x## is a distribution of small values with zero expected value, and analogously for Y and Z. Since we used a linear approximation, the expected value of f(X,Y,Z) is f(x,y,z), so the variance is simple to calculate:
##Var[f(X,Y,Z)] = E[f(X,Y,Z)^2] - f(x,y,z)^2##
##E[f(X,Y,Z)^2] \approx f(x,y,z)^2 + \sigma_x^2 \left(\frac{\partial f}{\partial x}\right)^2 + \sigma_y^2 \left(\frac{\partial f}{\partial y}\right)^2 + \sigma_z^2 \left(\frac{\partial f}{\partial z}\right)^2 + \text{cross terms}##
where ##\sigma_x^2 = E[\delta_x^2]## is the variance of X, and likewise for Y and Z. In many cases we can assume that X, Y, and Z are independently distributed, so we just throw away the cross terms involving covariances.
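A Monte Carlo check of the linearized variance formula. The function f and the input uncertainties below are hypothetical choices for illustration, kept small so the linear model applies:

```python
import random

random.seed(0)

# Hypothetical function of three measured quantities
def f(x, y, z):
    return x * y + z

x0, y0, z0 = 2.0, 3.0, 1.0
sx, sy, sz = 0.01, 0.02, 0.01  # small standard deviations

# Linearized prediction:
# Var[f] ≈ (df/dx)^2 sx^2 + (df/dy)^2 sy^2 + (df/dz)^2 sz^2
#        = (y0*sx)^2 + (x0*sy)^2 + (1*sz)^2
var_pred = (y0 * sx) ** 2 + (x0 * sy) ** 2 + sz ** 2

# Monte Carlo estimate with independent Gaussian inputs
samples = [
    f(random.gauss(x0, sx), random.gauss(y0, sy), random.gauss(z0, sz))
    for _ in range(100_000)
]
mean = sum(samples) / len(samples)
var_mc = sum((s - mean) ** 2 for s in samples) / len(samples)

print(var_pred, var_mc)  # the two should agree closely
```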

Schfra said:
Why are they then reporting the value given from the above equation as the +/- value in the attached image?
Look earlier in the text. It probably describes the usage of the ##\pm## symbol as “mean ##\pm## st. dev.”

Schfra said:
Doesn’t this imply a symmetry?
Not necessarily. It only implies what the text says it implies.

Schfra said:
The value can be anywhere between the value + the uncertainty and the value - the uncertainty.
For a normally distributed variable only about 68% of the values will be within plus or minus 1 standard deviation.
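This is easy to verify by sampling (a sketch; the seed and sample size are arbitrary):

```python
import random

random.seed(1)

# Draw from a standard normal and count values within one
# standard deviation of the mean
mu, sigma = 0.0, 1.0
samples = [random.gauss(mu, sigma) for _ in range(100_000)]
frac = sum(1 for s in samples if abs(s - mu) <= sigma) / len(samples)

print(frac)  # ≈ 0.68 -- far from "the value can be anywhere in mu +/- sigma"
```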

## What is experimental uncertainty?

Experimental uncertainty refers to the range of possible values that a measured quantity can have due to errors and limitations in the measurement process. It is also known as measurement error or experimental error.

## What are the sources of experimental uncertainty?

The sources of experimental uncertainty can include human error, instrumental error, environmental factors, and limitations of the measurement equipment. Human error can include mistakes in reading instruments or recording data, while instrumental error can be caused by imprecise or faulty equipment. Environmental factors, such as temperature or humidity, can also affect the accuracy of measurements.

## How is experimental uncertainty calculated?

Experimental uncertainty is typically calculated by finding the standard deviation of a series of measurements. This is done by taking the square root of the average of the squared differences between each measurement and the mean (using n − 1 in the denominator for a sample rather than n). The result is a measure of the spread of data points around the mean, which represents the uncertainty of the measurement.
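A minimal sketch of this calculation, using the sample standard deviation with an n − 1 denominator (the measurement values below are hypothetical):

```python
import math

# Hypothetical repeated measurements of the same quantity
measurements = [9.79, 9.82, 9.81, 9.80, 9.83]
n = len(measurements)

mean = sum(measurements) / n

# Sample standard deviation: sqrt( sum (x_i - mean)^2 / (n - 1) )
std = math.sqrt(sum((m - mean) ** 2 for m in measurements) / (n - 1))

print(f"{mean:.3f} ± {std:.3f}")  # 9.810 ± 0.016
```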

## Why is experimental uncertainty important in scientific experiments?

Experimental uncertainty is important because it allows scientists to understand the limitations and reliability of their measurements. It also helps to determine the accuracy and precision of the data collected, which is crucial for drawing valid conclusions and making accurate predictions.

## How can experimental uncertainty be reduced?

Experimental uncertainty can be reduced by using more precise measurement equipment, taking multiple measurements and averaging the results, and being aware of and minimizing potential sources of error. Additionally, conducting multiple trials and using statistical analysis can help to reduce uncertainty and increase the reliability of the data.