Experimental Uncertainty

  • #1
Schfra
We have been using the equation attached as an image to calculate experimental uncertainty in my class. Can somebody explain exactly how it works?

Let’s say we have a value y equal to 1/x, where x is some measured quantity with some uncertainty, and suppose x is measured to be 5.

We can say that y = 1/5 +/- some error value determined by the equation. I don’t quite understand how this works. If the uncertainty in x was 1, the greatest value of y would be 1/4, while the smallest would be 1/6. 1/4 and 1/6 are not equally far from 1/5, so how can the value of y be expressed as 1/5 +/- any number?
 

Attachments

  • D2F9DBFC-3C22-482E-A5CF-4A6D61FC970B.jpeg
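The asymmetry in the question is easy to check numerically. Below is a minimal Python sketch (an illustration, not part of the original thread) comparing the exact interval endpoints of y = 1/x with the single symmetric sigma from the linearized propagation formula, using the numbers above (x = 5, uncertainty 1):

```python
# Numbers from the question above: x = 5 with uncertainty 1, y = 1/x.
x, sigma_x = 5.0, 1.0

y = 1 / x                 # 0.2
y_hi = 1 / (x - sigma_x)  # 1/4 = 0.25, i.e. +0.05 above y
y_lo = 1 / (x + sigma_x)  # 1/6 ~ 0.1667, i.e. about 0.033 below y

# Linearized propagation: sigma_y = |dy/dx| * sigma_x = sigma_x / x**2
sigma_y = sigma_x / x**2  # 0.04, between the two one-sided deviations

print(y_hi - y, y - y_lo, sigma_y)
```

The one-sided deviations (+0.05 and about −0.033) are indeed unequal; the propagated sigma of 0.04 is a single number that approximates both, which is the point the replies below address: it is a variance-based summary, not an exact interval.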

Answers and Replies

  • #2
We can say that y = 1/5 +/- some error value determined by the equation.
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.
 
  • #3
Schfra
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.
Doesn’t the equation give the +/- value that can be added on to the end of the value of y? And if that value is some constant doesn’t that mean that the distribution is symmetric?

If not, what does the variance in y mean?
 
  • #4
Doesn’t the equation give the +/- value that can be added on to the end of the value of y?
No, it gives the variance.

If not, what does the variance in y mean?
The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
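To make the definition concrete, here is a small Python sketch (an illustration, not from the thread) that estimates E[y] and E[(y−E[y])²] from samples of a deliberately skewed distribution; the variance is perfectly well defined even though the distribution is not symmetric:

```python
# Estimate mean and variance, Var[y] = E[(y - E[y])^2], from samples of a
# skewed (asymmetric) distribution: the exponential, which has mean 1 and variance 1.
import random

rng = random.Random(0)
samples = [rng.expovariate(1.0) for _ in range(100_000)]

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

print(mean, var)  # both come out close to 1
```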
 
  • #5
Schfra
No, it gives the variance.

The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
Why, then, is the value given by the above equation reported as the +/- value in the attached image? Doesn’t this imply a symmetry? The value can be anywhere between the value + the uncertainty and the value - the uncertainty.
 

Attachments

  • 34F4E822-338F-4246-889D-F31A8A002D8B.jpeg
  • #6
Khashishi
Science Advisor
We have been using the equation attached as an image to calculate experimental uncertainty in my class. Can somebody explain exactly how it works?

Sure. This equation uses a linearized error propagation model. It is an approximation, like small angle approximations in trigonometry. It is only valid for "small" errors.

Consider a distribution roughly centered around f(x,y,z).

If you have a function f(x,y,z), and the function is smooth, then when you zoom into a small region it looks approximately linear. So, approximately, you can say
##f(x+\delta,y,z) \approx f(x,y,z)+\delta \frac{\partial f}{\partial x}##
If you think about it, this is a first order Taylor expansion around (x,y,z).
If f is some nonlinear function, it's not going to be exactly correct. You could base your error propagation around a second order Taylor expansion if you wanted to be more accurate, or even integrate the full distribution functions if you want to be exactly correct.

But usually, when we are doing experimental error analysis, we don't care about exactly correct error distributions, since it's like calculating an error on an error.

Edit: adding a little more detail.
If X, Y, and Z are distributions roughly centered on x, y, and z, then f(X,Y,Z) will be roughly centered on f(x,y,z). You can write X as : ##X = x + \delta##, where ##\delta## is a distribution of small values with zero expected value. Analogously for Y and Z. Since we used a linear approximation, the expected value of f(X,Y,Z) is f(x,y,z). So it is simple to calculate the variance.
##Var[f(X,Y,Z)] = E[f(X,Y,Z)^2] - f(x,y,z)^2##
##E[f(X,Y,Z)^2] \approx f(x,y,z)^2 + E[\delta_x^2]\left(\frac{\partial f}{\partial x}\right)^2 + E[\delta_y^2]\left(\frac{\partial f}{\partial y}\right)^2 + E[\delta_z^2]\left(\frac{\partial f}{\partial z}\right)^2 + ## cross terms,
where ##\delta_x, \delta_y, \delta_z## are the zero-mean deviations in X, Y, and Z, so ##E[\delta_x^2] = Var[X]## and so on.
In many cases we can assume that X, Y, and Z, are independently distributed, so we just throw away the cross terms involving covariances.
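The quality of the linear approximation can be checked by simulation. The following Python sketch (my addition, not part of the post) draws x from a normal distribution and compares the sampled standard deviation of 1/x against the linearized formula, for a small and a large relative error:

```python
# Compare linearized error propagation with a Monte Carlo estimate for f(x) = 1/x.
import math
import random

def mc_sigma(x0, sigma_x, n=200_000, seed=1):
    """Sample x ~ Normal(x0, sigma_x) and return the sample std dev of 1/x."""
    rng = random.Random(seed)
    ys = [1 / rng.gauss(x0, sigma_x) for _ in range(n)]
    mean = sum(ys) / n
    return math.sqrt(sum((y - mean) ** 2 for y in ys) / n)

x0 = 5.0
for sigma_x in (0.1, 1.0):
    sigma_lin = sigma_x / x0**2  # |d(1/x)/dx| * sigma_x, the linearized prediction
    print(sigma_x, sigma_lin, mc_sigma(x0, sigma_x))
```

With sigma_x = 0.1 (a 2% relative error) the sampled and linearized sigmas agree closely; with sigma_x = 1 (the question's 20% relative error) the nonlinearity of 1/x makes the sampled spread noticeably larger and the distribution visibly skewed, which is exactly the regime where the "small errors" caveat matters.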
 
  • #7
Why are they then reporting the value given from the above equation as the +/- value in the attached image?
Look earlier in the text. It probably describes the usage of the ##\pm## symbol as “mean ##\pm## st. dev.”

Doesn’t this imply a symmetry?
Not necessarily. It only implies what the text says it implies.

The value can be anywhere between the value + the uncertainty and the value - the uncertainty.
For a normally distributed variable only about 68% of the values will be within plus or minus 1 standard deviation.
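The 68% figure is easy to verify numerically. This Python sketch (an illustration, using made-up numbers that match the thread's example) counts how many normal draws land within ±1 standard deviation of the mean:

```python
# Fraction of normal draws within one standard deviation of the mean.
import random

rng = random.Random(42)
mu, sigma, n = 0.2, 0.04, 100_000  # e.g. y = 1/5 with a propagated sigma of 0.04

inside = sum(abs(rng.gauss(mu, sigma) - mu) < sigma for _ in range(n))
frac = inside / n
print(frac)  # close to 0.6827 for a normal distribution
```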
 