How does the equation for experimental uncertainty work?


Discussion Overview

The discussion revolves around the understanding of an equation used for calculating experimental uncertainty, particularly in the context of error propagation. Participants explore the implications of this equation, its assumptions, and how it relates to variance and distribution symmetry.

Discussion Character

  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant seeks clarification on how the equation for experimental uncertainty works, particularly when applied to a function y = 1/x.
  • Another participant asserts that the equation provides the variance of y as a function of x and its variance, without implying symmetry or an expected value.
  • There is a challenge regarding whether the equation gives a symmetric distribution, with some participants questioning the interpretation of the variance in relation to uncertainty.
  • A later reply introduces the concept of linearized error propagation and mentions that the equation is an approximation valid for small errors, referencing a first-order Taylor expansion.
  • Participants discuss the implications of using the ± symbol in reporting uncertainty, with some arguing that it does not necessarily imply symmetry in the distribution of values.

Areas of Agreement / Disagreement

Participants express differing views on the interpretation of the equation, particularly regarding the nature of the distribution and the implications of variance. There is no consensus on whether the reported uncertainty implies symmetry.

Contextual Notes

Limitations include assumptions about the smoothness of functions and the validity of linear approximations for small errors. The discussion also touches on the independence of distributions and the neglect of cross terms in variance calculations.

Schfra
We have been using the equation attached as an image to calculate experimental uncertainty in my class. Can somebody explain exactly how it works?

Let’s say we have a value y which is equal to 1/x, where x is some measured quantity with some uncertainty, and let’s say that x is measured to be 5.

We can say that y = 1/5 +/- some error value determined by the equation. I don’t quite understand how this works. If the uncertainty in x was 1, the greatest value of y would be 1/4, while the smallest would be 1/6. 1/4 and 1/6 are not equally far from 1/5, so how can the value of y be expressed as 1/5 +/- any number?
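The asymmetry described here is easy to check numerically. A quick sketch (the numbers x = 5 and uncertainty 1 are taken from the question itself):

```python
# y = 1/x with x measured as 5 +/- 1.
x, dx = 5.0, 1.0
y = 1.0 / x                # nominal value: 1/5 = 0.2
y_hi = 1.0 / (x - dx)      # largest y:  1/4 = 0.25
y_lo = 1.0 / (x + dx)      # smallest y: 1/6 ~= 0.1667
up = y_hi - y              # distance above the nominal value
down = y - y_lo            # distance below the nominal value
print(up, down)            # 0.05 vs ~0.0333 -- the interval is not symmetric
```

So the endpoints really are unequal distances from 1/5, which is exactly what prompts the question.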
 

Attachments

  • [image: the error-propagation equation used in class]
Schfra said:
We can say that y = 1/5 +/- some error value determined by the equation.
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.
 
Dale said:
That is not what that equation says. It simply gives the variance of y as a function of x and the variance of x. There is no implication whatsoever that the resulting distribution is symmetric nor even what the expected value is.
Doesn’t the equation give the +/- value that can be added on to the end of the value of y? And if that value is some constant doesn’t that mean that the distribution is symmetric?

If not, what does the variance in y mean?
 
Schfra said:
Doesn’t the equation give the +/- value that can be added on to the end of the value of y?
No, it gives the variance.

Schfra said:
If not, what does the variance in y mean?
The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
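The point that variance says nothing about symmetry can be illustrated with a clearly skewed distribution. A small sketch using an exponential sample (the sample size and seed are illustrative; an exponential with rate 1 has mean 1, variance 1, and skewness 2):

```python
import random

random.seed(0)
# An exponential distribution is strongly asymmetric, yet its
# variance E[(y - E[y])^2] is perfectly well defined.
sample = [random.expovariate(1.0) for _ in range(100_000)]
n = len(sample)
mean = sum(sample) / n
var = sum((s - mean) ** 2 for s in sample) / n          # E[(y - E[y])^2]
skew = sum((s - mean) ** 3 for s in sample) / n / var ** 1.5
print(round(mean, 2), round(var, 2), round(skew, 2))
```

The variance comes out near 1 even though the skewness is far from 0, so reporting a variance commits you to nothing about symmetry.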
 
Dale said:
No, it gives the variance.

The variance of y is defined as E[(y-E[y])^2]. It has nothing to do with symmetry.

Skewness is a measure of the asymmetry of a statistical distribution:

https://en.m.wikipedia.org/wiki/Skewness
Why are they then reporting the value given from the above equation as the +/- value in the attached image? Doesn’t this imply a symmetry? The value can be anywhere between the value + the uncertainty and the value - the uncertainty.
 

Attachments

  • [image: the reported value with its ± uncertainty]
Schfra said:
We have been using the equation attached as in image to calculate experiment uncertainty in my class, can somebody explain exactly how this works?

Sure. This equation uses a linearized error propagation model. It is an approximation, like small angle approximations in trigonometry. It is only valid for "small" errors.

Consider a distribution roughly centered around f(x,y,z).

If you have a function f(x,y,z), and the function is smooth, then when you zoom into a small enough region it looks linear. So, approximately, you can say
##f(x+\delta,y,z) \approx f(x,y,z)+\delta \frac{\partial f}{\partial x}##
If you think about it, this is a first order Taylor expansion around (x,y,z).
If f is some nonlinear function, it's not going to be exactly correct. You could base your error propagation around a second order Taylor expansion if you wanted to be more accurate, or even integrate the full distribution functions if you want to be exactly correct.

But usually, when we are doing experimental error analysis, we don't care about exactly correct error distributions, since it's like calculating an error on an error.

Edit: adding a little more detail.
If X, Y, and Z are distributions roughly centered on x, y, and z, then f(X,Y,Z) will be roughly centered on f(x,y,z). You can write X as : ##X = x + \delta##, where ##\delta## is a distribution of small values with zero expected value. Analogously for Y and Z. Since we used a linear approximation, the expected value of f(X,Y,Z) is f(x,y,z). So it is simple to calculate the variance.
##Var[f(X,Y,Z)] = E[f(X,Y,Z)^2] - f(x,y,z)^2##
##E[f(X,Y,Z)^2] \approx f(x,y,z)^2 + \sigma_x^2 \left(\frac{\partial f}{\partial x}\right)^2 + \sigma_y^2 \left(\frac{\partial f}{\partial y}\right)^2 + \sigma_z^2 \left(\frac{\partial f}{\partial z}\right)^2 + ## cross terms, where ##\sigma_x^2 = E[\delta^2]## is the variance of X, and analogously for Y and Z.
In many cases we can assume that X, Y, and Z, are independently distributed, so we just throw away the cross terms involving covariances.
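The linearized model above can be checked against a brute-force simulation. For the single-variable case y = 1/x, the formula predicts ##\sigma_y \approx |dy/dx|\,\sigma_x = \sigma_x/x^2##. A sketch (the choice of x = 5, the small σ_x, the Gaussian input, and the sample size are all illustrative assumptions):

```python
import random
import statistics

random.seed(1)
x0, sigma_x = 5.0, 0.1          # small relative error, where the linear model holds
# Linearized prediction: sigma_y ~= |d(1/x)/dx| * sigma_x = sigma_x / x0**2
pred = sigma_x / x0 ** 2
# Brute force: push a whole distribution of x values through f(x) = 1/x.
ys = [1.0 / random.gauss(x0, sigma_x) for _ in range(200_000)]
sd = statistics.stdev(ys)
print(pred, sd)                  # the two should agree closely
```

Repeating this with a large σ_x (say 1.0 on x = 5) makes the agreement noticeably worse, which is the "only valid for small errors" caveat in action.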
 
Schfra said:
Why are they then reporting the value given from the above equation as the +/- value in the attached image?
Look earlier in the text. It probably describes the usage of the ##\pm## symbol as “mean ##\pm## st. dev.”

Schfra said:
Doesn’t this imply a symmetry?
Not necessarily. It only implies what the text says it implies.

Schfra said:
The value can be anywhere between the value + the uncertainty and the value - the uncertainty.
For a normally distributed variable only about 68% of the values will be within plus or minus 1 standard deviation.
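The 68% figure is itself easy to verify by sampling. A quick sketch (seed and sample size are illustrative):

```python
import random

random.seed(2)
# Fraction of standard-normal draws falling within +/- 1 standard deviation.
vals = [random.gauss(0.0, 1.0) for _ in range(100_000)]
frac = sum(abs(v) <= 1.0 for v in vals) / len(vals)
print(frac)   # close to the theoretical 0.6827
```

So "value ± 1 standard deviation" is a statement about spread, not a guarantee that every measurement lands inside that interval.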
 
