Error Propagation in Measurements

AI Thread Summary
The discussion centers on error propagation in measurements, particularly when calculating the area of a rectangle defined by dimensions x and y with symmetric errors ε_x and ε_y. The participants explore the implications of including a higher-order term in the error expansion, questioning whether it introduces a negative bias due to the pairing of signs. They clarify that the standard approach to error is to use relative variations, which can be combined through various methods. Three main strategies for estimating errors are discussed: evaluating extreme values, summing small relative errors, and adding relative errors in quadrature for greater accuracy. The conversation emphasizes the importance of understanding these methods for accurate measurement analysis.
erobz
I was imagining trying to construct a rectangle of area ##A = xy##

If we give a symmetric error to each dimension ##\epsilon_x, \epsilon_y##

$$ A + \Delta A = ( x \pm \epsilon_x )( y \pm \epsilon_y )$$

Expanding the RHS and dividing through by ##A##

$$ \frac{\Delta A}{A} = \pm \frac{\epsilon_x}{x} \pm \frac{\epsilon_y}{y} + (\pm)(\pm) \frac{\epsilon_x \epsilon_y}{xy}$$

The first two terms are the symmetric error, but if we don't neglect the third, higher-order term, should it have a negative bias, since ##\frac{2}{3}## of the sign (##\pm##) pairings result in a negative third term and ##\frac{1}{3}## of the pairings result in a positive third term?

My terminology is probably improper.
 
Never mind! I think I did that wrong... There are only 4 pairings; for some reason I had ##C(4,2)## in my head. Of the four pairings, two give a negative third term and two give a positive one, so there is no bias.
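The four sign pairings can be checked directly. A minimal sketch, using arbitrary made-up values for the dimensions and uncertainties (not taken from the thread), confirms the higher-order term is negative in exactly two of the four cases:

```python
from itertools import product

# Arbitrary illustrative values: x = 10 +/- 0.1, y = 20 +/- 0.2
x, ex = 10.0, 0.1
y, ey = 20.0, 0.2

# Enumerate the four (+/-, +/-) sign pairings in (x +/- ex)(y +/- ey)
# and compute the third term (+/-)(+/-) ex*ey / (x*y) for each.
third_terms = [(sx * ex * sy * ey) / (x * y)
               for sx, sy in product([+1, -1], repeat=2)]

negatives = sum(t < 0 for t in third_terms)
positives = sum(t > 0 for t in third_terms)
print(negatives, positives)  # 2 and 2: no sign bias in the higher-order term
```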
 
The standard measure of the error is the relative variance (the variance, i.e. the square of the standard deviation, divided by the square of the measurement). If you have several independent error sources, add the relative variances.
 
Three options to consider:
1) Simply evaluate your function using measurements that result in the highest and lowest possible values: in this case, calculate the area given by the maximum probable measurements and then by the minimum probable measurements. The difference between these values will be roughly symmetric about the best estimate provided the uncertainties are relatively small, so you can get away with finding just one of the two extremes.
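A minimal sketch of method 1, using assumed illustrative numbers (x = 10 ± 0.1, y = 20 ± 0.2, not from the thread):

```python
# Assumed illustrative measurements: x = 10 +/- 0.1, y = 20 +/- 0.2
x, ex = 10.0, 0.1
y, ey = 20.0, 0.2

A = x * y
A_max = (x + ex) * (y + ey)   # both dimensions at their largest
A_min = (x - ex) * (y - ey)   # both dimensions at their smallest

# The two deviations are roughly symmetric about the best estimate
print(A_max - A, A - A_min)   # ~4.02 vs ~3.98
```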

2) What @Svein said. If the relative errors are small you can add them together to find the relative error of the product, and then easily find the absolute error. It will match method 1 when rounded sensibly using standard significant-digit 'rules.'
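Method 2 sketched with the same assumed numbers (x = 10 ± 0.1, y = 20 ± 0.2):

```python
# Assumed illustrative measurements: x = 10 +/- 0.1, y = 20 +/- 0.2
x, ex = 10.0, 0.1
y, ey = 20.0, 0.2

# Sum of small relative errors gives the relative error of the product
rel_err = ex / x + ey / y      # ~0.01 + ~0.01 = ~0.02
abs_err = rel_err * (x * y)    # absolute error in the area, ~4.0
print(rel_err, abs_err)
```

Rounded to one significant digit, the absolute error (~4.0) agrees with the spread found by evaluating the extremes directly.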

3) Add the relative errors in quadrature (square them, add, then square root). This is likely a more accurate estimate of the uncertainty in the product, provided the uncertainties are uncorrelated. This method comes from the calculus of probabilities. See Taylor's An Introduction to Error Analysis for an excellent introductory text on this.
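Method 3 with the same assumed numbers; the quadrature sum is smaller than the straight sum from method 2:

```python
import math

# Assumed illustrative measurements: x = 10 +/- 0.1, y = 20 +/- 0.2
x, ex = 10.0, 0.1
y, ey = 20.0, 0.2

# Quadrature: sqrt((ex/x)**2 + (ey/y)**2)
rel_err = math.hypot(ex / x, ey / y)  # ~0.0141
abs_err = rel_err * (x * y)           # ~2.83, vs ~4.0 from the straight sum
print(rel_err, abs_err)
```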
 