Error Propagation in Measurements

erobz
I was imagining trying to construct a rectangle of area ##A = xy##

If we give a symmetric error to each dimension ##\epsilon_x, \epsilon_y##

$$ A + \Delta A = ( x \pm \epsilon_x )( y \pm \epsilon_y )$$

Expanding the RHS and dividing through by ##A##

$$ \frac{\Delta A}{A} = \pm \frac{\epsilon_x}{x} \pm \frac{\epsilon_y}{y} + (\pm)(\pm)\,\frac{\epsilon_x \epsilon_y}{xy}$$

The first two terms are the symmetric error, but if the third, higher-order term is not neglected, should it have a negative bias, since ##\frac{2}{3}## of the sign (##\pm##) pairings result in a negative third term and ##\frac{1}{3}## of the pairings result in a positive third term?

My terminology is probably improper.
 
Never mind! I think I did that wrong... There are only 4 sign pairings. For some reason I had ##C(4,2) = 6## in my head.
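A quick enumeration (with hypothetical values for ##x##, ##y##, ##\epsilon_x##, ##\epsilon_y##, chosen only for illustration) confirms this: of the four sign pairings, two give a positive cross term and two give a negative one, so the higher-order term has no sign bias.

```python
# Enumerate the four sign pairings of the cross term (eps_x/x)(eps_y/y).
# The measurement values and errors below are hypothetical.
from itertools import product

x, y = 2.0, 3.0        # hypothetical measurements
ex, ey = 0.1, 0.2      # hypothetical symmetric errors

cross_terms = [(sx * ex / x) * (sy * ey / y)
               for sx, sy in product((+1, -1), repeat=2)]

print(cross_terms)
print("positive:", sum(t > 0 for t in cross_terms),
      "negative:", sum(t < 0 for t in cross_terms))
# Prints 2 positive and 2 negative cross terms -- no bias.
```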
 
The standard term for the error is the relative variation (the square of the standard deviation divided by the measurement). If you have several possible error sources, add the relative variations.
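To spell out what adding the relative variations means for a product (a standard first-order result, sketched here under the assumption that the errors in ##x## and ##y## are independent): for ##A = xy##,

$$ \left( \frac{\sigma_A}{A} \right)^2 \approx \left( \frac{\sigma_x}{x} \right)^2 + \left( \frac{\sigma_y}{y} \right)^2 $$

which is also where option 3 in the next reply comes from.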
 
Three options to consider:
1) Simply evaluate your function using measurements that result in the highest and lowest possible values; in this case, calculate the area given by the maximum probable measurements and the area given by the minimum probable measurements. These high and low values will be roughly symmetric about the best estimate provided the uncertainties are relatively small, so you can get away with finding just the highest or the lowest and taking its difference from the best estimate as the uncertainty.

2) What @Svein said. If the relative errors are small, you can add them together to find the relative error of the product and then easily find the absolute error. It will match method 1 when rounded sensibly using standard significant-digit 'rules.'

3) Add the relative errors in quadrature (square them, add, then take the square root). This is likely a more accurate estimate of the uncertainty in the product, provided that the uncertainties are independent (uncorrelated). This method comes from the calculus of probabilities; see Taylor's An Introduction to Error Analysis for an excellent introductory treatment. A small numerical sketch comparing the three options follows.
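As a rough comparison of the three options, here is a minimal sketch using hypothetical measurements ##x = 2.00 \pm 0.05## and ##y = 3.00 \pm 0.10## (numbers made up purely for illustration):

```python
# Compare three ways of estimating the uncertainty in A = x*y.
# The measurements and errors below are hypothetical.
x, ex = 2.00, 0.05
y, ey = 3.00, 0.10

A = x * y

# Option 1: evaluate at the extreme measurement combinations,
# take half the spread as the uncertainty.
A_hi = (x + ex) * (y + ey)
A_lo = (x - ex) * (y - ey)
dA_minmax = (A_hi - A_lo) / 2

# Option 2: add relative errors linearly, then convert to an absolute error.
dA_linear = A * (ex / x + ey / y)

# Option 3: add relative errors in quadrature (independent errors assumed).
dA_quad = A * ((ex / x) ** 2 + (ey / y) ** 2) ** 0.5

print(f"A          = {A:.3f}")
print(f"min-max    : +/- {dA_minmax:.3f}")
print(f"linear     : +/- {dA_linear:.3f}")
print(f"quadrature : +/- {dA_quad:.3f}")
```

With these numbers, options 1 and 2 both give about ##\pm 0.35##, while the quadrature of option 3 gives the smaller ##\pm 0.25##, as expected when the errors are independent.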
 