Error propagation of exponentials

TheCanadian
I am just wondering why there is a discrepancy between two different methods of error propagation. For example, if ## Q = (a)(b)(c) ##, then the relative error in Q is the square root of the sum of the squares of the relative errors of the factors, i.e. ## \frac{\Delta Q}{Q} = \sqrt{\left(\frac{\Delta a}{a}\right)^2 + \left(\frac{\Delta b}{b}\right)^2 + \left(\frac{\Delta c}{c}\right)^2} ##, correct? But what if ## Q = (a)(a)(a) ##? Why isn't the relative error in Q once again the square root of the sum of the squares (which in this case would be three identical terms, giving ## \sqrt{3}\,\frac{\Delta a}{a} ##)? I understand the derivation showing that the relative error in ## Q = a^3 ## is ## 3\,\frac{\Delta a}{a} ##, but I don't quite understand why the earlier rule for basic multiplication and division no longer applies. What is the reason for the discrepancy between the two methods of error propagation? Can't exponentiation (with positive integer exponents) be considered just an extension of multiplication?
 
Notice that in ## Q = (a)(b)(c) ## the factors a, b and c are assumed to be independent, in the sense that the error in one of them is independent of the error in another. The quadrature rule relies on that independence: independent errors partially cancel, which is why they add in quadrature rather than linearly. But if a = b = c, the "three" factors are not independent — their errors are perfectly correlated, always pushing Q in the same direction, so the relative errors add linearly to give ## 3\,\frac{\Delta a}{a} ##.
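A quick Monte Carlo check makes this concrete. This is just an illustrative sketch (not from the thread): nominal values of 1.0 and a 1% relative uncertainty on each factor are assumed for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
rel = 0.01  # assumed 1% relative uncertainty on each factor

# Independent factors: their relative errors add in quadrature
a = rng.normal(1.0, rel, n)
b = rng.normal(1.0, rel, n)
c = rng.normal(1.0, rel, n)
q_indep = a * b * c
print(np.std(q_indep) / np.mean(q_indep))  # close to sqrt(3)*rel ~ 0.0173

# The same variable cubed: the three "factors" are perfectly correlated
q_cube = a**3
print(np.std(q_cube) / np.mean(q_cube))    # close to 3*rel = 0.03
```

The simulated spread of the independent product matches the quadrature rule ## \sqrt{3}\,\frac{\Delta a}{a} ##, while the cube shows the larger linear result ## 3\,\frac{\Delta a}{a} ##.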
 
One suggestion is to consider a sum like ## S = A + B ##, where A and B are two variables that can each take the values +1 and -1. If A and B are uncorrelated, S has a standard deviation of ## \sqrt{2} ##; if A = B, the two contributions always reinforce each other and the standard deviation doubles to 2, so there is more spread in the S distribution. In computing experimental uncertainties, the "delta" is often just an estimate, and it can be difficult to account for correlations among variables that are usually treated as having random, independent uncertainties.
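This two-value example can be simulated directly as well. A minimal sketch (again with numpy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# A and B each take +1 or -1 with equal probability
A = rng.choice([-1, 1], size=n)
B = rng.choice([-1, 1], size=n)  # drawn independently of A

print(np.std(A + B))  # uncorrelated: close to sqrt(2) ~ 1.414
print(np.std(A + A))  # fully correlated (B = A): essentially 2
```

The uncorrelated sum partially cancels (half the time A and B have opposite signs), while the correlated sum never does — the same mechanism that separates ## \sqrt{3}\,\frac{\Delta a}{a} ## from ## 3\,\frac{\Delta a}{a} ## above.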
 
