Error Propagation - multiplication vs powers

AI Thread Summary
The discussion explores the nuances of error propagation, focusing on the difference between squaring a variable and multiplying it by itself. Applying the standard propagation formula to f = x^2 gives 2|x|\sigma_x, whereas applying it to f = x \cdot x as if the two factors were independent gives \sqrt{2}\,|x|\sigma_x, smaller by a factor of \sqrt{2}. The discrepancy arises because the formula assumes independent variables, which does not hold when the same variable appears in both factors. The conversation emphasizes the importance of recognizing variable dependence in error calculations.
Caspian
Ok, this isn't a homework question -- more out of curiosity. But it seems so trivial that I hate to post it under "General Physics"

We all know the standard formula for error propagation:
\sigma_f = \sqrt{\dfrac{\partial f}{\partial x}^2 \sigma_x^2 + \dfrac{\partial f}{\partial y}^2 \sigma_y^2}

Now, let f = x^2. We get \sigma_f = \sqrt{(2x)^2 \sigma_x^2}

Now, let f = x \cdot x. We get \sigma_f = \sqrt{x^2 \sigma_x^2 + x^2 \sigma_x^2} = \sqrt{2 x^2 \sigma_x^2}.

This says that the error of x^2 is \sqrt{2} times larger than the error of x \cdot x!
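A quick numerical check makes the discrepancy concrete. Below is a minimal Monte Carlo sketch (not from the original thread; it assumes Gaussian noise on x, with illustrative values x0 = 3.0 and sigma_x = 0.1):

import numpy as np

rng = np.random.default_rng(0)
x0, sigma_x = 3.0, 0.1                     # illustrative central value and error
x = rng.normal(x0, sigma_x, 1_000_000)     # samples of x with Gaussian noise

f = x * x                                  # the quantity whose spread we want
print(f.std())                             # empirical spread, about 0.600
print(2 * abs(x0) * sigma_x)               # f = x^2 propagation: 0.600
print(np.sqrt(2) * abs(x0) * sigma_x)      # "independent x and y" result: about 0.424

The empirical spread agrees with the f = x^2 (chain-rule) answer, not with the value obtained by treating the two factors as independent.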

I'm baffled at this... does anyone know why this is true? I've never seen a derivation of the standard error propagation formula... does the derivation assume that the two variables are not equal? (btw, if someone knows where to find a derivation of the formula, I would be very happy to see it)

Thanks!
 
\sigma_f = \sqrt{(\dfrac{\partial f}{\partial x})^2 \sigma_x^2 + (\dfrac{\partial f}{\partial y})^2 \sigma_y^2}

But f(x) = x^2 = x \cdot x, and the two factors are the same variable.

Propagation of errors applies to errors of independent variables, x_1, x_2, \ldots, x_n or x, y, z, \ldots

By the product rule, f'(x) = \dfrac{d}{dx}(x \cdot x) = x + x = 2x
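Equivalently, treating f as a function of the single variable x, the propagation formula has just one term (a standard single-variable reduction, stated here for completeness):

\sigma_f = \left| \dfrac{df}{dx} \right| \sigma_x = 2|x|\, \sigma_x

which is the f = x^2 result.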
 
Yeah, I messed up in my LaTeX, and forgot to put parenthesis, but I did square the function after taking the partial derivative. So, that's not where I've gone wrong here. There's got to be something else going on here... does the derivation behind the formula for error propagation assume that the two values are not equal?
 
If one had f(x, y, z), with x, y, z being independent, then the propagation of error would be

\sigma_f = \sqrt{(\dfrac{\partial f}{\partial x})^2 \sigma_x^2 + (\dfrac{\partial f}{\partial y})^2 \sigma_y^2 + (\dfrac{\partial f}{\partial z})^2 \sigma_z^2}
 
Sorry, I left out intermediate steps in my original post... let me provide more detail.

let f(x) = x^2. So, \dfrac{\partial f}{\partial x} = 2x

Thus, \sigma_f = \sqrt{(2x)^2 \sigma_x^2}.

..

Now, let g(x,y) = x \cdot y. So, \dfrac{\partial g}{\partial x} = y and \dfrac{\partial g}{\partial y} = x.

Thus, \sigma_g = \sqrt{x^2 \sigma_y^2 + y^2 \sigma_x^2}

..

Now, let x = y. This means that f = g. But \sigma_g = \sqrt{x^2 \sigma_x^2 + x^2 \sigma_x^2} = \sqrt{2 x^2 \sigma_x^2}.

So, f = g, but \sigma_f = \sqrt{(2x)^2 \sigma_x^2} and \sigma_g = \sqrt{2 x^2 \sigma_x^2} (the two differ by a factor of \sqrt{2}).

Why is this?
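One way to see what each formula actually describes is to compare the same variable multiplied by itself against a product of two genuinely independent variables. A minimal sketch (illustrative values, Gaussian noise assumed):

import numpy as np

rng = np.random.default_rng(1)
x0, sigma = 3.0, 0.1
N = 1_000_000

x = rng.normal(x0, sigma, N)    # samples of x
y = rng.normal(x0, sigma, N)    # a second, independent variable with the same mean and error

print((x * x).std())            # same variable twice:  about 0.600 = 2*|x0|*sigma
print((x * y).std())            # independent factors:  about 0.424 = sqrt(2)*|x0|*sigma

So \sigma_g = \sqrt{2}\,|x|\,\sigma_x really is the error of a product of two independent variables that happen to have equal central values and equal errors; setting y = x makes the factors fully correlated, and the spread grows to 2|x|\sigma_x.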
 
In one case there is a single error, \sigma_x; in the other case there are two independent errors, \sigma_x and \sigma_y, even though the functional dependence on x and on y is the same.

See also - http://sosnick.uchicago.edu/propagation_errors.pdf

I think there is a better discussion of propagation of error, but I just have to find it.
 
Hi, maybe I am too late, but I just saw this. In the second case you are assuming that the error of x is independent of the error of y. This is not true when you set y = x, and that is where the mistake lies. You are assuming that whatever happens to y does not happen to x, which is not true for x \cdot x.
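To make that precise: the first-order propagation formula with the covariance term kept (a standard result, quoted here for reference) is

\sigma_f^2 = \left(\dfrac{\partial f}{\partial x}\right)^2 \sigma_x^2 + \left(\dfrac{\partial f}{\partial y}\right)^2 \sigma_y^2 + 2\, \dfrac{\partial f}{\partial x}\, \dfrac{\partial f}{\partial y}\, \mathrm{cov}(x, y)

The usual two-term formula is the special case \mathrm{cov}(x, y) = 0, i.e. independent variables. For g = x \cdot y with y = x, \mathrm{cov}(x, y) = \sigma_x^2, and the extra term gives \sigma_g^2 = x^2 \sigma_x^2 + x^2 \sigma_x^2 + 2 x^2 \sigma_x^2 = (2x)^2 \sigma_x^2, exactly the f = x^2 result.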
 
Yes, you must be very careful with dependent variables.

If f(A, r) = A/r, then \sigma_f = \sqrt{\left(\dfrac{1}{r}\right)^2 \sigma_A^2 + \left(\dfrac{A}{r^2}\right)^2 \sigma_r^2}.

If A = \pi r^2, then \sigma_A = 2 \pi r \sigma_r.

This would lead to \sigma_f = \sqrt{(2 \pi \sigma_r)^2 + (\pi \sigma_r)^2} = \sqrt{5}\, \pi \sigma_r.

But if A = \pi r^2, f must be simplified before propagating errors.

f = A/r = \pi r, so \sigma_f = \pi \sigma_r.
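The covariance term quoted above also reconciles the two answers for f = A/r (a quick check, using \mathrm{cov}(A, r) \approx 2 \pi r \sigma_r^2 to first order when A = \pi r^2):

\sigma_f^2 = \left(\dfrac{1}{r}\right)^2 \sigma_A^2 + \left(\dfrac{A}{r^2}\right)^2 \sigma_r^2 - 2\, \dfrac{1}{r}\, \dfrac{A}{r^2}\, \mathrm{cov}(A, r) = 4\pi^2 \sigma_r^2 + \pi^2 \sigma_r^2 - 4\pi^2 \sigma_r^2 = \pi^2 \sigma_r^2,

so \sigma_f = \pi \sigma_r, in agreement with simplifying f = \pi r first.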
 
