Error Propagation - multiplication vs powers

Homework Help Overview

The discussion revolves around the topic of error propagation in the context of mathematical functions, specifically focusing on the differences in error calculations when dealing with multiplication versus powers. Participants explore the implications of using the standard error propagation formula in various scenarios.

Discussion Character

  • Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • Participants analyze the error propagation formula and its application to functions like \(f = x^2\) and \(g = x \cdot x\). Questions arise regarding the assumptions made when the same variable is used in different contexts, particularly concerning independence of errors.

Discussion Status

There is an ongoing exploration of the differences in error propagation results when treating variables as independent versus dependent. Some participants have provided insights into the implications of these assumptions, while others are seeking clarification on the derivation and application of the error propagation formula.

Contextual Notes

Participants note that the derivation of the error propagation formula may assume independence of variables, which is questioned when the same variable is used in different forms. There is also mention of the need for careful consideration of dependent variables in error calculations.

Caspian
Ok, this isn't a homework question -- more out of curiosity. But it seems so trivial that I hate to post it under "General Physics"

We all know the standard formula for error propagation:
\sigma_f = \sqrt{\dfrac{\partial f}{\partial x}^2 \sigma_x^2 + \dfrac{\partial f}{\partial y}^2 \sigma_y^2}

Now, let f = x^2. We get \sigma_f = \sqrt{(2x)^2 \sigma_x^2}

Now, let f = x \cdot x. We get \sigma_f = \sqrt{x^2 \sigma_x^2 + x^2 \sigma_x^2} = \sqrt{2 x^2 \sigma_x^2}.

This says that the error of x^2 is \sqrt{2} times the error of x \cdot x!

I'm baffled at this... does anyone know why this is true? I've never seen a derivation of the standard error propagation formula... does the derivation assume that the two variables are not equal? (btw, If someone knows where to find the derivation to the formula, I would be very happy to see it)

Thanks!
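The discrepancy is easy to reproduce numerically. Here is a quick Monte Carlo sketch in Python/NumPy (the nominal value x_0 = 10 and \sigma_x = 0.1 are arbitrary illustrative choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
x0, sigma_x = 10.0, 0.1
N = 1_000_000

# Case 1: f = x^2, a single noisy variable sampled once and squared.
x = rng.normal(x0, sigma_x, N)
f = x * x                      # empirical std ~ 2 * x0 * sigma_x = 2.0

# Case 2: g = x * y, where y is an INDEPENDENT copy with the same error.
y = rng.normal(x0, sigma_x, N)
g = x * y                      # empirical std ~ sqrt(2) * x0 * sigma_x ~ 1.41

print(np.std(f), np.std(g))
```

The two spreads really do differ by a factor of \sqrt{2}: squaring one noisy number is not the same experiment as multiplying two independently noisy numbers.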
 
\sigma_f = \sqrt{(\dfrac{\partial f}{\partial x})^2 \sigma_x^2 + (\dfrac{\partial f}{\partial y})^2 \sigma_y^2}

But f(x) = x^2 = x \cdot x, and they are the same variable.

Propagation of errors applies to the errors of independent variables x_1, x_2, \ldots, x_n (or x, y, z, \ldots).

f'(x) = \dfrac{d}{dx}(x \cdot x) = x + x = 2x
 
Last edited:
Yeah, I messed up my LaTeX and forgot to put parentheses, but I did square the partial derivative of the function. So that's not where I've gone wrong here. There's got to be something else going on... does the derivation behind the formula for error propagation assume that the two variables are not equal?
 
If one had f(x, y, z), with x, y, z being independent, then the propagation of error would be

\sigma_f = \sqrt{(\dfrac{\partial f}{\partial x})^2 \sigma_x^2 + (\dfrac{\partial f}{\partial y})^2 \sigma_y^2 + (\dfrac{\partial f}{\partial z})^2 \sigma_z^2}
 
Sorry, I left out intermediate steps in my original post... let me provide more detail.

let f(x) = x^2. So, \dfrac{\partial f}{\partial x} = 2x

Thus, \sigma_f = \sqrt{(2x)^2 \sigma_x^2}.

..

Now, let g(x,y) = x \cdot y. So, \dfrac{\partial g}{\partial x} = y and \dfrac{\partial g}{\partial y} = x.

Thus, \sigma_g = \sqrt{x^2 \sigma_y^2 + y^2 \sigma_x^2}

..

Now, let x = y. This means that f = g. But \sigma_g = \sqrt{x^2 \sigma_x^2 + x^2 \sigma_x^2} = \sqrt{2 x^2 \sigma_x^2}.

So, f = g, but \sigma_f = \sqrt{(2x)^2 \sigma_x^2} and \sigma_g = \sqrt{2 x^2 \sigma_x^2} (the two differ by a factor of \sqrt{2}).

Why is this?
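The missing piece is the covariance term. The full first-order formula is \sigma_g^2 = (\partial g/\partial x)^2 \sigma_x^2 + (\partial g/\partial y)^2 \sigma_y^2 + 2 (\partial g/\partial x)(\partial g/\partial y)\,\mathrm{cov}(x,y); the version in the thread drops the last term, which is only valid for independent variables. Setting y = x makes \mathrm{cov}(x,y) = \sigma_x^2, and the factor of \sqrt{2} disappears. A small numeric check (illustrative values x_0 = 10, \sigma_x = 0.1):

```python
import math

x0, sigma_x = 10.0, 0.1          # illustrative nominal value and error

# Partials of g(x, y) = x * y, evaluated at x = y = x0.
dgdx, dgdy = x0, x0

# Independent-errors formula (covariance term dropped):
var_indep = dgdx**2 * sigma_x**2 + dgdy**2 * sigma_x**2

# Full first-order formula keeps 2*(dg/dx)*(dg/dy)*cov(x, y);
# with y = x, cov(x, y) = sigma_x**2.
var_full = var_indep + 2 * dgdx * dgdy * sigma_x**2

print(math.sqrt(var_indep))      # sqrt(2) * x0 * sigma_x ~ 1.414
print(math.sqrt(var_full))       # 2 * x0 * sigma_x = 2.0, same as f = x^2
```

With the covariance included, \sigma_g = \sqrt{4 x^2 \sigma_x^2} = 2x\sigma_x, exactly matching \sigma_f for f = x^2.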
 
In one case, one has a single error \sigma_x; in the other case, two independent errors \sigma_x and \sigma_y, even though the dependence of f on x and y is the same.

See also - http://sosnick.uchicago.edu/propagation_errors.pdf

I think there is a better discussion of propagation of error, but I just have to find it.
 
Hi, maybe I am too late, but I just saw this. In the second case you are assuming that the error of x is independent of the error of y. That is not true when you set y = x, and that is where the problem lies: you are assuming that whatever happens to y does not happen to x, which is false for x \cdot x.
 
Yes, you must be very careful with dependent variables.

If f(A, r) = A/r, then \sigma_f = \sqrt{(1/r)^2 \sigma_A^2 + (A/r^2)^2 \sigma_r^2}.

If A = \pi r^2, then \sigma_A = 2 \pi r \sigma_r.

This would lead to \sigma_f = \sqrt{(2 \pi \sigma_r)^2 + (\pi \sigma_r)^2} = \sqrt{5} \pi \sigma_r.

But if A = \pi r^2, f must be simplified before propagating errors:

f = A/r = \pi r, so \sigma_f = \pi \sigma_r
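A Monte Carlo check confirms which answer is right (a sketch with illustrative values r_0 = 5 and \sigma_r = 0.01, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)
r0, sigma_r = 5.0, 0.01       # illustrative nominal radius and error
N = 1_000_000

r = rng.normal(r0, sigma_r, N)
A = np.pi * r**2              # A is fully determined by r, not independent
f = A / r                     # algebraically just pi * r

print(np.std(f))              # ~ pi * sigma_r, not sqrt(5) * pi * sigma_r
```

The simulated spread matches \pi \sigma_r, because the "error in A" and the "error in r" are the same underlying fluctuation counted twice by the naive formula.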
 
