# Error propagation and optimum

1. Feb 2, 2005

### Pseudopod

I was collecting data for a simple physics lab today when I stumbled upon a question I couldn't answer.

Very basically, the lab consisted of measuring the time it takes a ball to drop a variety of distances between 0 and 100 cm. By plotting $$\frac{y}{t}$$ vs. $$t$$ (where y is height and t is time) and running an LSQ fit, you can find the value of g: since $$\frac{y}{t} = V_0 + \frac{g}{2}t$$, the slope of that line is $$\frac{g}{2}$$. I want to take data at 20 different heights between 0 cm and 100 cm.

$$y = V_0t + \frac{1}{2} gt^2$$
so, for a drop from rest ($$V_0 = 0$$), $$g$$ is going to be proportional to $$u = \frac{y}{t^2}$$, where $$u = \frac{1}{2}g$$.

Running error propagation:

$$du^2 = (\frac{\partial u}{\partial y})^2 dy^2 + (\frac{\partial u}{\partial t})^2 dt^2$$
$$\frac{\partial u}{\partial y} = \frac{1}{t^2} \quad \text{and} \quad \frac{\partial u}{\partial t} = -\frac{2y}{t^3}$$
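To get a feel for the numbers, here is a quick sketch of $$du$$ at a few heights, using $$t = \sqrt{2y/g}$$ for a drop from rest and $$\frac{\partial u}{\partial t} = -\frac{2y}{t^3}$$. The values of g and of the ruler and timing errors are made-up assumptions for illustration:

```python
import math

g = 9.81     # m/s^2 (assumed)
dy = 0.002   # 2 mm height error (assumed)
dt = 0.005   # 5 ms timing error (assumed)

def du(y):
    """Propagated error on u = y/t^2, with t = sqrt(2y/g) (drop from rest)."""
    t = math.sqrt(2 * y / g)
    return math.sqrt((dy / t**2) ** 2 + (2 * y / t**3 * dt) ** 2)

for y in (0.1, 0.5, 1.0):
    print(f"y = {y:.2f} m  ->  du = {du(y):.3f} m/s^2")
```

With these numbers the propagated error shrinks as the height grows, which matches the intuition below that the long drops are the "good" data points.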

So of my 20 heights I should obviously take more data down low and less data up high: the longer the ball falls, the smaller the propagated error in $$u$$ becomes, whereas the error in y itself is constant regardless of height.

Now my question is this: ideally, how should I space the 20 heights at which I will take data? What I'm basically asking is how to analytically solve for constant $$du$$ when the range and number of data points to be taken is given.
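Here is a purely numerical sketch of one possible reading of that question (an assumption on my part, not the analytic answer being asked for): place the 20 heights with point density proportional to $$du(y)^2$$, so the low-height region, where the error is larger, gets proportionally more points to average that error down. The error sizes dy and dt and the usable range are made up:

```python
import math

g = 9.81
dy, dt = 0.002, 0.005      # assumed measurement errors (m, s)
N = 20                     # number of heights to place
y_lo, y_hi = 0.05, 1.00    # usable range in metres (assumed; du blows up at y = 0)

def du(y):
    """Propagated error on u = y/t^2 for a drop from rest."""
    t = math.sqrt(2 * y / g)
    return math.sqrt((dy / t**2) ** 2 + (2 * y / t**3 * dt) ** 2)

# Cumulative integral of the chosen density w(y) = du(y)^2 on a fine grid.
M = 10_000
ys = [y_lo + (y_hi - y_lo) * i / (M - 1) for i in range(M)]
w = [du(y) ** 2 for y in ys]
cum = [0.0]
for i in range(1, M):
    cum.append(cum[-1] + 0.5 * (w[i] + w[i - 1]) * (ys[i] - ys[i - 1]))

# Place the N heights at equal steps of the cumulative density.
targets = [cum[-1] * (k + 0.5) / N for k in range(N)]
heights, j = [], 0
for tgt in targets:
    while cum[j] < tgt:
        j += 1
    heights.append(ys[j])

print([round(h, 3) for h in heights])
```

With this density the heights cluster near the bottom of the range and spread out toward the top, matching the "more data down low" intuition; a different density (e.g. equalizing the per-point contribution to the fit) would give a different spacing.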

2. Feb 2, 2005

### dextercioby

Before everything, please explain to me what is with those squares in the "error formula"... To make a joke... "the error formula is erroneous"... :tongue2:

Daniel.

3. Feb 2, 2005

### Pseudopod

Shouldn't everything be squared because the error is random (it can be + or -, so you need to square it to get rid of the sign)? Hmm.

4. Feb 2, 2005

### DoubleMike

Course, that would also change the actual numerical values (not to count all those random imaginary errors :tongue:). From memory, I would calculate the differential $$dt = f'(x)dx$$,

then plug in your error and the given x.

5. Feb 2, 2005

### dextercioby

No, for "random signs" you use the MODULUS/ABSOLUTE VALUE, which is the mathematical trick to prove the simple thing that ERRORS ALWAYS ADD...

Daniel...

6. Feb 2, 2005

### Pseudopod

In everything I've seen for random signs, you always square it all. The cross term generally goes away because the errors are independent. I honestly don't know why people don't use absolute value, but I just looked at several science references and they all show it squared.

7. Feb 2, 2005

### DoubleMike

Cross term as in the middle term when you FOIL it? I was wondering about that... $$(x + y)^2 \neq x^2 + y^2$$

I don't know what you mean by independent errors.

Last edited: Feb 2, 2005
8. Feb 2, 2005

### Pseudopod

Yeah that's what I mean by the cross term.

$$(\frac{\partial u}{\partial y}dy + \frac{\partial u}{\partial t}dt)^2 = (\frac{\partial u}{\partial y})^2 dy^2 + (\frac{\partial u}{\partial t})^2 dt^2 + 2(\frac{\partial u}{\partial y}) (\frac{\partial u}{\partial t})dydt$$

But if the errors in y and t are independent (like in my case: the error in my height measurement has absolutely nothing to do with the error in my time measurement), then $$dydt$$ averages out to 0, because the error in each is random and equally likely to be positive or negative. On the other hand, if the errors were dependent in some way, I would have to worry about that term, because it wouldn't necessarily average out to 0.
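A quick Monte Carlo check of that claim (a sketch; the 2 mm and 5 ms error sizes are made up): draw independent zero-mean errors and compare the average of the cross product $$dy\,dt$$ with the averages of the squares.

```python
import random

random.seed(1)
n = 100_000
dys = [random.gauss(0, 0.002) for _ in range(n)]   # independent height errors
dts = [random.gauss(0, 0.005) for _ in range(n)]   # independent timing errors

cross = sum(a * b for a, b in zip(dys, dts)) / n   # averages toward 0
sq_y = sum(a * a for a in dys) / n                 # stays near 0.002**2
sq_t = sum(b * b for b in dts) / n                 # stays near 0.005**2
print(cross, sq_y, sq_t)
```

The cross term collapses toward zero while the squared terms stay pinned at the error variances, which is exactly why it can be dropped for independent errors.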

9. Feb 2, 2005

### DoubleMike

I don't understand why the error has to average to 0... in fact, since it's multiplication, I don't see it being 0 at all. Maybe I have this all wrong.

10. Feb 2, 2005

### Pseudopod

Well, we are saying the error is both random and signed, right? Let's say the error can be anywhere from -100 to 100. Obviously when you pick random numbers from this range, their average will converge on zero. It's the same thing. The whole reason we are squaring the error propagation equation in the first place is to get rid of the negative signs, because errors should always add. If you used the normal error propagation equation:
$$du = (\frac{\partial u}{\partial y}) dy + (\frac{\partial u}{\partial t}) dt$$

You wouldn't get any meaningful answer for random error, because dy and dt will both average to 0. So we have to square it, or use absolute value like dextercioby said, in order to get a meaningful answer out.
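The same point as a quick numerical sketch, using the -100 to 100 example: the signed average cancels, while the absolute value and the square both keep the size of the error.

```python
import random
import statistics

random.seed(0)
errs = [random.uniform(-100, 100) for _ in range(100_000)]

mean_signed = statistics.mean(errs)                # cancels toward 0
mean_abs = statistics.mean(abs(e) for e in errs)   # stays near 50
mean_sq = statistics.mean(e * e for e in errs)     # stays near 100**2 / 3
print(mean_signed, mean_abs, mean_sq)
```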

Last edited: Feb 2, 2005
11. Feb 3, 2005

### ahrkron

Staff Emeritus
In experimental physics, the correct way is usually the one with the squares. Basically, that corresponds to assuming that the errors are Gaussian. When they are, they add up by convolution of the corresponding distributions, which gives the squares formula.

The full formula to obtain the error on a function of many measured quantities, including possible correlations, is (IIRC):

$$\sigma^2 = \left( \begin{array}{c} {\partial f \over \partial x_1} \\ {\partial f \over \partial x_2} \\ \vdots \\ {\partial f \over \partial x_n} \\ \end{array} \right)^T \left( \begin{array}{ccc} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \\ \end{array} \right) \left( \begin{array}{c} {\partial f \over \partial x_1} \\ {\partial f \over \partial x_2} \\ \vdots \\ {\partial f \over \partial x_n} \\ \end{array} \right)$$

This reduces to the formula you mentioned when there are no correlations (i.e., when $$\sigma_{ij}=0$$ for $$i\neq j$$). It also reduces to the usual addition in quadrature for f(x,y)=x+y and for f(x,y)=x-y.
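A small sketch of that matrix formula for $$f(x,y) = x + y$$, whose gradient is $$(1, 1)$$. The 0.3 and 0.4 error values are made up; the example shows both the quadrature limit and a fully correlated case where the errors add linearly:

```python
import numpy as np

# sigma_f^2 = J^T C J for f(x, y) = x + y, gradient J = (1, 1).
J = np.array([1.0, 1.0])
sx, sy = 0.3, 0.4

# Uncorrelated: C is diagonal -> the errors add in quadrature.
C = np.diag([sx**2, sy**2])
sigma_uncorr = np.sqrt(J @ C @ J)   # sqrt(0.3**2 + 0.4**2) = 0.5

# Fully correlated: the off-diagonal covariance terms contribute too.
C[0, 1] = C[1, 0] = sx * sy
sigma_corr = np.sqrt(J @ C @ J)     # 0.3 + 0.4 = 0.7

print(sigma_uncorr, sigma_corr)
```

Setting the off-diagonal entries to zero recovers the squares formula from the earlier posts.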
