# Error calculations

This isn't really a homework problem, but I think this section is the most appropriate place for it, since the question comes out of my lab module.

To calculate the gradient of a line of best fit for a set of data you can use the equation

$$M = \frac{n\sum xy - \sum x \sum y}{n\sum x^2 - \left(\sum x\right)^2}$$

where $\sum xy = x_1 y_1 + x_2 y_2 + \dots$ and so on.
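To make the formula concrete, here's a quick numerical sketch (my own made-up data, not from the lab) that evaluates the sums directly:

```python
# Made-up data, roughly y = 2x, to illustrate the gradient formula.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
sum_x = sum(xs)
sum_y = sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i
sum_x2 = sum(x * x for x in xs)               # sum of x_i squared

# M = (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - (sum(x))^2)
M = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
print(M)  # close to 2, as expected for this data
```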

I'm told that you can calculate the error on this gradient to be

$$\sigma_M = \sqrt{\frac{\sum\left[(y - \bar{y}) - M(x - \bar{x})\right]^2}{n\sum x^2 - \left(\sum x\right)^2}}$$

where $\bar{x}$ and $\bar{y}$ are the mean values of $x$ and $y$.
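And a sketch of the error formula as quoted (again my own made-up numbers, just to check the formula evaluates sensibly — the term being summed is the residual of each point about the best-fit line through the means):

```python
import math

# Same made-up data as a worked example, roughly y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Gradient from the usual least-squares formula.
denom = n * sum(x * x for x in xs) - sum(xs) ** 2
M = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / denom

# Error on the gradient, per the quoted formula:
# sigma_M = sqrt( sum[ (y - ybar) - M*(x - xbar) ]^2 / denom )
resid_sq = sum(((y - ybar) - M * (x - xbar)) ** 2 for x, y in zip(xs, ys))
sigma_M = math.sqrt(resid_sq / denom)
print(sigma_M)  # small, since the data is nearly linear
```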

My question is this: from what rule of error propagation do we arrive at this equation? I can't see how I'd get there using what I already know, so I'm thinking there's something here that I'm missing.

Again, this is not homework, but for peace of mind.

Thanks!