I am doing a calculation that involves taking three or more temperature measurements and plotting them against another (dependent) quantity. The relationship is pretty linear, so I take the line of best fit to obtain an equation with a slope and an intercept. My question is: how do I calculate the error/uncertainty in the resulting slope?

I have looked around the Internet and found ways to calculate the error based purely on the scatter of the points, but what I am after is the error caused by the uncertainties in my measurements. For instance, if my temperature readings are good to 0.1 K, how does that factor into the uncertainty of the slope? (I have previously used software like IGOR Pro that I think calculated those values for me, but I want to know how it is done.)

Should I, for example, take the worst cases (the highest and lowest possible slopes allowed by the measurement uncertainty) and use the difference as the error? Or is that a bit pessimistic? (A formula would be great; I could work through it from there.) Thank you.
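To clarify what I mean by the "worst case" idea, here is a rough sketch (Python/NumPy, with made-up numbers; my real data is just temperature vs. another quantity): tilt the temperature values to the extremes allowed by the ±0.1 K uncertainty and see how much the fitted slope moves.

```python
import numpy as np

temps = np.array([250.0, 270.0, 290.0, 310.0])  # measured temperatures (K), made up
y     = np.array([1.02, 1.48, 1.97, 2.51])      # dependent quantity, made up
dT    = 0.1                                     # temperature uncertainty (K)

# nominal best-fit slope and intercept
slope, intercept = np.polyfit(temps, y, 1)

# Tilt the temperatures: pushing the low-T points up and the high-T points down
# compresses the temperature range and steepens the line; the opposite tilt
# flattens it (this assumes a positive slope, as in these made-up data).
tilt = np.where(temps < temps.mean(), dT, -dT)
slope_steep, _   = np.polyfit(temps + tilt, y, 1)
slope_shallow, _ = np.polyfit(temps - tilt, y, 1)

# half the spread between the steepest and shallowest possible slopes
slope_err = (slope_steep - slope_shallow) / 2

print(f"slope = {slope:.5f} +/- {slope_err:.5f} (worst-case estimate)")
```

That is the kind of estimate I am currently getting by hand, and it feels pessimistic, which is why I am asking whether there is a proper formula for propagating the measurement uncertainties into the slope.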