OK, I have a question I have no idea how to answer (and all my awful undergrad stats books are useless on the matter). Say I make a number of pairs of measurements (x, y). I plot the data and it looks strongly positively correlated. I do a linear regression and get an equation for a line of best fit, say y = 0.3x + 0.1 or something, and the Pearson coefficient is very close to one, e.g. 0.9995 or so.

Now, say the quantity I am interested in is the slope of this line, i.e. 0.3 for the equation above. I take all my measurements, get the line of best fit, and find its slope, and the slope is the thing I actually want. For example, with the photoelectric effect, maybe I measure stopping potential vs. frequency of light; the slope can be related to Planck's constant. Or something similar.

The question I have is: how do I estimate the error (uncertainty) in this slope value? My professor said to use the "standard deviation in the slope," which doesn't sound sensible to me. I thought to myself: well, maybe it has to do with the uncertainty in x and the uncertainty in y. But how would you combine those uncertainties to find the uncertainty in dy/dx?

In short: how does one estimate the error range for a parameter obtained from the slope of a line of best fit on a set of (x, y) data? Thank you so much. This one seems really important, and I'm a bit disturbed I haven't the slightest idea what to do.
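For concreteness, here's a minimal sketch of the kind of fit I mean, with made-up data roughly following y = 0.3x + 0.1. I noticed scipy.stats.linregress reports a "stderr" on the slope; maybe that's the "standard deviation in the slope" my professor meant, but I don't understand where it comes from or how it relates to the measurement uncertainties:

```python
import numpy as np
from scipy import stats

# Made-up (x, y) data: y = 0.3x + 0.1 plus a little Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 0.3 * x + 0.1 + rng.normal(scale=0.01, size=x.size)

# Ordinary least-squares fit; linregress also returns a standard
# error estimate for the slope (and the Pearson r value)
result = stats.linregress(x, y)
print(f"slope     = {result.slope:.4f}")
print(f"intercept = {result.intercept:.4f}")
print(f"Pearson r = {result.rvalue:.5f}")
print(f"stderr    = {result.stderr:.6f}")  # standard error of the slope
```

Is that stderr number the uncertainty I should quote on the slope, or do I still need to fold in the instrument uncertainties on x and y somehow?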