Dear all, let's say I want to know the elastic constant of a spring, k. I measure the force applied to the spring, F, and the displacement of the spring, x, several times, so for N measurements I have x_i and F_i together with their uncertainties. I'm really not an expert in statistics, but I think there are several methods I can use to calculate k and its uncertainty, and I'd like to know the conceptual difference between them:

1. I can use the least-squares method to fit my data to an equation of the form y = Ax + B (where y is F and x is x). This gives me k = -A, and I guess I'd just ignore B. Least squares also gives me an uncertainty for A.

2. I can use the least-squares method to fit the equation y = Ax. Again I get a value for k (= -A) and for its uncertainty.

3. I can regard each pair as a separate measurement of the physical quantity -F/x (i.e. k), and then take the average of the sample and the standard deviation of the mean.

I think the difference between the first and second methods is that in the second I am effectively certain, with no uncertainty, that the line must pass through the origin, so I force it to pass there. Between the two, I'd therefore prefer the second.

What puzzles me is the difference between the second and third methods. Please correct me if I am wrong, but in both methods I assume the quantities x_i and F_i are normally distributed random variables. In method 2 I find the line that maximizes the probability of the points being samples drawn from normal distributions centered on the line (I am not so sure about that...). In method 3 I treat the quantity -F/x itself as a normally distributed random variable, and I find the center of the distribution that maximizes the probability of the sampled values having been drawn from it.

Somehow all this isn't clear to me, especially the part involving estimating the uncertainties. Moreover, which method is reasonably the best to use?
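To make the comparison concrete, here is a minimal sketch in Python/NumPy of the three estimates applied to the same data set. The numbers (k = 5.0 N/m, the displacement range, and the noise level) are entirely hypothetical, chosen only so the three methods can be run side by side:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hooke's-law data: restoring force F = -k*x with k = 5.0 N/m,
# plus Gaussian noise on the force readings.
k_true = 5.0
x = np.linspace(0.02, 0.10, 10)                   # displacements in metres
F = -k_true * x + rng.normal(0.0, 0.005, x.size)  # noisy forces in newtons

# Method 1: ordinary least squares for F = A*x + B, then k = -A (B is ignored).
A1, B1 = np.polyfit(x, F, 1)
k1 = -A1

# Method 2: least squares through the origin, F = A*x.
# Minimising sum_i (F_i - A*x_i)^2 gives A = sum(x*F) / sum(x^2).
A2 = np.sum(x * F) / np.sum(x * x)
k2 = -A2
# Standard error of A2 from the residual variance (N-1 degrees of freedom,
# since one parameter was fitted).
resid = F - A2 * x
sigma_k2 = np.sqrt(np.sum(resid**2) / (x.size - 1) / np.sum(x * x))

# Method 3: treat each -F_i/x_i as a repeated measurement of k;
# report the sample mean and the standard deviation of the mean.
ratios = -F / x
k3 = ratios.mean()
sem3 = ratios.std(ddof=1) / np.sqrt(ratios.size)

print(f"method 1: k = {k1:.3f}")
print(f"method 2: k = {k2:.3f} +/- {sigma_k2:.3f}")
print(f"method 3: k = {k3:.3f} +/- {sem3:.3f}")
```

One thing this sketch makes visible: method 3 weights every ratio equally, so points with small x (where the same force noise produces a much larger scatter in -F/x) get the same weight as points with large x, while method 2 effectively down-weights them. That is already a hint at why the two can disagree.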