Find k in y=kx: linear regression or just average of (y/x)?

In summary: standard regression assumes the x's are known exactly and the error is in the y's; normality is not required for the fit itself, but the usual tests (t-test, F-test) do assume it, and error in the x's would in principle call for a different type of regression.
  • #1
atat1tata
Dear all,
let's say I want to know the spring constant k of a spring, so I take repeated measurements of the force applied to the spring, F, and of the resulting displacement, x.
So, for N measurements, I have the xi and Fi and their uncertainties.
Now, I'm really not an expert in statistics, but I think there are several methods I could use to calculate k and its uncertainty, and I'd like to know the conceptual difference between them.
  1. I can use the least-squares method to fit my data to an equation of the form y = Ax + B (where y is F and x is the displacement), so I would find a value for k = -A; I guess I'd just ignore B. Least squares also gives me a value for the uncertainty of A.
  2. I can use the least-squares method to fit the equation y = Ax; again I get a value for k (= -A) and for its uncertainty.
  3. I can just treat each pair as a separate measurement of the physical quantity -F/x (i.e. k), take the sample average, and compute the standard deviation of the mean (all three are sketched in code just below).
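For concreteness, here is a minimal Python sketch of the three estimators on simulated data; the true k, the noise level, and the x values are made-up assumptions for illustration, and it uses F = kx directly, so the minus signs in the k = -A convention above drop out.

```python
import numpy as np

rng = np.random.default_rng(0)
k_true = 2.5                                   # assumed "true" spring constant
x = np.linspace(0.5, 5.0, 20)                  # displacements, taken as exact
F = k_true * x + rng.normal(0.0, 0.3, x.size)  # forces with constant absolute noise

# Method 1: fit F = A*x + B and keep the slope A
A, B = np.polyfit(x, F, 1)

# Method 2: fit F = A*x, i.e. force the line through the origin
A0 = np.sum(x * F) / np.sum(x * x)

# Method 3: average the individual ratios F_i / x_i
k_avg = np.mean(F / x)

print(A, A0, k_avg)   # all three land near k_true for this setup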

I think the difference between the first and the second method is that in the second I am effectively assuming, with no uncertainty, that the line must pass through the origin, so I force it to pass there. Between the two, I'd prefer the second.
What puzzles me is the difference between the second and the third method. Please correct me if I am wrong, but in both methods I assume the quantities xi and Fi are random, normally distributed variables.
In method 2 I find the line that maximizes the probability that the points are samples from normal distributions centered on the line (I'm not so sure about that...).
In method 3 I consider the quantity itself (-F/x) to be a random, normally distributed value, and I find the center of the distribution that maximizes the probability of drawing the sample values from it.

Somehow all this isn't clear to me, especially the part about estimating the uncertainties.

Moreover, which method is the most reasonable one to use?
 
  • #2
Are you interested in a real-world problem? (The typical spring that one buys in a hardware store does not obey the relation F = kx for small forces. In a real-world problem, you would have to say what you were trying to accomplish by the curve fit.) Or is this a theoretical question?
 
  • #3
Stephen Tashi said:
Are you interested in a real-world problem? (The typical spring that one buys in a hardware store does not obey the relation F = kx for small forces. In a real-world problem, you would have to say what you were trying to accomplish by the curve fit.) Or is this a theoretical question?

Actually, I'm more interested in the theoretical aspects of the question. However, I'm a bit confused at the moment (sorry for my English, I'm Italian), so if you could define a real-world problem and show the most reasonable procedure to follow, it would help me get a feel for it.
 
  • #4
Typically method 2 is going to work a lot better than method 3, but it depends on the nature of your errors. Assume that all your measurements of y have equal uncertainty. In this case y/x has much larger uncertainty when x is small than when x is large. If you use method 3, the error in the average will be dominated by the most uncertain estimates of k. Method 2, in contrast, will deal with this correctly.

On the other hand, suppose your measurements are made in such a way that the relative error is constant, e.g. each y is uncertain by ±10%. In that case method 3 will actually work better.

Generally speaking, if you're not certain, method 2 (or method 1) will be safer. This depends on the precise experimental design, of course, but in my experience method 3, when it fails, fails badly, while method 2, although it may not give you the best possible estimate, will typically give you one that's not terrible.
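A rough Monte Carlo sketch of this point (the setup and numbers are assumptions for illustration, with the 10% figure borrowed from the relative-error example above):

```python
import numpy as np

rng = np.random.default_rng(1)
k_true = 2.5
x = np.linspace(0.2, 5.0, 20)

def one_trial(relative_noise):
    # constant relative (10%) noise vs constant absolute noise in y
    sigma = 0.1 * k_true * x if relative_noise else 0.3
    y = k_true * x + rng.normal(0.0, sigma, x.size)
    k2 = np.sum(x * y) / np.sum(x * x)   # method 2: least squares through the origin
    k3 = np.mean(y / x)                  # method 3: mean of the ratios
    return k2, k3

for relative_noise in (False, True):
    est = np.array([one_trial(relative_noise) for _ in range(2000)])
    rms = np.sqrt(np.mean((est - k_true) ** 2, axis=0))
    label = "relative noise" if relative_noise else "absolute noise"
    print(label, "RMS error: method 2 =", rms[0], " method 3 =", rms[1])
```

With constant absolute noise, method 2 comes out with the smaller RMS error; with constant relative noise, method 3 does.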

atat1tata said:
Please correct me if I am wrong, but in both methods I assume the quantities xi and Fi are random, normally distributed variables.
You're assuming (at least in methods 1 and 2) that you know the x's exactly, and that the y's are random variables with means given by the regression model and constant standard deviations. You are NOT assuming normality; regression still makes sense even if the y's are not normal (although it may not be a maximum likelihood estimate if they're not). But many of the statistical tests you would do (t-tests, F-test) assume normality.

If there's error in the x's, you should in principle do a somewhat different type of regression. In practice it won't make much difference and no one ever does this.
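One standard choice for that "somewhat different type of regression" is Deming regression, which allows error in both x and y but needs the ratio of the two error variances; the sketch below assumes that ratio is known (delta = 1 just means equal error variances, purely an assumption for illustration).

```python
import numpy as np

def deming_slope(x, y, delta=1.0):
    """Deming-regression slope for y = a + b*x with errors in both variables.
    delta is the (assumed known) ratio var(y-errors) / var(x-errors)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return (syy - delta * sxx
            + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
```

In the limit of vanishing x-error (delta → ∞) this reduces to the ordinary least-squares slope of method 1.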
 
  • #5
Thank you very much, pmsrw3! Now it's much clearer to me.

You are right: for the linear regression I assume that I know the x's without uncertainty (I had forgotten that).

However, I still have some doubts.
First, regarding method 3: suppose I did not estimate the errors for the individual xi's and yi's, but just took the distribution of the zi = yi/xi and estimated its standard deviation of the mean. Would that still be significantly different from method 2?

Just out of curiosity, could you explain a bit more how you deal with non-normal y's? How do you do the regression?

Thank you
 
  • #6
atat1tata said:
...suppose I did not estimate the errors for the individual xi's and yi's, but just took the distribution of the zi = yi/xi and estimated its standard deviation of the mean. Would that still be significantly different from method 2?
Yes, it would be different. Even worse, it would be wrong. The usual formula for the SEM assumes that the z's all have equal SD. If they don't, it gives a wrong answer.

Just out of curiosity, could you explain a bit more how you deal with non-normal y's? How do you do the regression?
Just the way you normally would. You can even do the usual ANOVA to estimate sources of variance -- that doesn't depend on a normal distribution, either.
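If the individual z_i do come with known but unequal standard deviations, the usual fix is an inverse-variance weighted mean rather than the plain SEM formula. A minimal sketch, where sigma_z is an assumed array of per-point standard deviations:

```python
import numpy as np

def weighted_mean(z, sigma_z):
    """Inverse-variance weighted mean of z and its standard error."""
    w = 1.0 / np.asarray(sigma_z, dtype=float) ** 2
    mean = np.sum(w * np.asarray(z, dtype=float)) / np.sum(w)
    sem = np.sqrt(1.0 / np.sum(w))
    return mean, sem
```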
 
  • #7
All 3 methods can be viewed as linear regression: write the linear model as Y = Xb + ε, where X is the design matrix; the least-squares solution is b = inv(X'X) X'Y, and the covariance matrix σ² inv(X'X) gives the standard errors of b.

1) y = kx + c + ε has Y = y, X = [x, 1], b = [k; c], so k = (E(xy) − E(x)E(y)) / (E(x²) − E(x)²)

2) y = kx + ε has Y = y, X = x, b = k, so k = E(xy) / E(x²)

3) y = (k + ε)x has Y = y/x, X = 1, b = k, so k = E(y/x)
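A small NumPy sketch of the same three fits written through the matrix formula b = inv(X'X) X'Y (x and y are assumed to be 1-D arrays of equal length):

```python
import numpy as np

def least_squares(X, Y):
    """Solve b = inv(X'X) X'Y without forming the inverse explicitly."""
    return np.linalg.solve(X.T @ X, X.T @ Y)

def three_fits(x, y):
    k1, c = least_squares(np.column_stack([x, np.ones_like(x)]), y)  # 1) y = kx + c
    k2 = least_squares(x[:, None], y)[0]                             # 2) y = kx
    k3 = least_squares(np.ones((x.size, 1)), y / x)[0]               # 3) y/x = k
    return k1, k2, k3
```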
 
  • #8
atat1tata said:
so if you could define a real world problem and show what is the most reasonable procedure to follow, it would help me to grasp an idea.

Real-world problems are complicated and I'm too busy to make up all the details of one! However, I will give these illustrations. Suppose the purpose of estimating k is to determine whether the given spring will work in a shock absorber. If the shock absorber has limited "travel" and we overestimate k, then the spring might be used in a situation where it would travel too far and bang into something. The penalty for overestimating k will be greater than the penalty for underestimating it. On the other hand, suppose the spring is to be used in a pendulum and the quantity of concern is the frequency of the pendulum. Then what we are really interested in is sqrt(k). Real-world requirements may be essentially bureaucratic. For example, suppose a spring manufacturer is required by industry standards to assure that the F produced by a given x is within plus or minus 10 percent of the value predicted by F = kx. Percentage error has different implications than absolute error.

In a real world problem, a person should ask himself "exactly what am I trying to accomplish?". Unfortunately, the average technician will not do this.
 
  • #9
Thanks to you all, I understand it better now
 

1. What is the purpose of finding k in y=kx?

The purpose of finding k in y=kx is to determine the slope of a linear relationship between two variables, y and x. The value of k represents the rate of change between these two variables and can be used to make predictions or analyze the strength of the relationship.

2. What is linear regression?

Linear regression is a statistical method used to model the relationship between two variables, typically represented by a straight line on a graph. It is often used to find the best fit line that describes the relationship between the variables and can be used to make predictions or analyze the relationship.

3. How is k calculated in y=kx?

k is calculated by dividing the change in y (Δy) by the change in x (Δx). This can be represented as k = Δy/Δx. Equivalently, it can be calculated using the formula k = (y2-y1)/(x2-x1), where (x1,y1) and (x2,y2) are any two points on the line.
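For example, if the line passes through the points (2, 6) and (4, 12), then k = (12 − 6)/(4 − 2) = 3.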

4. Can k be negative?

Yes, k can be negative. This indicates a negative relationship between the two variables, where an increase in x results in a decrease in y. A negative k value would result in a downward sloping line on a graph.

5. Is finding k the only way to analyze a linear relationship between y and x?

No, finding k by a fit is not the only way to analyze a linear relationship between y and x. Another method is to average the ratios y/x, which also gives a sense of the relationship. However, this average weights points with small x very heavily, so when the measurement errors are roughly constant in absolute size it may not represent the overall trend of the data as reliably as a least-squares fit.
