Finding Intercept in R with lm() for Specified Slope

AI Thread Summary
The discussion focuses on fitting a line to data in R using the lm() function, specifically to find the best-fit intercept given a specified slope. The error function for linear least squares is defined, leading to the formula for the intercept, α, as α = ȳ - β·x̄, where ȳ and x̄ are the averages of y and x, respectively. A participant also explores fitting data to a power-law model of the form ax^b, using a log transformation for linear regression. They derive an expression for the parameter A, which is confirmed as correct, with A calculated as A = Σ(y_i · x_i^b) / Σ((x_i^b)^2). This approach is validated as acceptable for the intended analysis.
Mosis
I'm interested in fitting a line to some data. There is a built-in function in R, lm(), that gives me both the best-fit slope and intercept; however, I would like to determine the best-fit intercept GIVEN a specified value of the slope. Is there an easy way to do this?

I apologize if this is in the wrong forum. I know it's not exactly "programming" but I don't know a more appropriate place to post this.
 
So you're trying to fit your data to the model

y = \alpha + \beta \cdot x

where \beta is a given (non-parameter) and \alpha is a parameter.

If that's the case, then the Error function (for linear least squares) is

E = \sum_{i=1}^{n} (y_i - (\alpha+\beta \cdot x_i))^2

Since the model only has one free parameter, the solution is rather easy. Take the derivative of E with respect to this parameter \alpha and set it equal to zero. The following results:

n \cdot \alpha = \sum y_i - \beta \cdot \sum x_i

or

\alpha = \bar y - \beta \cdot \bar x

where the bar denotes an average, e.g., \bar x is the average of the x values.
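
In R, a minimal sketch of this (x, y, and beta here are placeholder names for your data vectors and the specified slope):

Code:
# Fixed slope, intercept-only least squares fit
x    <- c(1.2, 2.5, 3.1, 4.8, 6.0)   # example data
y    <- c(2.3, 4.1, 5.0, 7.9, 9.6)
beta <- 1.5                           # the specified slope

# Closed-form solution: alpha = mean(y) - beta * mean(x)
alpha <- mean(y) - beta * mean(x)

# Equivalent with lm(): move the known slope term into an offset,
# so only the intercept is estimated
fit <- lm(y ~ 1 + offset(beta * x))
coef(fit)["(Intercept)"]              # matches alpha above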
 
Thanks for the reply!

Actually, I've tried something different. I'm ultimately interested in fitting some data with a power law of the form a x^b, where b is the known parameter. One approach is linear regression on the log transform, where b is the known slope and \log a is the unknown intercept. Instead I considered S = \sum_i (y_i - a x_i^b)^2. Differentiating with respect to a and setting the expression equal to zero gives \sum_i x_i^b (y_i - A x_i^b) = 0 (writing A for the fitted value of a), and Maple can easily solve for A given the data.

Is this the correct approach?
 
Yes, that's an acceptable approach. Note that the solution for "A" is now

A = \frac {\sum y_i \cdot x_i^b}{\sum (x_i^b)^2}
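
A quick R sketch of that calculation (x, y, and b are placeholder names for your data and the known exponent):

Code:
# Known exponent b, single free parameter A in the model y = A * x^b
x <- c(1, 2, 3, 4, 5)                 # example data
y <- c(2.1, 7.8, 18.5, 31.7, 49.9)
b <- 2                                # the known exponent

# Closed-form solution from setting dS/dA = 0
A <- sum(y * x^b) / sum(x^(2 * b))

# Equivalent no-intercept regression of y on x^b with lm()
fit <- lm(y ~ 0 + I(x^b))
coef(fit)                             # matches A above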
 