Solving for least square coefficients of power law

In summary: the thread discusses two different methods for solving for A in the equation y_i = A*x_i^b, and why they can give different answers. The first method takes the logarithm of both sides and applies least squares in log space; the second directly minimizes the sum of squared residuals of the raw data. The thread also clarifies that a lower residual means a better fit.
  • #1
enfield
[tex] y_i=A{x_i}^b [/tex]

When I solve for A two different ways I get different answers, so somewhere I'm doing something wrong. If someone could point out where, I would be grateful :).

Using logs:

[tex] y_i=A{x_i}^b [/tex]
[tex] ln(y_i)=ln(A)+b*ln(x_i) [/tex]
[tex] ln(y_i)-(ln(A)+b*ln(x_i))=r_i [/tex] For least squares we want to minimize [tex] S=\sum_i^{n}{r_i}^2 [/tex], which means the gradient has to be zero. I only care about finding A right now, so I only need the partial derivative with respect to A:
[tex] \frac{\partial S}{\partial A}=2 \sum_i^{n}{r_i}\frac{\partial r_i}{\partial A}= -\frac{2}{A}\sum_i^{n}(ln(y_i)-ln(A)-b*ln(x_i))=0 [/tex]

The factor -2/A is nonzero, so the sum itself has to be zero:

[tex]\sum_i^{n}(ln(y_i)-ln(A)-b*ln(x_i))=0[/tex]
[tex]\sum_i^{n}ln(y_i)-n*ln(A)-b\sum_i^{n}ln(x_i)=0[/tex]
[tex] ln(A)=\frac{\sum_i^{n}ln(y_i)-b\sum_i^{n}ln(x_i)}{n} [/tex]

(This is the derivation that I think is correct.)
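
For concreteness, here is a minimal numerical sketch of that closed form (a hypothetical example: the data arrays and the value of b below are made up for illustration, and b is assumed already known):

[code]
import numpy as np

# made-up positive data for illustration (hypothetical values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 8.3, 17.6, 31.9, 49.5])
b = 2.0  # assume the exponent is already known

# ln(A) = (sum ln(y_i) - b * sum ln(x_i)) / n
ln_A = (np.log(y).sum() - b * np.log(x).sum()) / len(x)
A_log = np.exp(ln_A)
print(A_log)
[/code]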
But when I solve for A without taking the logs of each side first, I get something else:

[tex] y_i - A{x_i}^b = r_i [/tex]
[tex] \frac{\partial S}{\partial A}=2 \sum_i^{n}{r_i}\frac{\partial r_i}{\partial A}= 2\sum_i^{n}(y_i - A{x_i}^b)\left(-{x_i}^b\right)=0 [/tex]

[tex]\sum_i^{n}(-{x_i}^b{y_i} + A{x_i}^{2b})=0 [/tex]

[tex]-\sum_i^{n}{x_i}^b{y_i}+A\sum_i^{n}{x_i}^{2b}=0 [/tex]
[tex] A=\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}} [/tex]

And if you take the ln of this to compare with what we got before, you get:

[tex] ln(A)= ln\left(\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}}\right) =ln\left(\sum_i^{n}{x_i}^b{y_i}\right)-ln\left(\sum_i^{n}{x_i}^{2b}\right) [/tex]

Which is not the same as:

[tex] ln(A)=\frac{\sum_i^{n}ln(y_i)-b\sum_i^{n}ln(x_i)}{n} [/tex]

as far as I can tell...
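
Continuing the numerical sketch above, the direct (no-log) closed form gives a different number on the same made-up data:

[code]
# direct least-squares estimate on the raw data (same x, y, b as above)
# A = sum(x_i^b * y_i) / sum(x_i^(2b))
A_direct = (x**b * y).sum() / (x**(2 * b)).sum()
print(A_direct)  # generally differs from A_log above
[/code]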
 
  • #2
That's because you solved two different problems.

Minimizing [itex]\sum[ln(y_i)-ln(A)-b*ln(x_i)]^2[/itex] is not necessarily the same thing as minimizing [itex]\sum(y_i - A{x_i}^b)^2[/itex]. The second method will result in a lower raw-space residual unless the two estimates happen to coincide (for instance, when the data fit the curve exactly with zero residual), in which case both yield the same result.
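
A quick numerical check of this (reusing the arrays and estimates from the sketches above; the helper names are just for illustration):

[code]
# raw-space and log-space sums of squared residuals
def sse_raw(A):
    return ((y - A * x**b) ** 2).sum()

def sse_log(A):
    return ((np.log(y) - np.log(A) - b * np.log(x)) ** 2).sum()

# each estimate wins on the objective it was derived from
print(sse_raw(A_direct) <= sse_raw(A_log))  # True
print(sse_log(A_log) <= sse_log(A_direct))  # True
[/code]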
 
  • #3
Ah, thanks so much. Yeah, I had thought they were maybe different problems, but I couldn't see exactly how.

When you do least squares on the log of the data and of the function you are fitting, you won't get the same coefficients that you would get from doing least squares on the unmodified data and function.

Okay, that makes sense. Does lower residual mean lower mean squared error, so better fit?
 
  • #4
enfield said:
Does lower residual mean lower mean squared error, so better fit?

Yes, essentially. The residual sum of squares is just n times the mean squared error, so a lower residual means a lower MSE and a better fit.
 
  • #5

Your direct (no-log) derivation is essentially right once the algebra is tidied up. The partial derivative with respect to A is:

[tex] \frac{\partial S}{\partial A}=2 \sum_i^{n}{r_i}\frac{\partial r_i}{\partial A}= -2\sum_i^{n}{x_i}^b(y_i - A{x_i}^b)=0 [/tex]

which leads to:

[tex] A=\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}} [/tex]

Taking the log to compare with the log-space answer:

[tex] ln(A)=ln\left(\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}}\right)=ln\left(\sum_i^{n}{x_i}^b{y_i}\right)-ln\left(\sum_i^{n}{x_i}^{2b}\right) [/tex]

This is indeed not the same as [itex]ln(A)=\frac{\sum_i^{n}ln(y_i)-b\sum_i^{n}ln(x_i)}{n}[/itex], and it shouldn't be: as noted above, the two methods minimize different objective functions, so the discrepancy is expected rather than a sign of an algebra error.
 

Related to Solving for least square coefficients of power law

1. What is the power law equation and why is it important?

The power law equation is a mathematical relationship between two variables where one variable is proportional to the other raised to a certain power. It is important because it can be used to model and predict various natural phenomena, such as population growth, income distribution, and network connectivity.

2. How do you solve for the least square coefficients of a power law?

To solve for the least-squares coefficients of a power law, you minimize the sum of squared differences between the observed data points and the values predicted by the power law equation. In practice this is done either by taking logarithms to linearize the model and running a linear regression, or by fitting the power law directly with nonlinear least squares; as the thread above shows, the two approaches minimize different objectives and can give different coefficients. A sketch of the direct approach follows below.
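
As one illustration, a direct nonlinear least-squares fit of y = A*x^b can be done with SciPy's curve_fit (a minimal sketch; the data values are made up for illustration):

[code]
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, A, b):
    return A * x**b

# hypothetical data for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 8.3, 17.6, 31.9, 49.5])

# p0 gives the optimizer a starting guess for A and b
params, cov = curve_fit(power_law, x, y, p0=[1.0, 1.0])
A_fit, b_fit = params
print(A_fit, b_fit)
[/code]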

3. What is the significance of the least square coefficients in a power law equation?

The least square coefficients in a power law equation represent the best fit values that minimize the overall error between the observed data and the predicted values. They can be used to determine the strength and direction of the relationship between the variables and can also be used to make predictions for new data points.

4. What factors can affect the accuracy of the least square coefficients in a power law equation?

The accuracy of the least square coefficients in a power law equation can be affected by the quality and quantity of the data, the chosen power law model, and the assumptions made during the regression analysis. It is important to carefully select and prepare the data and to critically evaluate the results to ensure the coefficients are reliable.

5. Can the least square coefficients of a power law change over time?

Yes, the least square coefficients of a power law may change over time if the underlying relationship between the variables changes. This can be due to various factors such as external influences, changes in the data, or improvements in the model used. It is important to regularly re-evaluate and update the coefficients to ensure the accuracy of the power law equation.
