
Solving for least square coefficients of power law

  1. Oct 14, 2011 #1
    [tex] y_i=A{x_i}^b [/tex]

    When I solve for A two different ways I am getting different answers..so somewhere I'm doing something wrong. If someone could point out where I would be grateful :).

    Using logs:

    [tex] y_i=A{x_i}^b [/tex]
    [tex] ln(y_i)=ln(A)+b*ln(x_i) [/tex]
    [tex] ln(y_i)-(ln(A)+b*ln(x_i))=r_i [/tex]

    For least squares we want to minimize [tex] S=\sum_i^{n}{r_i}^2 [/tex], which means the gradient has to be zero. I only care about finding A right now, so I only need the partial with respect to A (note that [itex]\partial r_i/\partial A = -1/A[/itex]):
    [tex] \frac{\partial S}{\partial A}=2 \sum_i^{n}{r_i}\frac{\partial r_i}{\partial A}= -\frac{2}{A}\sum_i^{n}\left(ln(y_i)-ln(A)-b*ln(x_i)\right)=0 [/tex]

    The factor multiplying the sum is nonzero, so the sum itself has to vanish, giving:

    [tex] ln(A)=\frac{\sum_i^{n}ln(y_i)-b\sum_i^{n}ln(x_i)}{n} [/tex]

    (this is the derivation that I think is correct).
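    A quick numeric check of this closed form (the data and the exponent b below are made up for illustration; the points follow roughly y = 2*x^1.5):

```python
import math

# hypothetical sample data, roughly y = 2 * x^1.5 with a little noise
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 5.5, 10.6, 15.8, 22.0]
b = 1.5  # assume the exponent is already known

n = len(xs)
# ln(A) = (sum ln(y_i) - b * sum ln(x_i)) / n  -- the log-space normal equation
ln_A = (sum(math.log(y) for y in ys) - b * sum(math.log(x) for x in xs)) / n
A = math.exp(ln_A)
print(A)  # close to the true value 2
```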
    But when i solve for A without taking the logs of each side first I get something else:

    [tex] y_i - A{x_i}^b = r_i [/tex]

    [tex] \frac{\partial S}{\partial A}=2 \sum_i^{n}{r_i}\frac{\partial r_i}{\partial A}= 2\sum_i^{n}(y_i - A{x_i}^b)\left(-{x_i}^b\right)=0 [/tex]

    [tex]\sum_i^{n}(-{x_i}^b{y_i} + A{x_i}^{2b})=0 [/tex]

    [tex]-\sum_i^{n}{x_i}^b{y_i}+A\sum_i^{n}{x_i}^{2b}=0 [/tex]
    [tex] A=\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}} [/tex]
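    A numeric sketch of this direct normal equation, using the same made-up data and fixed b as before (note the weight on each y_i is {x_i}^b):

```python
# direct (non-log) closed form for A with b held fixed:
#   A = sum(x_i^b * y_i) / sum(x_i^(2b))
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 5.5, 10.6, 15.8, 22.0]
b = 1.5

num = sum(x**b * y for x, y in zip(xs, ys))
den = sum(x**(2 * b) for x in xs)
A = num / den
print(A)  # close to the true value 2, but not identical to the log-space estimate
```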

    And if you take the ln of it to compare it with what we got before you get:

    [tex] ln(A)= ln\left(\frac{\sum_i^{n}{x_i}^b{y_i}}{\sum_i^{n}{x_i}^{2b}}\right) =ln\left(\sum_i^{n}{x_i}^b{y_i}\right)-ln\left(\sum_i^{n}{x_i}^{2b}\right) [/tex]

    Which is not the same as:

    [tex] ln(A)=\frac{\sum_i^{n}ln(y_i)-b\sum_i^{n}ln(x_i)}{n} [/tex]

    as far as I can tell...
  3. Oct 14, 2011 #2


    Homework Helper

    That's because you solved two different problems.

    Minimizing [itex]\sum[ln(y_i)-ln(A)-b*ln(x_i)]^2[/itex] is not necessarily the same thing as minimizing [itex]\sum(y_i - A{x_i}^b)^2[/itex]. The second method will result in a lower residual sum of squares in the original (untransformed) variables, unless the data happen to fit the curve exactly (i.e. zero residual), in which case both yield the same result.
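    This claim can be checked numerically. A sketch with made-up noisy data and a fixed b: the direct estimate minimizes the untransformed sum of squared residuals exactly (it is the vertex of a parabola in A), so its residual can never exceed the log-space estimate's.

```python
import math

# hypothetical noisy data, roughly y = 2 * x^1.5
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.5, 5.0, 11.5, 15.0, 23.0]
b = 1.5
n = len(xs)

# A from the log-space normal equation
A_log = math.exp((sum(math.log(y) for y in ys)
                  - b * sum(math.log(x) for x in xs)) / n)
# A from the direct normal equation
A_dir = sum(x**b * y for x, y in zip(xs, ys)) / sum(x**(2 * b) for x in xs)

def sse(A):
    """Sum of squared residuals in the original (untransformed) variables."""
    return sum((y - A * x**b)**2 for x, y in zip(xs, ys))

# sse is quadratic in A and minimized exactly at A_dir,
# so sse(A_dir) <= sse(A_log) for any data
print(sse(A_dir), sse(A_log))
```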
    Last edited: Oct 14, 2011
  4. Oct 14, 2011 #3
    Ah, thanks so much. Yeah, I had thought they were maybe different problems, but I couldn't see exactly how.

    Doing least squares on the logs of the data and model won't, in general, give the same coefficients as doing least squares on the unmodified data and model.

    Okay, that makes sense. Does lower residual mean lower mean squared error, so better fit?
  5. Oct 14, 2011 #4




    Yes. A lower residual sum of squares means a lower mean squared error (they differ only by the factor of n), so a better fit in that sense.