Linear Regression: Pros and cons of Normal vs. simplified methods?

  1. Dec 31, 2011 #1
    I'm currently looking at a linear regression handout from uni, and it gives two methods for calculating the regression equation. The normal one finds a and b for y = a + bx; the equations for a and b are given in the handout, but I'll assume you're familiar with them. The simplified one uses

    [itex]y = Bx + (\overline{y} - B\overline{x})[/itex]​

    The two produce slightly different values, and I assume the normal one is more precise than the simplified one.

    What I'd like to know is which one is best to use, and when? Is the difference negligible?

    Thanks, and sorry if I've posted this in the wrong forum.
     
  3. Dec 31, 2011 #2
    The two are mathematically identical, so the difference is likely roundoff error due to the difference in computation.

    [itex]y = Bx + (\overline{y} - B\overline{x})[/itex]
    [itex]y = Bx + A, \quad \text{where } A = \overline{y} - B\overline{x}[/itex]
    [itex]y = A + Bx[/itex]

    Note that the standard least squares estimates are the best linear unbiased estimates (as well as the maximum likelihood estimates), provided that the model assumptions are met. In this case, you would rarely want to use anything else.
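    As a quick sketch of the point above (not from the thread; the data and variable names are made up for illustration), here are the two algebraically equivalent computations side by side: the raw-sums "textbook" formulas versus the centered form y = Bx + (ȳ − Bx̄). Any difference is pure floating-point roundoff:

    ```python
    import numpy as np

    # Illustrative synthetic data (assumption: any x, y sample works the same way)
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 100)
    y = 2.5 * x + 1.0 + rng.normal(0.0, 0.5, 100)
    n = len(x)

    # "Normal" formulas using raw sums
    b1 = (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(x**2) - x.sum()**2)
    a1 = (y.sum() - b1 * x.sum()) / n

    # "Simplified" centered form: slope from deviations, then A = ybar - B*xbar
    xbar, ybar = x.mean(), y.mean()
    b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar)**2)
    a2 = ybar - b2 * xbar

    # The discrepancies are at the level of machine precision
    print(abs(b1 - b2), abs(a1 - a2))
    ```

    The centered form is generally the numerically safer of the two, since the raw-sums version can lose precision when the x values are large relative to their spread; for typical hand-calculation data the difference is negligible.
    
    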
     