
A Nonlinear regression in two or more independent variables

  1. Jul 6, 2017 #1
    Hi. I wanted to learn more about this topic, but it seems all the available resources on the internet point to using R, SPSS, MINITAB, or EXCEL.

    Is there an established numerical method for such cases? I am aware of the Levenberg-Marquardt, Gauss-Newton, and similar methods for nonlinear regression on one independent variable only. I would like to ask how I can extend these methods for use on two or more independent variables.
     
  3. Jul 6, 2017 #2

    BvU (Science Advisor, Homework Helper, Gold Member)

    Hi,

    I suggest you widen your search to include non-linear least squares minimization. Plenty of stuff there.
     
  4. Jul 6, 2017 #3
    Hi. Thanks for replying.

    I did search for nonlinear LSM as well, but I keep getting information only about a single independent variable.
     
  5. Jul 6, 2017 #4

    BvU (Science Advisor, Homework Helper, Gold Member)

    In the link I gave, ##\bf x## is a vector -- or is that not what you are referring to?
     
  6. Jul 6, 2017 #5
    Errr. If I understand this paper correctly, x is a vector containing the values of the independent variable. My problem refers to a case where there are two independent variables (and thus two vectors).

    If I'm not reading this correctly, I extend my apologies.
     
  7. Jul 6, 2017 #6

    BvU (Science Advisor, Homework Helper, Gold Member)

    No need for apologies.
    Two independent vectors with N and M components, respectively, make one vector with N+M components ....
     
  8. Jul 6, 2017 #7

    FactChecker (Science Advisor, Gold Member)

    Yes. In this context, all the independent variables are usually consolidated into a single vector, X. That makes the math notation much more concise in terms of vector operations.
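
    For instance, a quick sketch in Python/NumPy (the numbers and the model form here are made up, purely to show the packing):

    [CODE]
    import numpy as np

    # Two independent variables measured at the same N data points (made-up numbers)
    x1 = np.array([1.0, 2.0, 3.0, 4.0])
    x2 = np.array([0.5, 1.5, 2.5, 3.5])

    # Consolidate into one X: one row per data point, each row holding
    # that point's vector of independent variables.
    X = np.column_stack([x1, x2])             # shape (N, 2)

    # A model then takes whole rows at once, f(X, p):
    def model(X, p):
        a, b = p
        return a * X[:, 0] + b * X[:, 1]**2  # arbitrary example form
    [/CODE]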
     
  9. Jul 7, 2017 #8
    This is making me feel sad. I am totally unable to comprehend the notation. I have no idea how to insert the second set of independent variables.

    As I understood it, the data should form a matrix of N rows (data points) × M columns (independent variables). The Jacobian should be simple, I guess, as it's just the partial derivative of the function with respect to each modelling parameter, evaluated at the data points (see the sketch at the end of this post for what I think the shapes are).

    Then I hit a dead end. I have no idea how to proceed further. I mean, I can evaluate ##J^T J## and such; but regarding the independent variables, do I just move forward with the matrix operations?
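
    To check my own understanding of the shapes, a rough Python sketch (the exponential model is invented purely for illustration):

    [CODE]
    import numpy as np

    # Invented example model: f(x, y; a, b, c) = a * exp(b*x + c*y)
    # Data matrix X: N rows (data points) x 2 columns (independent variables).
    # Jacobian J: N rows, but one column PER PARAMETER (3 here), not per variable.
    def jacobian(X, p):
        a, b, c = p
        x, y = X[:, 0], X[:, 1]
        e = np.exp(b * x + c * y)
        # columns are df/da, df/db, df/dc, each evaluated at every data point
        return np.column_stack([e, a * x * e, a * y * e])   # shape (N, 3)
    [/CODE]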
     
  10. Jul 7, 2017 #9

    BvU (Science Advisor, Homework Helper, Gold Member)

    I have a feeling it's not the notation that's the barrier here. You have some experiments where you vary something (the independent variables ##\vec x## ) and you measure something (the dependent variables ##\vec y##). Holding those separate is already a major task in many cases... :rolleyes:.

    And you have some idea in the form of a model that expresses the ##y_i## as a function of the ##x_j##: ##\ \hat y_i = f_i (\vec x, \vec p)##. Here ##\hat y_i## is the expected (predicted) value of the actual measurement ##y_i##. And ##\vec p## is a vector of parameters, the things you are actually after.

    What you want to minimize is ##\displaystyle {\sum {(y_i - \hat y_i)^2\over \sigma_i^2} } ## by suitably varying the ##p_k .\ \ ## The ##\sigma_i## are the (estimated) standard deviations in the ##\ y_i##.

    This is a bit general; perhaps we can make this easier if you elaborate a bit more on what exactly you are doing, or by focusing on a comparable example ?
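
    In code, that objective might look like the following (a minimal Python sketch; the model function and the data are placeholders you would supply):

    [CODE]
    import numpy as np

    def chi_squared(p, model, X, y, sigma):
        """Sum of sigma-weighted squared residuals for parameter vector p."""
        y_hat = model(X, p)                    # predictions f(X_i, p)
        return np.sum(((y - y_hat) / sigma) ** 2)
    [/CODE]

    If the ##\sigma_i## are unknown (or all equal), setting them all to 1 simply reduces this to the ordinary sum of squared residuals.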
     
  11. Jul 7, 2017 #10
    Hi! Thank you very much for replying.

    Yes, I'm actually looking at the general case. Say I have this set of data:
    [image: a small table of measured (x, y, Z) data points]

    I want to fit the modelling parameters a, b, and z in Z(x, y). While this can be transformed into a multiple linear regression problem, I wanted to be able to do it using nonlinear regression. That's why I was asking whether there is a numerical method tailored to these kinds of problems, or a way to extend, at the very least, the Gauss-Newton method.

    EDIT: I think I understand the strategy you mentioned. Using guess values of the parameters a, b, and z, I'll evaluate the modelling function at those initial guesses and at the points (x, y). Then I'll get the sum of the squares of the residuals, something like ##(Z_\text{measured} - Z_\text{predicted})^2## summed over all data points. The problem then becomes a minimization problem, which I can kill using something similar to Newton-Raphson for optimization or steepest descent. I do not remember using that estimated variance, though (I don't even know how to calculate that)...

    My question was whether there is a more 'robust' or algorithmic method for determining the parameters; something similar to Levenberg-Marquardt.
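
    EDIT 2: To make the strategy above concrete, a rough Python sketch of a damped Gauss-Newton iteration (the power-law model ##Z = a x^b y^z## is only a stand-in for my actual model, which isn't shown here):

    [CODE]
    import numpy as np

    def model(XY, p):
        a, b, z = p
        x, y = XY[:, 0], XY[:, 1]
        return a * x**b * y**z               # stand-in power-law form

    def jacobian(XY, p, h=1e-7):
        # forward-difference partials with respect to the PARAMETERS only;
        # analytic partials would work just as well
        N, P = XY.shape[0], len(p)
        J = np.zeros((N, P))
        f0 = model(XY, p)
        for k in range(P):
            dp = np.zeros(P)
            dp[k] = h
            J[:, k] = (model(XY, p + dp) - f0) / h
        return J

    def fit(XY, Z, p0, lam=1e-3, n_iter=100):
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = Z - model(XY, p)             # residuals; data are constants here
            J = jacobian(XY, p)
            # solve (J^T J + lam*I) dp = J^T r for the parameter update dp
            dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
            p += dp
            if np.max(np.abs(dp)) < 1e-10:
                break
        return p

    # fake noiseless data generated with (a, b, z) = (2.0, 1.5, 0.5)
    rng = np.random.default_rng(1)
    x = rng.uniform(1.0, 5.0, 30)
    y = rng.uniform(1.0, 5.0, 30)
    XY = np.column_stack([x, y])
    Z = 2.0 * x**1.5 * y**0.5
    print(fit(XY, Z, p0=[1.5, 1.0, 1.0]))    # should approach [2.0, 1.5, 0.5]
    [/CODE]

    With lam = 0 this is plain Gauss-Newton; making lam adaptive (grow it when a step increases the residual, shrink it when it decreases) is essentially Levenberg-Marquardt.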
     
  12. Jul 7, 2017 #11
    Hold on. I'm squealing here because of a tiny eureka moment. I'll try studying this on my own and see what happens.
     
  13. Jul 7, 2017 #12
    Aha. I made it work.

    Basically, I just need to run the algorithm normally, since the partial derivatives are with respect to the modelling parameters anyway, and the method treats the data points as constants. I had this idea because your post reminded me of the generic approach of minimizing the sum of the squared residuals.
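
    For anyone who finds this thread later: library routines handle the multi-variable case the same way. A sketch using SciPy's curve_fit, which defaults to Levenberg-Marquardt for unconstrained fits (the model form is again just a stand-in):

    [CODE]
    import numpy as np
    from scipy.optimize import curve_fit

    def model(xy, a, b, z):
        x, y = xy                      # xdata can be a tuple of both variables
        return a * x**b * y**z         # stand-in model form

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
    Z = 2.0 * x**1.5 * y**0.5          # fake "measurements"

    popt, pcov = curve_fit(model, (x, y), Z, p0=[1.0, 1.0, 1.0])
    print(popt)                        # should come out near [2.0, 1.5, 0.5]
    [/CODE]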

    Moral lesson: do not over-analyze, lol.
    Thanks again!
     