Nonlinear regression in two or more independent variables

AI Thread Summary
The discussion centers on extending nonlinear regression methods, like Levenberg-Marquardt and Gauss-Newton, to cases with two or more independent variables. Participants emphasize the importance of consolidating independent variables into a single vector for mathematical clarity. The conversation highlights the challenge of managing multiple independent variables and the need for a robust numerical method for parameter estimation. One user shares a breakthrough realization about treating data points as constants while running the algorithm, simplifying the process. Overall, the thread illustrates the complexities of nonlinear regression with multiple variables and the strategies for overcoming them.
maistral
Hi. I wanted to learn more about this topic, but it seems all the available resources on the internet point to using R, SPSS, MINITAB or EXCEL.

Is there an established numerical method for such cases? I am aware of the Levenberg-Marquardt, Gauss-Newton and similar methods for nonlinear regression on one independent variable only. I would like to ask how I can extend these methods to two or more independent variables.
 
Hi,

I suggest you widen your search to include non-linear least squares minimization. Plenty of stuff there.
 
Hi. Thanks for replying.

I did search for nonlinear LSM as well, but I keep getting information only for a single independent variable.
 
? In the link I gave, ##\bf x## is a vector -- or is that not what you are referring to?
 
Errr. If I understand this paper correctly, x is a vector containing the values for the independent variable. My problem refers to a case where there are two independent variables (and thus, two vectors).

If I'm not reading this correctly, I extend my apologies.
 
No need for apologies.
Two independent vectors with N and M components, respectively, make one vector with N+M components ...
 
Yes. In this context, all the independent variables are usually consolidated into a single vector, X. That makes the math notation much more concise in terms of vector operations.
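
A minimal sketch of that consolidation in NumPy (made-up numbers, purely for illustration):

```python
import numpy as np

# Hypothetical values, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0])   # first independent variable
y = np.array([0.5, 1.5, 2.5, 3.5])   # second independent variable

# Consolidate both into a single design matrix X: one row per data point,
# one column per independent variable.
X = np.column_stack([x, y])           # shape (4, 2)
```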
 
This is making me feel sad. I am totally unable to comprehend the notation. I have no idea how to insert the second set of independent variables.

As I understand it, the data should look like a matrix of N rows (data points) × M columns (independent variables). The Jacobian should be simple, I guess, as it's simply the partial derivative of the function with respect to each modelling parameter, evaluated at the independent variables.

Then I hit a dead end. I have no idea how to proceed further. I mean, I can evaluate ##J^T J## and such; but regarding the independent variables, do I just move forward with the matrix operations?
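
For concreteness, a sketch of that Jacobian built by finite differences (the model f(X, p) and all names here are hypothetical; note the columns run over the modelling parameters, while the data matrix X stays fixed):

```python
import numpy as np

def numerical_jacobian(f, X, p, eps=1e-7):
    """Forward-difference Jacobian J[i, k] = d f(x_i, p) / d p_k.
    X (the data points) is never perturbed; only the parameters p are."""
    f0 = f(X, p)
    J = np.empty((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (f(X, p + dp) - f0) / eps
    return J
```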
 
maistral said:
comprehend the notation. I have no idea how to insert the second set of independent variables.
I have a feeling it's not the notation that's the barrier here. You have some experiments where you vary something (the independent variables ##\vec x## ) and you measure something (the dependent variables ##\vec y##). Holding those separate is already a major task in many cases... :rolleyes:.

And you have some idea in the form of a model that expresses the ##y_i## as a function of the ##x_j##: ##\hat y_i = f_i (\vec x, \vec p)##. Here ##\hat y_i## is the expected (predicted) value of the actual measurement ##y_i##, and ##\vec p## is a vector of parameters, the things you are after.

What you want to minimize is ##\displaystyle{\sum_i \frac{(y_i - \hat y_i)^2}{\sigma_i^2}}## by suitably varying the ##p_k##. The ##\sigma_i## are the (estimated) standard deviations of the ##y_i##.
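
As a sketch, assuming a model function f(X, p) where X holds all the independent variables (one row per measurement; names hypothetical), that objective is just:

```python
import numpy as np

def chi_square(p, f, X, y, sigma):
    """Objective: sum over i of (y_i - f(x_i, p))^2 / sigma_i^2."""
    y_hat = f(X, p)              # model predictions at the current parameters
    r = (y - y_hat) / sigma      # residuals weighted by the estimated std devs
    return np.sum(r**2)
```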

This is a bit general; perhaps we can make this easier if you elaborate a bit more on what exactly you are doing, or if we focus on a comparable example?
 
  • #10
Hi! Thank you very much for replying.

Yes, I'm actually looking at the general case. What I'm trying to do is this: say I have this set of data:
[table of measured data: columns for x, y, and Z(x, y)]

I wanted to fit the modelling parameters a, b, and z of Z(x, y). While this can be transformed into a multiple linear regression problem, I wanted to be able to do it using nonlinear regression. That's why I was asking whether there is a numerical method tailored to these kinds of problems, or a way to extend, at the very least, the Gauss-Newton method.

EDIT: I think I understand the strategy you mentioned. Using guess values of the parameters a, b, and z, I'll evaluate the modelling function at those initial guesses and at the points x, y. Then I'll get the sum of the squares of the residuals, something like ##\sum (Z_\text{measured} - Z_\text{predicted})^2##. Then the problem becomes a minimization problem, which I can kill using something similar to Newton-Raphson optimization or steepest descent. I do not remember using that estimated variance though (I don't even know how to calculate that)...
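
That strategy maps directly onto Gauss-Newton. A minimal sketch, assuming purely for illustration a model of the form ##Z(x,y) = a\,x^b\,y^c## (the parameter z is renamed c here to avoid clashing with the measured Z):

```python
import numpy as np

def gauss_newton(f, jac, p0, X, Z, n_iter=50, tol=1e-10):
    """Plain Gauss-Newton: solve (J^T J) dp = J^T r at each step.
    X and Z (the data) never change; only the parameter vector p does."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = Z - f(X, p)                          # residuals at current parameters
        J = jac(X, p)                            # d f / d p_k, one column per parameter
        dp = np.linalg.solve(J.T @ J, J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Hypothetical model Z(x, y) = a * x**b * y**c and its analytic Jacobian.
def f(X, p):
    a, b, c = p
    return a * X[:, 0]**b * X[:, 1]**c

def jac(X, p):
    a, b, c = p
    x, y = X[:, 0], X[:, 1]
    zhat = a * x**b * y**c
    return np.column_stack([zhat / a,            # dZ/da
                            zhat * np.log(x),    # dZ/db
                            zhat * np.log(y)])   # dZ/dc
```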

My question was whether there is a more 'robust', algorithmic method for determining the parameters; something similar to Levenberg-Marquardt.
 
  • #11
Hold on. I'm squealing here because of a tiny eureka moment. I'll try studying this on my own and see what happens.
 
  • #12
Aha. I made it work.

Basically, I just need to run the algorithm as usual: the partial derivatives are with respect to the modelling parameters anyway, so the method just treats the data points as constants. I had this idea because your post reminded me of that generic method of reducing the sum of the squares of the residuals.
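
That is also exactly how library solvers are driven: the data arrays are captured as constants and only the parameter vector is varied. A sketch with SciPy's least_squares (hypothetical data, same illustrative model as above):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
Z = np.array([3.1, 2.9, 9.8, 8.1, 16.0])

def residuals(p):
    a, b, c = p
    # x, y and Z are treated as constants; the solver only varies p.
    return Z - a * x**b * y**c

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], method='lm')  # Levenberg-Marquardt
print(fit.x)
```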

Moral lesson: do not over-analyze, lol.
Thanks again!
 