# Nonlinear regression in two or more independent variables

1. Jul 6, 2017

### maistral

Hi. I wanted to learn more about this topic, but it seems all the available resources on the internet point to using R, SPSS, MINITAB or EXCEL.

Is there an established numerical method for such cases? I am aware of the Levenberg-Marquardt, Gauss-Newton and similar methods for nonlinear regression on one independent variable only. I would like to ask how I can extend these methods to two or more independent variables.

2. Jul 6, 2017

### BvU

Hi,

I suggest you widen your search to include non-linear least-squares minimization. Plenty of material there.

3. Jul 6, 2017

### maistral

I did search for nonlinear least squares as well, but I keep getting information only on a single independent variable.

4. Jul 6, 2017

### BvU

? In the link I gave, $\bf x$ is a vector -- or is that not what you are referring to?

5. Jul 6, 2017

### maistral

Errr. If I understand this paper correctly, x is a vector containing the values of the independent variable. My problem refers to a case where there are two independent variables (and thus, two vectors).

Unless I'm misreading this; if that is the case, I extend my apologies.

6. Jul 6, 2017

### BvU

No need for apologies.
Two independent vectors with N and M components, respectively, make one vector with N+M components ....

7. Jul 6, 2017

### FactChecker

Yes. In this context, all the independent variables are usually consolidated into a single vector, X. That makes the math notation much more concise in terms of vector operations.
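To make the consolidation concrete, here is a minimal sketch in Python/NumPy. The data values and the model form are made up purely for illustration: each observation's independent variables become one row of a single matrix X, and the model consumes a whole row at a time.

```python
import numpy as np

# Hypothetical data: N = 4 observations of two independent variables.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.5, 1.5, 2.5, 3.5])

# Consolidate the two vectors into one N x 2 matrix X:
# row i holds the full vector of independent variables for observation i.
X = np.column_stack([x, y])

def model(X, p):
    """Illustrative nonlinear model: p[0] * x + p[1] * y**2.
    It takes the consolidated matrix, so adding a third independent
    variable just means adding a column, not changing the machinery."""
    return p[0] * X[:, 0] + p[1] * X[:, 1] ** 2
```

The point is that the fitting machinery never needs to know how many independent variables there are; it only ever sees rows of X.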

8. Jul 7, 2017

### maistral

This is making me feel sad. I am totally unable to comprehend the notation. I have no idea how to insert the second set of independent variables.

As I understood it, the data should look like a matrix of N rows (data points) by M columns (independent variables). The Jacobian should be simple, I guess, since it is just the partial derivative of the function with respect to each modelling parameter, evaluated at the independent variables.

Then I hit a dead end. I have no idea how to proceed further. I mean, I can evaluate $J^T J$ and so on; but regarding the independent variables, do I just move forward with the matrix operations?
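The matrix operations do go through unchanged. As a hedged sketch (all function names here are my own, not from any particular library): the Jacobian is an N x K matrix of derivatives with respect to the K parameters, and the independent variables enter only through evaluating the model, so a second (or tenth) independent variable changes nothing in the Gauss-Newton machinery.

```python
import numpy as np

def jacobian(model, X, p, eps=1e-7):
    """Numerical N x K Jacobian: d(model)/d(p_k) at every data row.
    X can have any number of columns (independent variables); the
    derivatives are taken with respect to the parameters only."""
    f0 = model(X, p)
    J = np.empty((len(f0), len(p)))
    for k in range(len(p)):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (model(X, p + dp) - f0) / eps
    return J

def gauss_newton_step(model, X, z, p):
    """One Gauss-Newton update: solve (J^T J) dp = J^T r for the
    parameter change dp, where r is the current residual vector."""
    J = jacobian(model, X, p)
    r = z - model(X, p)
    dp = np.linalg.solve(J.T @ J, J.T @ r)
    return p + dp
```

Iterating `gauss_newton_step` until the parameter change is small is exactly the single-variable Gauss-Newton recipe; the only multi-variable aspect is that `model` reads several columns from X.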

9. Jul 7, 2017

### BvU

I have a feeling it's not the notation that's the barrier here. You have some experiments where you vary something (the independent variables $\vec x$) and you measure something (the dependent variables $\vec y$). Holding those separate is already a major task in many cases.

And you have some idea in the form of a model that expresses the $y_i$ as a function of the $x_j$: $\ \hat y_i = f_i(\vec x, \vec p)$. Here $\hat y_i$ is the expected (predicted) value of the actual measurement $y_i$, and $\vec p$ is a vector of parameters, the things you are after.

What you want to minimize is $\displaystyle \sum_i {(y_i - \hat y_i)^2 \over \sigma_i^2}$ by suitably varying the $p_k$. The $\sigma_i$ are the (estimated) standard deviations in the $y_i$.

This is a bit general; perhaps we can make this easier if you elaborate a bit more on what exactly you are doing, or by focusing on a comparable example ?
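That objective is just a scalar function of the parameters. A minimal sketch (the function name is mine, for illustration):

```python
import numpy as np

def chi_square(y_measured, y_predicted, sigma):
    """Weighted sum of squared residuals: sum_i ((y_i - yhat_i) / sigma_i)^2.
    If no measurement uncertainties are known, setting sigma = 1 for
    every point reduces this to the ordinary sum of squared residuals."""
    r = (np.asarray(y_measured) - np.asarray(y_predicted)) / np.asarray(sigma)
    return float(np.sum(r ** 2))
```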

10. Jul 7, 2017

### maistral

Hi! Thank you very much for replying.

Yes, I'm actually looking at the general case. Say I have this set of data:

I wanted to fit the modelling parameters a, b, and z in Z(x,y). While this can be transformed into a multiple linear regression problem, I wanted to do it using nonlinear regression. That's why I was asking whether there is a numerical method tailored to these kinds of problems, or a way to extend, at the very least, the Gauss-Newton method.

EDIT: I think I understand the strategy you mentioned. Using guess values of the parameters a, b, and z, I evaluate the model function at those initial guesses and at the points (x, y). Then I take the sum of the squares of the residuals, something like $\sum (Z_\text{measured} - Z_\text{predicted})^2$. The problem then becomes a minimization problem which I can attack with something similar to Newton-Raphson for optimization, or steepest descent. I do not remember using that estimated variance, though (I don't even know how to calculate it)...

My question was whether there is a more 'robust' or algorithmic method for determining the parameters; something similar to Levenberg-Marquardt.
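Levenberg-Marquardt itself extends to several independent variables with no change at all, since the damping acts in parameter space, not on the data. Below is a minimal hand-rolled sketch; the names are mine, and the power-law form in the test comment is only a guess at the kind of Z(x, y) meant above, since the data table is not shown.

```python
import numpy as np

def lev_marq(model, X, z, p0, lam=1e-3, max_iter=500, tol=1e-12):
    """Minimal Levenberg-Marquardt: a Gauss-Newton step damped by
    lam * I. Shrink lam after a successful step (towards Gauss-Newton),
    grow it after a failed one (towards small gradient-descent steps)."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum((z - model(X, p)) ** 2)
    for _ in range(max_iter):
        # Finite-difference Jacobian of the model w.r.t. the parameters.
        eps = 1e-7
        r = z - model(X, p)
        J = np.empty((len(r), len(p)))
        for k in range(len(p)):
            dp = np.zeros_like(p)
            dp[k] = eps
            J[:, k] = (model(X, p + dp) - model(X, p)) / eps
        # Damped normal equations: (J^T J + lam I) step = J^T r.
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        new_cost = np.sum((z - model(X, p + step)) ** 2)
        if new_cost < cost:
            p, cost, lam = p + step, new_cost, lam * 0.3
        else:
            lam *= 10.0  # reject the step, damp harder
        if np.linalg.norm(step) < tol:
            break
    return p
```

The only difference from the textbook one-variable presentation is that `model` reads however many columns of X it needs; the normal equations, the damping, and the update loop are untouched.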

11. Jul 7, 2017

### maistral

Hold on. I'm squealing here because of a tiny eureka moment. I'll try studying this on my own and see what happens.
