# Linearizing a non-linear problem doesn't yield the same solution

• Mamed
So different methods give different answers. In summary, the speaker discusses their approach to finding a curve that fits given data samples, using both a non-linear method and a linearized method. The solutions from the two methods differ, and the explanation is that linearizing the function changes the problem. A simpler example is given to illustrate this.

#### Mamed

Hi

I have some data samples and my job is to find a curve that will fit these values.
The curve that I have gone for is of the form

y = k * sqrt(d1^x * d2^y) * d3^z

where k, x, y, z are all unknowns that I need to find. I solve this with the MATLAB function fminsearch and the solution is just fine.

I also tried to linearize the problem by taking logs:

log(y) = log(k) + x*log(d1)/2 + y*log(d2)/2 + z*log(d3)

I solved this with the function lsqnonlin.

I should mention that I use the least-squares objective sum((Ymeasured - Yanalytical)^2).

Anyway, this brings me to my question: these two don't give the same solution!

According to wikipedia
"
In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares.
"
Which means that either I'm doing something wrong or the solution after linearizing isn't necessarily the optimal solution.

So

Is the solution of the linearized problem the optimal solution or not?

The solution of a non-linear problem is not unique, while it is for a linear one.

Shouldn't the linear formulation give the same or a better solution than the non-linear one?

Thanks
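For concreteness, the two fits described above can be sketched numerically. The thread uses MATLAB; the sketch below uses Python with NumPy/SciPy instead, where SciPy's Nelder-Mead `minimize` plays the role of `fminsearch` and `numpy.linalg.lstsq` does the linear solve. The data is made up, and the exponents are renamed `p, q, r` to avoid the clash with the dependent variable `y`:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-sample inputs d1, d2, d3 and measurements ym.
rng = np.random.default_rng(0)
d1, d2, d3 = rng.uniform(1.0, 5.0, (3, 40))
theta_true = np.array([2.0, 0.8, -0.5, 1.3])          # k, p, q, r

def model(theta):
    k, p, q, r = theta
    return k * np.sqrt(d1**p * d2**q) * d3**r

ym = model(theta_true) * rng.lognormal(0.0, 0.2, d1.size)  # multiplicative noise

# Method 1: direct non-linear least squares (Nelder-Mead, like fminsearch).
sse = lambda theta: np.sum((ym - model(theta))**2)
th_nl = minimize(sse, x0=[1.0, 1.0, 0.0, 1.0], method="Nelder-Mead",
                 options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-10}).x

# Method 2: linearized least squares on
# log(ym) = log k + (p/2) log d1 + (q/2) log d2 + r log d3.
A = np.column_stack([np.ones_like(d1), np.log(d1) / 2,
                     np.log(d2) / 2, np.log(d3)])
coef, *_ = np.linalg.lstsq(A, np.log(ym), rcond=None)
th_lin = np.array([np.exp(coef[0]), *coef[1:]])

print("non-linear fit:", th_nl)
print("linearized fit:", th_lin)   # similar, but not identical
```

Both fits recover reasonable parameters, but not the same ones: the noise enters the two objectives differently, so their minimizers differ.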

With your first method you are minimizing $\sum (y_m - y_e)^2$.

When you take logs, you are minimizing $\sum (\log y_m - \log y_e)^2$.

That means you will get different answers even when there is a unique solution to the NLLSQ problem.
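A minimal illustration of this point (my toy example, not from the thread): fit a single constant c to some data. The two objectives pick different minimizers even though each has a unique one.

```python
import math

y = [1.0, 10.0, 100.0]

# Minimizing sum((y_i - c)^2) gives the arithmetic mean of the data.
c_direct = sum(y) / len(y)

# Minimizing sum((log y_i - log c)^2) gives the geometric mean instead.
c_logged = math.exp(sum(math.log(v) for v in y) / len(y))

print(c_direct, c_logged)  # 37.0 vs approx. 10 -- same data, different "best" c
```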

But shouldn't the values of the parameters minimize both functions? Isn't the whole point of linearizing the function that you get the best unique optimum, while the non-linear problem can have more than one solution, and the one you have might just be a local minimum?

Mamed said:
Isn't the whole point of linearizing the function that you get the best unique optimum?

Linearizing the function makes it easier to solve, but it also changes the problem.

Take a similar but simpler problem: find the value of $a$ that gives the least-squares fit of $y = e^{ax}$ to the data.

The least squares sum is
$$\sum (y_i - e^{ax_i})^2 = \sum y_i^2 - 2y_i e^{ax_i} + e^{2ax_i}$$
The minimum is when
$$\sum y_ix_i e^{ax_i} = \sum x_i e^{2ax_i}$$

On the other hand, if you take logs and minimize the squared error of $\log y = a x$, the minimum is just
$$a = \left(\sum x_i \log y_i \right) / \left(\sum x_i^2 \right).$$
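This simpler problem is easy to check numerically. A sketch in Python with made-up data points, using a brute-force 1-D scan in place of a proper minimizer:

```python
import numpy as np

# Made-up data roughly following y = exp(a*x) with a near 1.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.9, 3.1, 4.4, 8.3, 11.0])

# Linearized: minimizing sum((log y_i - a*x_i)^2) has a closed form.
a_lin = np.sum(x * np.log(y)) / np.sum(x**2)

# Direct: minimize sum((y_i - exp(a*x_i))^2) by a brute-force 1-D scan.
grid = np.linspace(0.5, 1.5, 100001)
sse = ((y[None, :] - np.exp(grid[:, None] * x[None, :]))**2).sum(axis=1)
a_nl = grid[np.argmin(sse)]

print(a_lin, a_nl)  # two noticeably different estimates of the same a
```

The direct fit weights the large-$y$ points most heavily, while the logged fit treats relative errors evenly, so the two estimates of $a$ disagree even though each objective has a single minimum here.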