Linearizing a non-linear problem doesn't yield the same solution

  • Thread starter: Mamed
  • Tags: Non-linear

Discussion Overview

The discussion revolves around the differences in solutions obtained from fitting data using non-linear least squares (NLLSQ) versus linearized least squares (LLSQ) methods. Participants explore the implications of linearizing a non-linear problem and question whether the linearized approach should yield the same or better solutions.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes a fitting function in the form of a product of powers and expresses concern that linearizing the problem leads to different solutions from the original non-linear approach.
  • Another participant points out that minimizing the sum of squares in the original and linearized forms involves different functions, which can inherently lead to different parameter estimates.
  • A participant questions the rationale behind linearizing the function, suggesting that the goal is to achieve a unique optimum, while acknowledging that the non-linear problem may yield multiple solutions.
  • Another participant provides an analogy with a simpler problem, illustrating how linearization alters the nature of the optimization problem and leads to different minimization results.

Areas of Agreement / Disagreement

Participants express differing views on whether linearizing a non-linear problem should yield the same or a better solution. There is no consensus on the implications of linearization for the uniqueness and optimality of the solutions.

Contextual Notes

Participants note that linearizing the function changes the problem and may not preserve the same relationships between parameters as the original non-linear formulation. The discussion highlights the complexity of fitting models and the potential for local minima in non-linear optimization.

Who May Find This Useful

This discussion may be useful for researchers and practitioners involved in data fitting, optimization, and those exploring the implications of linearization in mathematical modeling.

Mamed
Hi

I have some data samples, and my job is to find a curve that fits these values.
The curve I have chosen has the form

y = k * sqrt(d1^x * d2^y) * d3^z

where k, x, y and z are all unknowns that I need to find. I solve this with the MATLAB function fminsearch and the solution is just fine.

I also tried linearizing the problem by taking logs:

log(y) = log(k) + x*log(d1)/2 + y*log(d2)/2 + z*log(d3)

I solved this one with the function lsqnonlin.

I should mention that I use the least-squares objective sum((Ymeasured - Yanalytical)^2).

Anyway, this brings me to my question: these two don't give the same solution!

According to Wikipedia: "In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares."
Which means that either I'm doing something wrong or the solution after linearizing isn't necessarily the optimal solution.

So

Is the solution of the linearized problem the optimal solution or not?

The solution of a non-linear problem is not necessarily unique, while it is for a linear one.

Shouldn't the linearized problem give the same or a better solution than the non-linear one?

Thanks
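To make the comparison concrete, here is a runnable sketch in Python rather than MATLAB (all data and parameter values are made up; a damped Gauss-Newton loop stands in for fminsearch, and the log-linearized model is solved directly with linear least squares):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
d1, d2, d3 = rng.uniform(1.0, 3.0, (3, n))

# Unknowns renamed to (k, p, q, r) to avoid the x/y symbol clash in the post:
# y = k * sqrt(d1^p * d2^q) * d3^r
def model(theta):
    k, p, q, r = theta
    return k * d1**(p / 2) * d2**(q / 2) * d3**r

theta_true = np.array([2.0, 0.8, -0.5, 1.2])       # hypothetical values
y = model(theta_true) + rng.normal(0.0, 0.2, n)    # additive noise

# --- linearized fit: log y = log k + (p/2) log d1 + (q/2) log d2 + r log d3
A = np.column_stack([np.ones(n), np.log(d1) / 2, np.log(d2) / 2, np.log(d3)])
c, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
theta_lin = np.array([np.exp(c[0]), c[1], c[2], c[3]])

# --- non-linear least squares on the original residuals (damped Gauss-Newton)
def sse(theta):
    return np.sum((y - model(theta)) ** 2)

theta_nl = theta_lin.copy()                        # warm start from the log fit
for _ in range(100):
    f = model(theta_nl)
    # Jacobian of the model w.r.t. (k, p, q, r)
    J = np.column_stack([f / theta_nl[0], f * np.log(d1) / 2,
                         f * np.log(d2) / 2, f * np.log(d3)])
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    t = 1.0                                        # halve the step until SSE drops
    while sse(theta_nl + t * step) > sse(theta_nl) and t > 1e-12:
        t /= 2
    theta_nl = theta_nl + t * step

print(theta_lin)  # log-space fit
print(theta_nl)   # linear-space fit: close, but not the same
```

The two estimates are close but not identical, because each one minimizes its own objective exactly; the step-halving line search also guarantees the non-linear fit never ends up with a larger sum of squares than its warm start.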
 
With your first method you are minimizing [itex]\sum (y_m - y_e)^2[/itex].

When you take logs, you are minimizing [itex]\sum (\log y_m - \log y_e)^2[/itex].

That means you will get different answers even when there is a unique solution to the NLLSQ problem.
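A quick numeric illustration of why the two objectives differ (the numbers are made up): a 10% over-prediction contributes equally to the log-space sum at any scale, but its contribution to the linear-space sum grows with the size of y.

```python
import math

# the same 10% over-prediction at two different scales: (measured, estimated)
pairs = [(100.0, 110.0), (1.0, 1.1)]

lin_sq = [(ym - ye) ** 2 for ym, ye in pairs]                      # linear-space terms
log_sq = [(math.log(ym) - math.log(ye)) ** 2 for ym, ye in pairs]  # log-space terms

print(lin_sq[0] / lin_sq[1])  # linear-space weight ratio: about 10000
print(log_sq[0] / log_sq[1])  # log-space weight ratio: about 1
```

So the log fit effectively weights relative errors equally, while the direct fit is dominated by the points with large y.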
 
But shouldn't the same parameter values minimize both functions? Isn't the whole point of linearizing the function that you get a unique optimum, while the non-linear problem can have more than one solution, and the one you found might just be a local minimum?
 
Mamed said:
Isn't the whole point of linearizing the function that you get a unique optimum?

Linearizing the function makes it easier to solve, but it also changes the problem.

Take a similar but simpler problem: find the value of a that gives the least-squares fit of [itex]y = e^{ax}[/itex] to the data.

The least squares sum is
[tex]\sum (y_i - e^{ax_i})^2 = \sum \left( y_i^2 - 2y_i e^{ax_i} + e^{2ax_i} \right)[/tex]
Setting the derivative with respect to a to zero, the minimum is where
[tex]\sum y_ix_i e^{ax_i} = \sum x_i e^{2ax_i}[/tex]

On the other hand, if you take logs and fit [itex]\log y = a x[/itex] by least squares, the minimum is simply at
[tex]a = \left(\sum x_i \log y_i \right) / \left(\sum x_i^2 \right)[/tex].
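A quick numeric check of this toy problem (the data, noise level, and search grid are all made up): minimizing the two objectives over a gives nearby but distinct answers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 2.0, 50)
a_true = 1.3                                           # hypothetical
y = np.exp(a_true * x) + rng.normal(0.0, 0.2, x.size)  # additive noise

# non-linear least squares: brute-force grid search over a
grid = np.linspace(0.5, 2.0, 15001)
sse = np.sum((y[None, :] - np.exp(np.outer(grid, x))) ** 2, axis=1)
a_nllsq = grid[np.argmin(sse)]

# linearized: fit log y = a x in closed form
a_llsq = np.sum(x * np.log(y)) / np.sum(x * x)

print(a_nllsq, a_llsq)  # both close to 1.3, but not equal to each other
```

Both estimates land near the true value, yet they disagree, because additive noise on y is not the same thing as additive noise on log y.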
 