Sitewinder
Hi,
I have to cope with a small problem concerning general nonlinear regression. To cut a long story short: I have a very complicated function y = f(a, x) and a lot of measured values for x and y. The goal is to fit the free parameter a.
To do this, I wrote a simple C++ program that computes the sum of the squared differences between f(a, x) and the measured y values for many candidate values a_j in a large interval:
[tex]\sum_i{(f(a_j,x_i)-y_i)^2}[/tex]
The value a_j that yields the smallest sum is taken as the fitted parameter a.
Though it works, this method is extremely inefficient because of the large number of measured values, the complexity of the function, and the wide interval that has to be scanned for a.
What I'm looking for is an introduction to, or an explanation of, a more efficient least-squares method, for example the Levenberg-Marquardt method. Unfortunately, I haven't found anything useful on the net.
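In code, the brute-force scan described above looks roughly like this sketch (the exponential model f and the data used here are just stand-ins, not the real complicated function):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum of squared residuals S(a) = sum_i (f(a, x_i) - y_i)^2.
double sum_sq(double (*f)(double, double), double a,
              const std::vector<double>& xs, const std::vector<double>& ys) {
    double s = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double r = f(a, xs[i]) - ys[i];
        s += r * r;
    }
    return s;
}

// Brute-force scan: sample `steps` values a_j in [lo, hi] and
// return the one with the smallest sum of squared residuals.
double grid_fit(double (*f)(double, double), double lo, double hi, int steps,
                const std::vector<double>& xs, const std::vector<double>& ys) {
    double best_a = lo;
    double best_s = sum_sq(f, lo, xs, ys);
    for (int j = 1; j <= steps; ++j) {
        double a = lo + (hi - lo) * j / steps;
        double s = sum_sq(f, a, xs, ys);
        if (s < best_s) { best_s = s; best_a = a; }
    }
    return best_a;
}
```

Every candidate a_j costs one full pass over the data, which is exactly why the cost blows up with many measurements and a wide interval.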
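For a single parameter, a minimal Levenberg-Marquardt iteration could look like the following sketch: it takes damped Gauss-Newton steps, estimating df/da numerically, and adapts the damping factor lambda depending on whether a trial step reduced the residual sum (the model and starting value in the test are again placeholders):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum of squared residuals for parameter a.
static double residual_sum(double (*f)(double, double), double a,
                           const std::vector<double>& xs,
                           const std::vector<double>& ys) {
    double s = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        double r = f(a, xs[i]) - ys[i];
        s += r * r;
    }
    return s;
}

// One-parameter Levenberg-Marquardt: damped Gauss-Newton steps with an
// adaptive damping factor lambda. Only f itself is needed; df/da is
// estimated by a central difference.
double lm_fit(double (*f)(double, double), double a0,
              const std::vector<double>& xs, const std::vector<double>& ys) {
    double a = a0;
    double lambda = 1e-3;          // damping factor
    const double h = 1e-6;         // step for the numerical derivative
    for (int iter = 0; iter < 100; ++iter) {
        double g = 0.0, H = 0.0;   // gradient J^T r and scalar J^T J
        for (std::size_t i = 0; i < xs.size(); ++i) {
            double r = f(a, xs[i]) - ys[i];
            double J = (f(a + h, xs[i]) - f(a - h, xs[i])) / (2.0 * h);
            g += J * r;
            H += J * J;
        }
        double delta = -g / (H * (1.0 + lambda));  // damped Newton step
        if (std::fabs(delta) < 1e-10) break;
        if (residual_sum(f, a + delta, xs, ys) < residual_sum(f, a, xs, ys)) {
            a += delta;            // step helped: damp less (more Newton-like)
            lambda = std::fmax(lambda / 10.0, 1e-12);
        } else {
            lambda *= 10.0;        // step failed: damp more (more gradient-like)
        }
    }
    return a;
}
```

Instead of sweeping the whole interval, this needs only a handful of passes over the data per iteration, and typically converges in a few dozen iterations given a reasonable starting value.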
I hope that someone can help me.
Thanks in advance
Site