MATLAB: Minimizing a function in Matlab and C

  • Thread starter: a.mlw.walker
  • Tags: Function, Matlab
AI Thread Summary
The discussion revolves around implementing the Nelder-Mead method for function minimization in both MATLAB and C, highlighting discrepancies in results. The MATLAB code successfully minimizes a function, yielding parameters a = 0.0250, b = 1.8944, and c = -0.3690, while the C code produces significantly different values: a = 3.52, b = 3.634, and c = -0.000073. Participants suggest that the C implementation may not be passing parameters correctly or could be using a different algorithm, leading to local minima rather than the global minimum found by MATLAB. Adjustments to algorithm parameters in the C code have shown some improvement, but inconsistencies remain, indicating that the C implementation may require further tuning to match MATLAB's results. The conversation emphasizes the importance of algorithm parameters and their impact on convergence and solution accuracy.
  • #51
Cool! Ok, I'll get it, take a look, and see how far I can get.
 
  • #52
So I got levmar running on the Rosenbrock function. Quite neat actually. Interesting side note: if you give it the Jacobian it solves the problem with 10 times fewer iterations.

I can't find the function that is actually solving the problem, called dlevmar_dif. I want to add my arrays into the function calls so that I can pass information to the function. The Rosenbrock function doesn't need to be passed anything. Anyway, just to let you know where I'm at.
 
  • #53
Nice! :smile:
I take it you're getting a good solution?
Do you get the same solution or yet another, better or worse?
 
  • #54
Still looking for that function - I can only find the pointer to it at the moment. I am going to do it globally because I can't find it, and just get an answer. I'll get back to you.
 
  • #55
Hey, so I am now trying to get my function to match the style of the function being solved in levmar.c.
The function supplied as the example is:
Code:
void ros(double *p, double *x, int m, int n, void *data)
{
  register int i;

  /* The demo fills every x[i] with the same scalar Rosenbrock value,
     so levmar ends up minimizing n copies of the same squared term. */
  for(i=0; i<n; ++i)
    x[i]=((1.0-p[0])*(1.0-p[0]) + ROSD*(p[1]-p[0]*p[0])*(p[1]-p[0]*p[0]));
}
My function is:
Code:
static double function(double *p, double *sum, int m, int n, void *data)
{
    double Fitted_Curve[5] = {0};
    double Error_Vector[5] = {0};
    int i;
    double a, b, c;
    sum = 0;
//    double  v1[5], v2[5], geom_inv[5], norm = 0, norm_2 = 0, v2sum = 0, x_coeff = 0;
//    Actual_Output[5] = {1.2, 2.693, 4.325, 6.131, 8.125};
   // printf("Actual_Output[0[ = %f", Actual_Output[0]);
    a = p[0];  // parameters
    b = p[1];
    c = p[2];

    for (i = 0; i <= 4; i++)
    {
        Fitted_Curve[i] = (1/(a*b))*(c-asinh(sinh(c)*exp(a*Input[i]*2*pi)));
        Error_Vector[i] = Actual_Output[i]-Fitted_Curve[i];
        printf("a(x[0]) = %f, b(x[1]) = %f, c(x[2]) = %f\n", p[0], p[1], p[2]);
        printf("Fitted_Curve= %f\n", Fitted_Curve[i]);
        printf("Error_Vector= %f\n", Error_Vector[i]);
    }

    for (i = 0; i <= 4; i++)
    {
        sum = sum + pow(Error_Vector[i],2);
        printf("sum= %f\n", sum);
    }

    a_global = a;
    b_global = b;
//    x_coeff_global = x_coeff;
    return sum;
}

p is the vector of my parameters. As for n and m, at the moment I can't tell why there are two of them, because I can't find the function that actually solves the problem (although it runs); I think they state the number of parameters being solved for.
I can't find any definition of data either. My problem is that the demo code doesn't have a return value - it returns its results through the array x[]. However, I am trying to minimize the sum of the errors squared, so I want to return sum (I think), and I am not sure how to adjust the function so that I can do this. Did that make sense?
I have attached the output - it seems to only complete one iteration, but I am not sure it knows I want to minimize sum.

EDIT: I found this on the website:
Code:
int dlevmar_dif(
void (*func)(double *p, double *hx, int m, int n, void *adata), /* functional relation describing measurements.
                                                                 * A p \in R^m yields a \hat{x} \in  R^n
                                                                 */
double *p,         /* I/O: initial parameter estimates. On output contains the estimated solution */
double *x,         /* I: measurement vector. NULL implies a zero vector */
int m,             /* I: parameter vector dimension (i.e. #unknowns) */
int n,             /* I: measurement vector dimension */
int itmax,         /* I: maximum number of iterations */
double opts[5],    /* I: opts[0-4] = minim. options [\tau, \epsilon1, \epsilon2, \epsilon3, \delta]. Respectively the
                    * scale factor for initial \mu, stopping thresholds for ||J^T e||_inf, ||Dp||_2 and ||e||_2 and the
                    * step used in difference approximation to the Jacobian. If \delta<0, the Jacobian is approximated
                    * with central differences which are more accurate (but slower!) compared to the forward differences
                    * employed by default. Set to NULL for defaults to be used.
                    */
double info[LM_INFO_SZ], /* O: information regarding the minimization. Set to NULL if don't care */
double *work,      /* I: working memory, allocated internally if NULL */
double *covar,     /* O: covariance matrix of the LS solution (m x m). Set to NULL if not needed */
void *adata);      /* I: pointer to additional data, passed uninterpreted to func */
 

Attachments

  • levmar.JPG (30 KB)
Last edited:
  • #56
How did you call the dlevmar_dif() function? Which parameters?

In your case m=3 (3 parameters).
And n=1 (your result vector only contains your sse/sum).

For now, I'd call dlevmar_dif() with opts=NULL.

The void* adata parameter looks like the common way to pass custom data to your function. It should also be a parameter to dlevmar_dif() that does nothing except get passed on to your function.
I guess you don't need this (yet).

What is this "minimization info" that you're displaying?
I can tell you that "-1#.IND" represents not-a-number, meaning that an invalid calculation was done (such as raising a negative number to a non-integer power).

And what does the "reason 6" mean?
 
  • #57
I just realized! You've changed the code of your function!

You replaced:
sum = sum + Error_Vector[i]*Error_Vector[i];
by
sum = sum + pow(Error_Vector[i], 2);

That would cause the error you are seeing, since pow() is not defined for negative numbers, and your numbers do get negative!
 
  • #58
Hi thanks for the replies.
I changed the equation for sum, however that didn't get rid of the #IND. It is only doing one iteration, judging from the console screen.
The code where the function is called in main is
Code:
  m=3; n=5;
    p[0]=1.0; p[1]=1.0; p[2]=-1.0;
    for(i=0; i<n; i++) x[i]=0.0;
    //ret=dlevmar_der(ros, jacros, p, x, m, n, 1000, opts, info, NULL, NULL, NULL); // with analytic Jacobian
    ret=dlevmar_dif(function, p, x, m, n, 10000, NULL, info, NULL, NULL, NULL);  // no Jacobian
  break;
However at the beginning of main, is this:
Code:
double p[5], // 5 is max(2, 3, 5)
	   x[16]; // 16 is max(2, 3, 5, 6, 16)
x is what the original example was passing back - the second parameter in the function ros.
I am just passing a variable back, is that ok?

From changing n to 1, I get a message saying that the number of measurements can't be less than the number of unknowns, so I changed it to 5 - my number of measurements.
The minimization info comes from:
Code:
  for(i=0; i<LM_INFO_SZ; ++i)
    printf("%g ", info[i]);
  printf("\n");
I just don't know what info holds, and where it is set at the moment
 
  • #59
I just looked the web page over.
According to my interpretation, at the beginning of main you should have:
double p[3] = {0,0,0};
double x[5] = {0,0,0,0,0};
int n = 5;

In your function, set x to Error_Vector for each i (i=0..4).
That is, you should not calculate the sse/sum yourself.

The contents of 'info' are documented on the web page, but let's first call the function correctly, before interpreting it.
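For concreteness, a minimal sketch of the shape I mean (untested, using the Input[] and Actual_Output[] globals from your earlier code):
Code:
/* Sketch: fill the measurement vector x with the individual errors and let
   levmar form the sum of squares itself; no explicit return value is needed. */
void function(double *p, double *x, int m, int n, void *data)
{
    int i;
    double a = p[0], b = p[1], c = p[2];

    for (i = 0; i < n; i++)
        x[i] = Actual_Output[i] - (1/(a*b))*(c - asinh(sinh(c)*exp(a*Input[i]*2*pi)));
}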
 
  • #60
Cool, thanks for that. It fixed all the issues known so far. Check the image - it says it completed no iterations, due to reason 7 (I need to look that up). I have set m to 3 as that is what it was before.

Edit: found reason 7 on website:
7 - stopped by invalid (i.e. NaN or Inf) "func" values; a user error

The minimization info documentation:
O: information regarding the minimization. Set to NULL if don't care
* info[0]= ||e||_2 at initial p.
* info[1-4]=[ ||e||_2, ||J^T e||_inf, ||Dp||_2, \mu/max[J^T J]_ii ], all computed at estimated p.
* info[5]= # iterations,
* info[6]=reason for terminating: 1 - stopped by small gradient J^T e
* 2 - stopped by small Dp
* 3 - stopped by itmax
* 4 - singular matrix. Restart from current p with increased \mu
* 5 - no further error reduction is possible. Restart with increased mu
* 6 - stopped by small ||e||_2
* 7 - stopped by invalid (i.e. NaN or Inf) "func" values; a user error
* info[7]= # function evaluations
* info[8]= # Jacobian evaluations
* info[9]= # linear systems solved, i.e. # attempts for reducing error
 

Attachments

  • levmarim.JPG (38.8 KB)
Last edited:
  • #61
According to the web page:

reason 7 = stopped by invalid (i.e. NaN or Inf) "func" values; a user error
return value -1 = no iterations performed.

As you can see you have a Fitted_Curve of 1#.QNAN0, meaning your calculation failed.
You're somehow feeding your function data that it can't calculate.
I see no reason for that, so I think you made a programming error.

Can you show your function as it is now, and also the values you're feeding it, and the Fitted_Curve/Error_Vector results, when it gives the 1#.QNAN0?

Note that the only possible reason you might not be able to calculate the result is if either parameter a or b is zero.
 
  • #62
Definitions outside of functions:
Code:
#define pi 3.141592653589793238462643

double Input[5] = {0, 1, 2, 3, 4};
double Actual_Output[5] = {1.2, 2.693, 4.325, 6.131, 8.125};
The function:
Code:
static double function(double *p, double *x, int m, int n, void *data)
{
    double Fitted_Curve[5] = {0};
   // double Error_Vector[5] = {0};
    int i;
    double a, b, c;

    a = p[0];  // parameters
    b = p[1];
    c = p[2];

    for (i = 0; i <= 4; i++)
    {
        Fitted_Curve[i] = (1/(a*b))*(c-asinh(sinh(c)*exp(a*Input[i]*2*pi)));
        x[i] = Actual_Output[i]-Fitted_Curve[i];
        printf("a(x[0]) = %f, b(x[1]) = %f, c(x[2]) = %f\n", p[0], p[1], p[2]);
        printf("Fitted_Curve= %f\n", Fitted_Curve[i]);
        printf("Error_Vector= %f\n", x[i]);
    }
}
main:
Code:
int main()
{
register int i;
int problem, ret;
double p[3] = {0,0,0};
double x[5] = {0,0,0,0,0};
int n = 5;
int m;
double opts[LM_OPTS_SZ], info[LM_INFO_SZ];
char *probname[]={
    "function"
};

  opts[0]=LM_INIT_MU; opts[1]=1E-15; opts[2]=1E-15; opts[3]=1E-20;
  opts[4]= LM_DIFF_DELTA; // relevant only if the Jacobian is approximated using finite differences; specifies forward differencing
  //opts[4]=-LM_DIFF_DELTA; // specifies central differencing to approximate Jacobian; more accurate but more expensive to compute!

  problem = 0; // Can have more functions if the user requires

#ifdef HAVE_LAPACK

#else // no LAPACK
#ifdef _MSC_VER
#pragma message("LAPACK not available, some test problems cannot be used")
#else
#warning LAPACK not available, some test problems cannot be used
#endif // _MSC_VER

#endif /* HAVE_LAPACK */

  switch(problem){
  default: fprintf(stderr, "unknown problem specified (#%d)! Note that some minimization problems require LAPACK.\n", problem);
           exit(1);
    break;

  case 0:
  /* function */
    m=3;
    p[0]=1.0; p[1]=1.0; p[2]=-1.0;
    for(i=0; i<n; i++) x[i]=0.0;
    //ret=dlevmar_der(ros, jacros, p, x, m, n, 1000, opts, info, NULL, NULL, NULL); // with analytic Jacobian
    ret=dlevmar_dif(function, p, x, m, n, 100000, NULL, NULL, NULL, NULL, NULL);  // no Jacobian
  break;


  } /* switch */

  printf("Results for %s:\n", probname[problem]);
  printf("Levenberg-Marquardt returned %d in %g iter, reason %g\nSolution: ", ret, info[5], info[6]);
  for(i=0; i<m; ++i)
    printf("%.7g ", p[i]);
  printf("\n\nMinimization info:\n");
  for(i=0; i<LM_INFO_SZ; ++i)
    printf("%g ", info[i]);
  printf("\n");

  return 0;
}
 

Attachments

  • levmarim.jpg (36.8 KB)
  • #63
For some reason I'm not able to look at your attached screenshot.
Can you view it yourself?
Can you perhaps upload it again?

The first thing that attracts my attention is that the return type of your function is 'double', while it should be 'void'.
Then, I recommend initializing the info-array to zeroes.
Furthermore, you should pass the info-array as a parameter, if you want to inspect the output results.

Finally, your compiler can warn you of mistakes like these.
You should increase the warning-level of your compiler and fix mismatches such as the prototype of your function.
Which compiler do you use?
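To summarise the three changes as code (a sketch based on the call you already have, untested here):
Code:
void function(double *p, double *x, int m, int n, void *data);   /* return type void, not double */

double info[LM_INFO_SZ] = {0};                                    /* initialise info to zeroes */
/* pass info so the termination reason and iteration count can be inspected afterwards */
ret = dlevmar_dif(function, p, x, m, n, 10000, NULL, info, NULL, NULL, NULL);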
 
  • #64
I just changed the prototype to void, and initialised info to zeros by setting it equal to {0}.
It runs much more successfully now - 174 iterations, but it stops due to reason 2 - stopped by small Dp.
The solution isn't correct though.
I have two warnings from the GCC compiler now, but both are generated by warnings within the code, such as
#warning LAPACK not available, some test problems cannot be used.
p[0] should = 0.025
p[1] should = 1.8944
p[2] should = -0.369
The solution it has found is, I think, the solution from when I was using Nelder-Mead. Now I am questioning the function.
 

Attachments

  • levmarin.JPG (41.3 KB)
  • #65
I believe you used a different Input[] array before.
Shouldn't you use Input[] = {1,2,3,4,5}?
I believe an Input[0] value of 0 yields a Fitted_Curve of 0, but that was never in your output before.

By the way, it has to be a different solution than the one from Nelder-Mead: it reports an SSE of 0.0015893, which is even more accurate than what you got from Matlab!

What do you get with the Input[] array of {1,2,3,4,5}?

And if you want even more accurate results, you should use dlevmar_der() with the Jacobian specified.
Do you know how to calculate the Jacobian?
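In case it helps, here is a sketch of what an analytic Jacobian callback for this model could look like, to be passed to dlevmar_der() in place of the finite-difference version. The partial derivatives are hand-derived and untested, so please check them before trusting the numbers:
Code:
/* Sketch (hand-derived, unverified): Jacobian of the residuals
   x[i] = Actual_Output[i] - (1/(a*b))*(c - asinh(sinh(c)*exp(a*Input[i]*2*pi))).
   levmar expects the Jacobian stored row by row: jac[i*m + j] = d x[i] / d p[j]. */
void jac_function(double *p, double *jac, int m, int n, void *data)
{
    int i;
    double a = p[0], b = p[1], c = p[2];

    for (i = 0; i < n; i++)
    {
        double E = exp(a*Input[i]*2*pi);              /* exp(2*pi*a*Input[i]) */
        double u = sinh(c)*E;
        double r = sqrt(1.0 + u*u);                   /* d asinh(u)/du = 1/r */
        double f = (1/(a*b))*(c - asinh(u));          /* model value */

        /* partials of the model f; the residual is Actual_Output - f, hence the minus signs below */
        double dfda = -(2*pi*Input[i]*u)/(a*b*r) - f/a;
        double dfdb = -f/b;
        double dfdc = (1.0 - cosh(c)*E/r)/(a*b);

        jac[i*m + 0] = -dfda;
        jac[i*m + 1] = -dfdb;
        jac[i*m + 2] = -dfdc;
    }
}

/* then, instead of dlevmar_dif: */
ret = dlevmar_der(function, jac_function, p, x, m, n, 1000, NULL, info, NULL, NULL, NULL);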
 
  • #66
Hey, sorry, I did use the correct input; I just showed you the wrong one.
So now I am confused: MATLAB produces a different result, but not a hugely different one. Those values produce a really good fit (checked with the MATLAB plotter). But the original author did not get those numbers.
Now to get rid of my globals...
 
Last edited:
  • #67
Well, you can expect an even better fit from your new solution. ;)
You should try it.
 
  • #68
The green line is the new fit, the red is the MATLAB one. I trust our new levmar one more now!
So this *data pointer can pass the variables required by the function. Do you know how it's used?
Also, I have another minimization I would like to use levmar for, but it is the sum of the modulus of the function as opposed to the sum of the squares. I know that squaring and then square-rooting a value gives its modulus, but will it make any difference?
 
  • #69
You seem to have left out the picture...

To minimize the sum of absolute values, I guess you can make it work by returning the square root of the absolute value of each Error_Vector entry: levmar squares whatever the function returns, so the squares of sqrt(|e_i|) sum back to the sum of |e_i|.

The *data pointer should be used to pass your Actual_Data[5] array.
It's like this:

Code:
void function(..., void *adata)
{
   double *Actual_Data[5] = (double *)adata;
   ...
}

main() 
{
   double Actual_Data[5] = {...};
   int ret = dlevmar_dif(..., Actual_Data);
}
 
  • #70
forgot graph...
 

Attachments

  • levmargraph.jpg (8.6 KB)
  • #71
Yep, the green line fits better! :smile:
 
  • #72
I'm getting an error on this line:
double *Actual_Data[5] = (double *)adata;
Invalid initializer. This function (my other function) runs but produces very different values from the MATLAB one.
Code:
void function2(double *p, double *x, int m, int n, void *data)  // equations for parameters eta, phi and omega^2 that hold wheel properties
{
    double c1[23], x_coeff_2[23], v1[23], v2[23];
    double eta = p[0];       // parameters
    double phi = p[1];
    double omega_2 = p[2];
    int i;

    a_global = 0.025;
    b_global = 1.8944;
    //c_global = -0.3690;
    for (i = 0; i < 23; i++)
    {
        v1[i] = (exp(a_global*2*pi)-cosh(a_global*b_global*init_T[i][0]));
        v2[i] = sinh(a_global*b_global*init_T[i][0]);
        x_coeff_2[i] = v1[i]/v2[i];
        c1[i] = b_global*b_global*(x_coeff_2[i]*x_coeff_2[i]-1);
        x[i] = sqrt(fabs(c1[i]*exp(-2*a_global*init_T[i][1])+eta*((1+0.5*(4*a_global*a_global+1))*cos(init_T[i][1]+phi)-2*a_global*sin(init_T[i][1]+phi))+b_global*b_global-omega_2));
    }
}
I had to adjust it from the Nelder-Mead version (where it ran) because of the way levmar wants x[] passed back to it; the Nelder-Mead version matched the MATLAB. The issue is reason 2 (small Dp) again.
 
  • #73
a.mlw.walker said:
I'm getting an error on this line:
double *Actual_Data[5] = (double *)adata;
Invalid initializer.

Sorry, I intended:
double *Actual_Data = (double *)adata;
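Putting that together with the sketch from post #69, the whole pattern would look roughly like this (names as in your code, not tested here):

Code:
void function(double *p, double *x, int m, int n, void *adata)
{
    double *Actual_Data = (double *)adata;   /* the array handed to dlevmar_dif below */
    /* ... use Actual_Data[i] where Actual_Output[i] was used before ... */
}

/* in main(), pass the array as the last argument; levmar hands it through to the function untouched */
ret = dlevmar_dif(function, p, x, m, n, 10000, NULL, info, NULL, NULL, Actual_Output);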


a.mlw.walker said:
This function (my other function) runs but produces very different values from the MATLAB one

I had to adjust it from the Nelder-Mead version (where it ran) because of the way levmar wants x[] passed back to it; the Nelder-Mead version matched the MATLAB. The issue is reason 2 (small Dp) again

Well, is it a good solution that it found?
What is info[1], which is the found SSE?

To be honest, I'm not sure Levenberg-Marquardt will work as intended with your function, since it is not intended for a sum of absolute errors, but for a sum of squared errors.
I believe it should still work, but it may not find an optimal solution.
Nelder-Mead may be a better choice for this problem (minimizing the sum of absolute errors).
 
  • #74
Hi, so I have integrated the levmar and Nelder-Mead code into one program, so that I can use levmar to solve function1 and Nelder-Mead to solve function2. What about prototypes though? When do you need them and when do you not? For instance, my functions function() and function2() don't have prototypes, but some other functions do.

My third function:
Code:
static double rotor_function(int n, double z[], float ti[6])  // equations for parameters eta, phi and omega^2 that hold wheel properties
{
    double kappa = z[0], sum = 0, Rotor_Curve[6], Error_Vector[6], v0;  // parameters
    int i;

    v0 = (1 + (kappa/2)*ti[0]*ti[0])/ti[0];
    for (i = 0; i < 6; i++)
    {
        Rotor_Curve[i] = (v0*ti[i]-(kappa/2)*pow(ti[i],2));
        Error_Vector[i] = (i+1) - Rotor_Curve[i];
    }
    for (i = 0; i < 6; i++)
    {
        sum = sum + Error_Vector[i]*Error_Vector[i];
    }
    printf("\n z = %f v0 = %f, sum = %f\n", z[0], v0, sum);
    return sum;
}
doesn't have a prototype - however, it also doesn't receive the initial guess at the parameter.
It is called like:
Code:
case 2:
  /*  function 2*/
    z[0] = 0.004;

    n=1;
    fopt = 0;
    //This is still not running correctly, check MATLAB for correct parameter values:
    if (NelderMeadSimplexMethod(n, rotor_function, z, length, &fopt, timeout, eps, ti) == success) {
		printf("reaching to minimum ");
	} else {
		printf("timeout  ");
	}
    kappa = z[0];
I need to pass the parameter as an array, I think, not a plain variable.
 
  • #75
a.mlw.walker said:
Hi, so I have integrated the levmar and Nelder-Mead code into one program, so that I can use levmar to solve function1 and Nelder-Mead to solve function2. What about prototypes though? When do you need them and when do you not? For instance, my functions function() and function2() don't have prototypes, but some other functions do.

As long as you define your function (including the body) before using it, you don't need a function prototype.
If you need a prototype, you'll know, because the compiler will complain that it doesn't know the function.

a.mlw.walker said:
I need to pass the parameter as an array, I think, not a plain variable.

I can't recall the list of parameters to your Nelder-Mead function.
But if it doesn't have a "void *data" pointer as a parameter, you need to use a different mechanism, which is as follows.

The way to do it is to define a global variable to pass your Actual_Data[] array.
You use this global variable in your function.
And just before you call Nelder-Mead, you copy your Actual_Data[] array into the global variable.
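For instance, a sketch of that mechanism (illustrative names only; it assumes the simplex routine's callback really takes no extra data parameter):

Code:
#include <string.h>                 /* for memcpy */

double g_ti[6];                     /* global copy of the data the callback needs */

static double rotor_function(int n, double z[])
{
    /* ... same body as before, but reading g_ti[i] instead of a ti parameter ... */
    return 0.0;                     /* return the computed sum of squared errors */
}

/* just before calling Nelder-Mead: */
memcpy(g_ti, ti, sizeof(g_ti));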
 
  • #76
The Nelder-Mead function is like this:
Code:
status NelderMeadSimplexMethod(n, f, xinit, length, fopt, timeout, eps, array1)
My function that is failing is like this:
Code:
static double rotor_function(int n, double z[], float ti[6])  // equations for parameters eta, phi and omega^2 that hold wheel properties
{
    double kappa = z[0], sum = 0, Rotor_Curve[6], Error_Vector[6], v0;  // parameters
    int i;

    v0 = (1 + (kappa/2)*ti[0]*ti[0])/ti[0];
    for (i = 0; i < 6; i++)
    {
        Rotor_Curve[i] = (v0*ti[i]-(kappa/2)*pow(ti[i],2));
        Error_Vector[i] = (i+1) - Rotor_Curve[i];
    }
    for (i = 0; i < 6; i++)
    {
        sum = sum + Error_Vector[i]*Error_Vector[i];
    }
    printf("\n z = %f v0 = %f, sum = %f\n", z[0], v0, sum);
    return sum;
}
I am calling both like this:
Code:
z[0] = 0.004;

    n=1;
    fopt = 0;
    //This is still not running correctly, check MATLAB for correct parameter values:
    if (NelderMeadSimplexMethod(n, rotor_function, z, length, &fopt, timeout, eps, ti) == success) {
		printf("reaching to minimum ");
	} else {
		printf("timeout  ");
	}
    kappa = z[0];
It seems similar to function2 to me; however, this one never seems to have a value for z[0].
 
  • #77
I found the problem. z is getting there, however the maths is going haywire! v0 is not being calculated correctly - is this because of a problem with multiplying floats and doubles?
 

Attachments

  • Capture.JPG (13 KB)
  • #78
It's weird to see a parameter like "float ti[6]".
I suspect it should be "double ti[6]".
The mismatch could cause your problems.

Don't you get a compiler warning?
What is the prototype of your Nelder-Mead function, and what function prototype does it expect exactly as a parameter?
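A toy illustration of why such a mismatch garbles the numbers (not your actual code): reading a double array through a float pointer just reinterprets the raw bytes.

Code:
#include <stdio.h>

int main(void)
{
    double d[2] = {1.5, 2.5};
    float *f = (float *)d;             /* roughly what a float[] parameter does when
                                          the caller actually supplies doubles */
    printf("%f %f\n", f[0], f[1]);     /* prints unrelated values, not 1.5 and 2.5 */
    return 0;
}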
 
  • #79
Hey, how are you? It's been a while.
I am working more with the Levenberg-Marquardt code now.
I am trying to get it to use just single precision - this is an option in levmar.h.
Out of interest, what happens if a variable has been declared as double but in levmar.h I state I want single precision?
When I choose single precision I get an error - a strange one at that:
Axb_core.c|1103|error: 'FLT_EPSILON' undeclared (first use in this function)

It works fine with double but not with single. I find this odd considering the author offers single-precision accuracy.
Any ideas what the problem could be?
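For what it's worth, FLT_EPSILON comes from the standard header <float.h>, so one guess (not verified against this levmar build) is that Axb_core.c isn't pulling that header in on the single-precision path:

Code:
#include <float.h>   /* defines FLT_EPSILON and DBL_EPSILON */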
 
