MATLAB Minimizing function in Matlab and C

  • Thread starter: a.mlw.walker
  • Tags: Function, Matlab
AI Thread Summary
The discussion revolves around implementing the Nelder-Mead method for function minimization in both MATLAB and C, highlighting discrepancies in results. The MATLAB code successfully minimizes a function, yielding parameters a = 0.0250, b = 1.8944, and c = -0.3690, while the C code produces significantly different values: a = 3.52, b = 3.634, and c = -0.000073. Participants suggest that the C implementation may not be passing parameters correctly or could be using a different algorithm, leading to local minima rather than the global minimum found by MATLAB. Adjustments to algorithm parameters in the C code have shown some improvement, but inconsistencies remain, indicating that the C implementation may require further tuning to match MATLAB's results. The conversation emphasizes the importance of algorithm parameters and their impact on convergence and solution accuracy.
a.mlw.walker
The following code is the MATLAB code to minimize a function using the Nelder-Mead method.

Code:
Starting = [1 1 -1];
options = optimset('Display','iter');
Estimates = fminsearch(@myfit, Starting, options, t_d, Data)
Calling:
Code:
function sse = myfit(params, Input, Actual_Output)
a = params(1);
b = params(2);
c = params(3);
Fitted_Curve = (1/(a*b))*(c - asinh(sinh(c)*exp(a*Input*2*pi)));
Error_Vector = Actual_Output - Fitted_Curve;
sse = sum(Error_Vector.^2);
This is supposed to be the equivalent C code; however, it doesn't produce the same results. Can anyone see what I may have done wrong?
Code:
/* Objective for the Nelder-Mead search: the sum of squared errors between
   the data and the fitted curve, equivalent to myfit() above.
   Input[], Actual_Output[], pi, a_global and b_global are globals defined
   elsewhere in the program. */
static double function(int n, double x[])
{
    double Fitted_Curve[5];
    double Error_Vector[5];
    double a, b, c, sum = 0;
    int i;
    /* unused in this function: */
    double v1[5], v2[5], geom_inv[5], norm = 0, norm_2 = 0, v2sum = 0, x_coeff = 0;
    // Actual_Output[5] = {1.2, 2.693, 4.325, 6.131, 8.125};

    a = x[0];
    b = x[1];
    c = x[2];
    for (i = 0; i <= 4; i++)
    {
        Fitted_Curve[i] = (1/(a*b))*(c - asinh(sinh(c)*exp(a*Input[i]*2*pi)));
        Error_Vector[i] = Actual_Output[i] - Fitted_Curve[i];
    }
    for (i = 0; i <= 4; i++)
    {
        sum = sum + Error_Vector[i]*Error_Vector[i];
    }
    printf("sum = %f\n", sum);
    a_global = a;
    b_global = b;
    // x_coeff_global = x_coeff;
    return sum;
}
Thanks
 
Hi a.mlw.walker! :smile:

The C function that you're showing is functionally a match for the myfit() function that you specified in MATLAB code.

Do you also have an implementation of the fminsearch() function?
If so, what does it expect as input?
 
The fminsearch function is a MATLAB function implementing the Nelder-Mead algorithm. I know the MATLAB is correct because I have the solution. However, the C does not produce the same final result, though it does produce the same first result!
MATLAB produces:
a = 0.0250, b = 1.8944, c = -0.3690
The C code produces:
a = 3.52, b = 3.634, c = -0.000073
Check the pics:
 

Attachments

  • fittedcure.JPG
  • sse.JPG
Your pictures seem to basically show that the C function is indeed a match for the MATLAB function.

So either your fminsearch is not doing its work like it does in MATLAB.
Or, what seems more likely, the parameters are syntactically not passed properly to the function.
Or the boundary conditions are not set properly.

Did you already print the values of a, b, and c during the iterations?

What are the values in your Input[] array?

Did you already check if the given solution by the C code is actually a solution for the problem, since it may have found another minimum (if your boundary conditions are set up differently)?
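
(A quick way to check is to feed both reported parameter sets into the same objective function from the opening post and compare the sse values; the numbers below are the ones quoted earlier in the thread, and this is just a minimal sketch that assumes it is called from the same file as the objective.)
Code:
double matlab_sol[3] = {0.0250, 1.8944, -0.3690};   /* MATLAB's answer */
double c_sol[3]      = {3.52,   3.634,  -0.000073}; /* the C answer    */
printf("sse(MATLAB params) = %f\n", function(3, matlab_sol));
printf("sse(C params)      = %f\n", function(3, c_sol));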
 
I'll get it to print earlier values of a, b and c and post them. Yes, I plotted the points found by C and it doesn't find a curve that fits. Input is just [1 2 3 4 5]; that's what it is in both MATLAB and C.
It's a strange error considering the values for the first iteration are the same, but I know the MATLAB is correct; I have the solution to that from previous work, and it is a solution. The problem is in the C.
I'll get a, b and c for the first iteration.
 
While you're at it, also print the values of 'sum' (the sse) in your function, since we'd like to see if it converges or not.
 
In the C code, right, not the MATLAB?
 
Both! :)
 
OK, here we go. Three iterations of both, MATLAB in white, C in black...
 

Attachments

  • 3iterations.JPG
  • matlab 3 iter.JPG
  • #10
I'm just noticing that in all your pictures the sum/sse does not match the Error_Vector.
The sum/sse should be the sum of the squared entries of the Error_Vector.
In all cases (except perhaps the last) it isn't.

In your original 2 pictures the sum and sse appear to be the square of only the first entry in the Error_Vector.

So what did you print for the sum/sse?

In your last 2 pictures, there are 5 sums displayed in the console dump.
They appear to be the squares of the individual Error_Vector entries, but they should still be added.
In the MATLAB output, there is only one sse, that may well be the sum needed.

This means the code you have shown in the opening post apparently does not match the code you have just executed.
 
  • #11
The original images were just to see whether it was doing the sum my calculator was expecting, i.e. just the first Error_Vector^2. The latest images (my last post) are the actual output after three iterations from Nelder-Mead. Look at the latest images.
 
  • #12
Sorry, you're right, the C code for sum is not producing the correct values...
 
  • #13
a.mlw.walker said:
The original images were just to see whether it was doing the sum my calculator was expecting, i.e. just the first Error_Vector^2. The latest images (my last post) are the actual output after three iterations from Nelder-Mead. Look at the latest images.

Yes, I believe I have.
The displayed sums for 1 iteration need to be added, and that has to be used as the result.
That should match the MATLAB output.
 
  • #14
Hmm, sorry, I had removed that in a recent debugging session. I have put it back in, and attached again the first three iterations, but it's still not right.
I have changed the sum code to

Code:
for (i = 0; i <= 4; i++)
{
    sum = sum + Error_Vector[i]*Error_Vector[i];
    printf("sum= %f\n", sum);
}
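(For reference, printing once after the loop gives a single sse per function evaluation, matching the single sse value MATLAB displays; a minimal variation:)
Code:
sum = 0;
for (i = 0; i <= 4; i++)
    sum = sum + Error_Vector[i]*Error_Vector[i];
printf("sse = %f\n", sum);   /* one total per evaluation, like MATLAB's sse */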
 

Attachments

  • fittedcurelatest.JPG
  • #15
All right.
Now it shows the same result for the first iteration as MATLAB.

As you can see, it also converges.
But it converges much faster than the MATLAB version.

This suggests that you're using a different algorithm.
Since you've presumably used a function with the same name (fminsearch), do you have the option of giving method-parameters?
What you have in C might for instance be the Conjugate gradient method.

(You did use a function by the name of fminsearch didn't you?)
 
  • #16
The function for the C is called NelderMead.c. fminsearch is a built-in MATLAB function that, according to MATLAB, is the Nelder-Mead method. I'm not bothered how fast it converges if the answer isn't correct though...
 
  • #17
There could be more than one solution.
Could you show the final a, b, c, and sse to judge that?

And perhaps you could also show the a, b, and c in each iteration to see how the algorithm progresses?
 
Last edited:
  • #18
I just checked.
Your problem does have multiple solutions.
So perhaps you want the solution closest to your initial guess?
Or doesn't it matter to you?
 
  • #19
Really, multiple solutions? I tried the values I posted earlier and it did not produce a solution. I am looking for a curve to fit data points, and the curve those values gave did not fit the data like the MATLAB one.
 
  • #20
Ah, I missed your solutions for a, b, c before.

They are both solutions.
The MATLAB version has an sse = 0.0037.
The C version has an sse = 0.099

This suggests that the C version stopped at a tolerance of 0.1.
You can probably tweak your parameters somewhere.

Btw, you should know that your data is almost on a straight line, and that whatever parameters you feed your formula, it's also almost a straight line.
So I think you shouldn't expect very interesting solutions.

What are you using this for?
 
  • #21
My boss wanted me to research finding parameters that will fit the curve to an equation. I agree the data is basically on a straight line; however, the MATLAB code gets the solution I am expecting, while the C goes off in a completely different direction, even though the methods are supposed to be the same. If it is as simple as the C finding a different route, then what can I do about it? The MATLAB is finding the right one.
By the way, I have attached two graphs: the left is the MATLAB, the right is the C. MATLAB is a much better fit...
 

Attachments

  • matlab vs C graphs.jpg
Last edited:
  • #22
Guys, what do you reckon? My graph above shows that my code may be finding a minimum of sorts, but that is not a good curve. I think it is finding a local minimum as opposed to the global minimum. Interesting that the MATLAB implementation finds the global minimum. The following website states that the Nelder-Mead method can fail in simple situations. I wonder if MATLAB is doing "more" than just a Nelder-Mead. I have heard the Rosenbrock algorithm can find a global minimum; anyone used it? Or an ant search or something? Anyone used that?
 
  • #23
OK, I have noticed something. In the file neldermead.c, on line 29, it says
Code:
static dbl	al = 2, bt = 0.5, gm = 1;
These are parameters that control the adjustments made by the algorithm.
MATLAB has a similar line; however, the variable names are different. I fiddled with the values (just changing which one was equal to 1, 2 and 0.5) and I get the correct answer for the three parameters I have been looking for; however, when using an alternative function to solve for, it gives incorrect results again.
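(For reference, the conventional coefficient choice, and the one the fminsearch source uses as rho = 1, chi = 2, psi = 0.5, sigma = 0.5, would look something like the sketch below; mapping those onto this file's al/bt/gm names is an assumption on my part.)
Code:
/* Conventional Nelder-Mead coefficients. Matching al to reflection, gm to
   expansion and bt to contraction is an assumption; sg (shrink) has no
   obvious counterpart among the three names in this neldermead.c. */
static double al = 1.0;   /* reflection  (MATLAB's rho)   */
static double gm = 2.0;   /* expansion   (MATLAB's chi)   */
static double bt = 0.5;   /* contraction (MATLAB's psi)   */
static double sg = 0.5;   /* shrink      (MATLAB's sigma) */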
 
  • #24
Regarding my last post about the parameters in NelderMead: swapping these around got me a solution to the function, the correct one; however, for another function I have written it does not solve correctly once I change these values.
When I change them back (static dbl al = 1, bt = 0.5, gm = 2), the second function works!
I think the method may be finding local minima, and when I swap the parameters it adjusts to a global minimum! What's going on?!
 
  • #25
I can think of 2 things.

First your C version uses a threshold of 0.1.
You should be able to find this threshold somewhere and adjust it.
Then you can probably get a better fit.

Second the MATLAB algorithm has an "options" parameter.
Among others it specifies an algorithm to use (I think).
With the right option, you should be able to get a fit that is as close to the initial guess as possible (I think that would be something like Levenberg-Marquardt).

Finally, you just have a formula for a curve that won't fit very well when the curve is almost a straight line and your data points are almost a straight line.
 
  • #26
Did you look at post #21? The image of the MATLAB fit is great, the C one not so good. I am working on it; I'll see if I get anywhere...
 
  • #27
Yes, the C one has a tolerance of 0.1, which is bigger than the MATLAB one...
And apparently the algorithm doesn't try to stay close to the initial guess...
 
  • #28
Have you seen the 0.1 tolerance anywhere? Because I can't find it.
 
  • #29
I only saw it in the result (as posted earlier).
It has to be somewhere in your neldermead.c file, or another one that is used by it.
Since I do not have access to this file I can't tell you.
 
  • #30
Would you like access to the files? I only haven't posted them because it's quite a lot of code. Right now I am trying to get rid of all my global variables, so it doesn't compile at the moment because I have an error I am not sure about. But I can give it to you if your knowledge of C is any better than mine?
 
  • #31
If you upload neldermead.c, I'll take a look at it.
 
  • #32
neldermead.c - I have had to attach it as a text file.
 

Attachments

  • #33
Well, here are my first observations.

There does not appear to be a method you can select.

However, one of the parameters of the function NelderMeadSimplexMethod(), which I assume is the one you are using, is "eps". Another one is "timeout".
Apparently you have passed eps=0.1 to it.
You might try using e.g. eps=0.01.
"timeout" is the maximum number of iterations.
 
  • #34
my eps is:
double eps = 1.0e-12;//accuracy of Nelder Mead

?
 
  • #35
All right, that would mean that you do indeed have a local minimum that won't approach closer than an sse = 0.099.
I find the number a little weird since it is so close to a nice round number.

What did you use for the "timeout" and for the initial guess "xinit"?
 
  • #36
The initial guess is the same as in MATLAB: 1, 1, -1.
timeout is
int timeout = 1000000;

It is strange. I asked some guys on a C forum; they have told me I need to get rid of my global variables before I can know whether it is me or the algorithm. I think I am at a local minimum.
 
  • #37
Yes, I also think you're at another local minimum, and as yet I can't explain why the C version would find a different minimum than the MATLAB version.

It appears as if MATLAB is taking smaller steps, so perhaps they are using different (smaller?) parameters (al, bt, gm) to Nelder-Mead as you already tried.

Did you already print intermediate values of a,b,c?
For the C version as well as the MATLAB version?
This would show which stepsize has been taken, and also whether the same algorithm is used.
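(One simple way to get that trace is a single printf at the end of the C objective function, something like:)
Code:
/* Trace line at the end of the objective: prints the current parameters and
   the sse for every function evaluation, so the step sizes can be compared
   with the MATLAB iteration display. */
printf("a = %f  b = %f  c = %f  sse = %f\n", a, b, c, sum);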

Btw, it won't matter (in this case) that you have global variables.
 
  • #38
Yeah, that's what I reckon about the globals; they are not the solution to the problem. I'll get the full first three iterations and see what's doing what. I read about it, and Nelder-Mead is apparently known to fail.
EDIT:
The below values are the MATLAB values for the first three iterations. However, sse is the total sum of the error, whereas in the C (image) it shows the steps adding up sum, so the last value of sum in each iteration should match sse in MATLAB.
Code:
a =
     1
b =
     1
c =
    -1
Fitted_Curve =
    6.1378   12.4210   18.7041   24.9873   31.2705
Error_Vector =
   -4.9378   -9.7280  -14.3791  -18.8563  -23.1455
sse =
  1.2171e+003
 
 Iteration   Func-count     min f(x)         Procedure
     0            1          1217.05         
a =
    1.0500
b =
     1
c =
    -1
Fitted_Curve =
    6.1447   12.4279   18.7111   24.9943   31.2774
Error_Vector =
   -4.9447   -9.7349  -14.3861  -18.8633  -23.1524
sse =
  1.2180e+003
a =
     1
b =
    1.0500
c =
    -1
Fitted_Curve =
    5.8455   11.8295   17.8135   23.7975   29.7814
Error_Vector =
   -4.6455   -9.1365  -13.4885  -17.6665  -21.6564
sse =
  1.0681e+003
Interesting, I just noticed MATLAB has a fourth 'next move' called shrink, whereas the C only has expand, contract and reflect. MATLAB uses sigma for the shrink, and when a contraction fails it performs a 'shrink' step:
Code:
 % Perform an inside contraction
                xcc = (1-psi)*xbar + psi*v(:,end);
                x(:) = xcc; fxcc = funfcn(x,varargin{:});
                func_evals = func_evals+1;
                
                if fxcc < fv(:,end)
                    v(:,end) = xcc;
                    fv(:,end) = fxcc;
                    how = 'contract inside';
                else
                    % perform a shrink
                    how = 'shrink';
                end
            end
            if strcmp(how,'shrink')
                for j=two2np1
                    v(:,j)=v(:,1)+sigma*(v(:,j) - v(:,1));
                    x(:) = v(:,j); fv(:,j) = funfcn(x,varargin{:});
                end
                func_evals = func_evals + n;
            end
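(For comparison, a minimal sketch of that shrink step translated to C, assuming three fit parameters so the simplex has four vertices stored as v[4][3], with v[0] the current best vertex and fv[] the corresponding function values; the names here are hypothetical.)
Code:
/* Hypothetical shrink step mirroring the MATLAB fragment above. */
void shrink_step(double v[4][3], double fv[4],
                 double (*f)(int, double[]), double sigma)
{
    int i, j;
    for (j = 1; j <= 3; j++)            /* leave v[0], the best vertex, alone */
    {
        for (i = 0; i < 3; i++)
            v[j][i] = v[0][i] + sigma * (v[j][i] - v[0][i]);
        fv[j] = f(3, v[j]);             /* re-evaluate the moved vertex */
    }
}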
 

Attachments

  • 3_iters_C_latest.JPG
Last edited:
  • #39
The attached is the latest information. Top left is MATLAB producing the required minimum
a = 0.025
b = 1.899
c = -0.369
Top right is the C code when the parameters are
al = 1, bt = 0.5, gm = 2;
a = 3.52
b = 3.63
c = -0.000073
Bottom left is when the parameters are
al = 2, bt = 0.5, gm = 1;
a = 0.076
b = 3.45
c = -0.672
Very odd.
 

Attachments

  • matlab vs C.jpg
  • #40
As you can see MATLAB uses an entirely different stepsize and also takes entirely different steps.

I think MATLAB does not use Nelder-Mead, but something like the Conjugate gradient method.
As you can see it takes a small step in each of the parameters a,b,c.
This is the method to deduce the gradient.
Then the gradient is used to make a step.

I see that the MATLAB fminsearch() function returns a set of values [x,fval,exitflag,output].
The 'output' value contains the algorithm being used.
Can you print these return values?
 
  • #41
No probs, the output is:
Estimates =
0.0250 1.8944 -0.3690
FVAL =
0.0037
EXITFLAG =
1
OUTPUT =
iterations: 85
funcCount: 151
algorithm: 'Nelder-Mead simplex direct search'
message: [1x196 char]
 
  • #42
That looks like MATLAB does use Nelder-Mead. :)

So the question becomes: what should the first iteration of Nelder-Mead do?
You can find the algorithm with Google.

I'm not sure yet what's going on with the shrink step.
It may be somewhat "hidden" in the C code, but I haven't looked yet.
And there should be a 4th parameter for the shrink step.
 
  • #43
I think the scalar could be this line in the C:
scalarvector(nvar, simp, 0.50, simp);
The 0.5 is the same as the shrink value in the MATLAB, but it doesn't solve the problem...
 
  • #44
From: http://www.mathworks.com/help/techdoc/math/bsotu2d.html#bsgpq6p-11
mathworks said:
The algorithm first makes a simplex around the initial guess x0 by adding 5% of each component x0(i) to x0, and using these n vectors as elements of the simplex in addition to x0.

This is what MATLAB does, but not what the C code does.
Looking at the C code, you'll see that the function initial_simplex() has a different way to initialize the initial simplex.
This means that MATLAB and the C code implement different variants of the Nelder-Mead algorithm.
This is not wrong, but will lead to different results.
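
(A minimal sketch of an initial_simplex() built the way the MATLAB documentation describes, assuming the three-parameter case; the 0.00025 offset for zero-valued components is the value I believe fminsearch uses, so treat it as an assumption.)
Code:
/* Hypothetical MATLAB-style initial simplex for 3 parameters:
   vertex 0 is the guess x0, and each further vertex perturbs one
   component of x0 by 5% (or by 0.00025 if that component is zero). */
void initial_simplex_5pct(const double x0[3], double simplex[4][3])
{
    int i, j;
    for (i = 0; i < 3; i++)
        simplex[0][i] = x0[i];
    for (j = 1; j <= 3; j++)
    {
        for (i = 0; i < 3; i++)
            simplex[j][i] = x0[i];
        if (x0[j-1] != 0.0)
            simplex[j][j-1] = 1.05 * x0[j-1];
        else
            simplex[j][j-1] = 0.00025;
    }
}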

If you want more optimal results (more robust against degeneracy) you should for instance use Levenberg-Marquardt.
The catch is that this algorithm requires you to supply an extra function that calculates the derivatives of your curve fitting function with respect to each parameter.
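
(If you do go that way, the extra function is just the partial derivatives of the fitted curve with respect to a, b and c. A self-contained sketch for one data point t is below; the function name and the jac[] layout are placeholders, not part of any particular library's API.)
Code:
#include <math.h>

/* Partial derivatives of
   f(a,b,c;t) = (1/(a*b)) * (c - asinh(sinh(c)*exp(2*pi*a*t)))
   with respect to a, b and c, evaluated at one data point t. */
void curve_jacobian_row(double a, double b, double c, double t, double jac[3])
{
    double k = 2.0 * 3.14159265358979323846 * t;   /* 2*pi*t */
    double u = sinh(c) * exp(k * a);
    double r = sqrt(1.0 + u * u);
    double g = c - asinh(u);

    jac[0] = -g / (a * a * b) - (k * u) / (a * b * r);    /* df/da */
    jac[1] = -g / (a * b * b);                            /* df/db */
    jac[2] = (1.0 - cosh(c) * exp(k * a) / r) / (a * b);  /* df/dc */
}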
 
Last edited by a moderator:
  • #45
Yeah, I thought that. I don't want to use Levenberg-Marquardt though, for that reason. I was wondering about an ant colony search - that seems pretty robust, but I don't think my problem should require that. Do you think it is just the initial simplex that is different, and I could adjust the initial simplex code to match the 5% thingy?
 
  • #46
Ah, well, it's not that difficult to calculate the derivatives for your curve-fitting function and you'll get way better results!

But yes, I suspect you can adjust the initial simplex to match the 5% thingy.
Beyond the initial simplex, the algorithm appears to be deterministic.
 
  • #47
Levenberg-Marquardt. Do you know of any code? It seems harder to implement than the downhill simplex.
 
  • #48
I have the "Numerical Recipes in C" code (copyrighted I'm afraid though not expensive) that implements it and I have used that in the past.
I was very happy with it and swore to myself that would be the only algorithm I'd ever use to fit models (assuming I'd be able to calculate the derivatives).

It's just a function with a bunch of parameters just like your Nelder-Mead function.
The only real difference is that you need to supply a function as an argument that also calculates the derivatives for the set of parameters.

A quick Google search yielded this link:
http://www.ics.forth.gr/~lourakis/levmar/
which appears to be a freely downloadable implementation.
 
  • #49
levmar, hey. If I get it, would you be able to help me implement it if I struggle?
 
  • #50
Sure! :)
 
