Optimization Solver - BFGS method with bound constraints

In summary, the thread discusses the poster's research project, which involves writing a solver for a complex objective function of 5 variables with bound constraints. The poster's steepest-descent solver converges, but too slowly (almost 3000 iterations), so they turned to the quasi-Newton BFGS method, which should converge in far fewer iterations. Off-the-shelf solvers such as GRG2 (used by the Excel solver) and L-BFGS-B solve the problem well, but the poster's own BFGS implementation with bound constraints converges to a suboptimal answer. They have debugged the code and compared its iterates against the Excel solver, but are still unsure what is going wrong, and have shared their code and references for review.
  • #1
matts0611
Hello, I am working on a research project that requires me to write a solver for a particular problem. I could really use some math advice if anyone is willing to assist.

I need to minimize a non-linear objective function of 5 variables.
It is a pretty complex function. Each variable has a minimum and maximum value it can take.

I wrote an iterative solver for it in C that uses the "steepest descent" method to find the minimum. At each point it calculates the negative gradient by numerical differentiation and then does a line search in that direction to find the step size. The step size is found by quadratic interpolation and is bounded in that direction by the constraints.
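For concreteness, the gradient step is just a central difference in each coordinate. A minimal sketch of that idea (illustrative names, not my actual code; h is the differencing step):

[code]
#include <vector>

// Central-difference approximation of the gradient of f at x.
// One pair of f-evaluations per coordinate (5 coordinates in my case).
void numerical_gradient(double (*f)(const std::vector<double>&),
                        const std::vector<double>& x,
                        std::vector<double>& g, double h)
{
    std::vector<double> xp = x, xm = x;
    for (std::size_t i = 0; i < x.size(); ++i) {
        xp[i] = x[i] + h;
        xm[i] = x[i] - h;
        g[i] = (f(xp) - f(xm)) / (2.0 * h);
        xp[i] = x[i];  // restore before moving to the next coordinate
        xm[i] = x[i];
    }
}
[/code]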

This seems to give me a fairly good result except that it takes too long to converge (almost 3000 iterations). I need something that converges much faster than this.

I did some reading, and it seems that for non-linear problems the quasi-Newton (second-order) method known as BFGS (Broyden-Fletcher-Goldfarb-Shanno) is a popular and well-regarded choice.

Basically, it iteratively constructs an approximation of the second derivative as it steps through the solution space. On the first step it simply uses the steepest descent direction. This method is traditionally for unconstrained problems.

I found a few solvers that use similar methods. One is called GRG2 (Generalized Reduced Gradient) and is used in many programs such as the MS Excel solver. I tried my problem in the Excel solver and it works very well. It finds the solution in about 25 or so iterations. This is about what I want. This solver was written by Leon Lasdon.

I also found another solver called L-BFGS-B that uses BFGS with bound constraints but is built for problems with a large number of variables (the L stands for limited-memory).
I found a Python wrapper for this solver and used it to solve my function and it performs well also.

I learned how BFGS works by reading some texts and looking around online. Basically, at every iteration the search is in the direction D = -(H^-1)*G,
where G is the gradient at the current position and H is the approximate Hessian matrix (the matrix of second-derivative values).
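To make that concrete, most implementations keep the inverse approximation B ≈ H^-1 directly, so the direction is just a matrix-vector product and no explicit inversion is needed. A minimal sketch (illustrative, not my actual code):

[code]
#include <vector>

// d = -B * g, where B approximates the inverse Hessian H^-1
// and g is the gradient at the current point.
void search_direction(const std::vector<std::vector<double>>& B,
                      const std::vector<double>& g,
                      std::vector<double>& d)
{
    for (std::size_t i = 0; i < g.size(); ++i) {
        double s = 0.0;
        for (std::size_t j = 0; j < g.size(); ++j)
            s += B[i][j] * g[j];
        d[i] = -s;
    }
}
[/code]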

I tried to write a solver that uses this method to solve my problem, but it doesn't seem to work correctly. I made the same modifications as in the steepest descent version (bounding the step size by looking at the bounds of the variables in that direction), but I can't get it to work correctly. It converges, but to an answer that is not very optimal.

It seems to work on other math problems that I've tried, but for some reason not on the problem I need to solve. I compare my iteration results with the Excel solver, and even on the second iteration I move in the wrong direction; yet when I check the math by hand, it is indeed the direction my algorithm says I should be going in. I have double- and triple-checked my math against example problems, and I seem to be doing the right thing (without taking the bound constraints into account).

I think the problem lies with how I modified the algorithm to work with bound constraints but I'm not sure. I have tried reading papers by Byrd about the L-BFGS-B algorithm to see how it works but I'm having a hard time understanding what I'm doing wrong.

I know this post is really long but I'm in need of assistance on this.
Does anyone have any knowledge in this area and is willing to help?
I can compensate you for your time if needed.

Thank you,
Matt
 
  • #2
If it's going bad in the second iteration, there is some obvious error.

This happens to me often. I'd suggest doing some sanity-check experiments, like more tractable variants of your problem, so you can actually print out *every* calculation and verify by hand.

I'd offer to check the code, unless it's too much effort to share it, etc.
 
  • #3
gopalakr said:
If it's going bad in the second iteration, there is some obvious error.

This happens to me often. I'd suggest doing some sanity-check experiments, like more tractable variants of your problem, so you can actually print out *every* calculation and verify by hand.

I'd offer to check the code, unless it's too much effort to share it, etc.

Thank you very much for replying.

So at the zeroth iteration they both start at the same point. Then on iteration 1 they move to the same point. Then on the next iteration they move to different points.

I have verified my calculations by hand, and my solver is doing what it's *supposed* to do according to my algorithm. But I guess mine and the Excel solver are using different logic.

My algorithm basically works like this:

X = initial start position
H = identity matrix

for (0 ... max iterations)
{
    Calculate the gradient G at X by numerical differentiation.
    Set search direction D = -inverse(H) * G.

    Calculate the step size along D by doing a line search: bracket the minimum, approximate a quadratic using 3 points, and set X = minimum of the quadratic. I use the bounds of the variables to set a maximum step size in that direction.

    Check for convergence (gradient close enough to 0, or change in function value very small for many consecutive iterations).

    If slope at X(i) > slope at X(i-1), update the H matrix using the BFGS formula;
    else reset H to the identity matrix.
}
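For reference, the textbook rank-two BFGS update of the direct Hessian approximation is the formula bfgs_update is based on. This is a sketch under my notation, not my exact code; s is the step taken and y is the change in gradient:

[code]
#include <vector>

// H += (y*y^T)/(y^T*s) - (H*s*s^T*H)/(s^T*H*s)
// with s = x_new - x_old and y = grad_new - grad_old.
// Only valid when y^T*s > 0 (the curvature condition); the
// "slope" test in the loop above guards this, and H is reset
// to the identity otherwise.
void bfgs_update(std::vector<std::vector<double>>& H,
                 const std::vector<double>& s,
                 const std::vector<double>& y)
{
    const std::size_t n = s.size();

    std::vector<double> Hs(n, 0.0);          // Hs = H * s
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            Hs[i] += H[i][j] * s[j];

    double ys = 0.0, sHs = 0.0;              // y^T*s and s^T*H*s
    for (std::size_t i = 0; i < n; ++i) {
        ys  += y[i] * s[i];
        sHs += s[i] * Hs[i];
    }

    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            H[i][j] += y[i] * y[j] / ys - Hs[i] * Hs[j] / sHs;
}
[/code]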


I've attached two images from the book I read (the chapter was written by the developer of the Excel solver) showing how I came up with this algorithm. (They will help in understanding the code as well.)

I also attached my code. (Its a Visual Studio project)

The functions that are most important are in main.cpp
main() - has the main iteration loop
f() - returns the function value at a particular point (calls a function in another file)
grad() - returns the gradient at point x through numerical differentiation
get_search_direct_bfgs() - returns the search direction using BFGS
line_search_quad() - moves in the given search direction by approximating a quadratic using 3 points
bfgs_update() - updates the H matrix (the approximate Hessian matrix)

It will print out most of the calculations it does each loop (also puts them in a log file log.txt)

I've also attached a paper I found on another solver called L-BFGS-B
They do some advanced things for handling a large number of variables that I don't need, but they handle bound constraints. I'm having a hard time understanding how their solver actually works, though, and the mathematics behind it.

My code:
http://dl.dropbox.com/u/35398210/steepest_grg.zip

Algorithm I used for BFGS
http://dl.dropbox.com/u/35398210/BFGS.JPG

Algorithm I used for step size
http://dl.dropbox.com/u/35398210/Quad%20Interp.JPG

Paper on L-BFGS-B algorithm that I can't figure out:
http://dl.dropbox.com/u/35398210/Limited%20Memory%20-%20Bound%20Constrained%20Optimization.pdf
 
  • #4
I am not surprised you are not able to mimic the Excel solver; developers may provide neat descriptions, but the implementation could be a whole lot different. I don't have Visual Studio (or Windows), so I can't run it. But this could take a while, if I ever get the time to go over it.

One debugging step you can try is to skip the line search and just use a small but constant step size over the iterations. That way you would be closer to identifying the bug, if it's in there.
 
  • #5
Two questions in main.cpp, in the function line_search_quad:

i) on line 671:

inc = inc/5 seems a little too aggressive, but you would know better.
If I understand correctly, you are essentially checking different inc values against the same f(x_old) (to check whether g(1) > g(0)).


ii) line 695: shouldn't the loop begin with alpha = 0? (That is why we checked g(1) > g(0) above.)

on line 698

the condition you check is:

g(x_old, alpha, scaled_sd) > g(x_old, alpha - inc, scaled_sd)

based on the actual algorithm, shouldn't this be something like

g(x_old, alpha, scaled_sd) > g(x_old, alpha - inc, scaled_sd) &&
g(x_old, alpha - (2*inc), scaled_sd) < g(x_old, alpha - inc, scaled_sd)

Maybe it is trivially met, but it doesn't seem so to me; just check.
 
  • #6
gopalakr said:
Two questions in main.cpp, in the function line_search_quad:

i) on line 671:

inc = inc/5 seems a little too aggressive, but you would know better.
If I understand correctly, you are essentially checking different inc values against the same f(x_old) (to check whether g(1) > g(0)).

ii) line 695: shouldn't the loop begin with alpha = 0? (That is why we checked g(1) > g(0) above.)

on line 698

the condition you check is:

g(x_old, alpha, scaled_sd) > g(x_old, alpha - inc, scaled_sd)

based on the actual algorithm, shouldn't this be something like

g(x_old, alpha, scaled_sd) > g(x_old, alpha - inc, scaled_sd) &&
g(x_old, alpha - (2*inc), scaled_sd) < g(x_old, alpha - inc, scaled_sd)

Maybe it is trivially met, but it doesn't seem so to me; just check.

Thank you for helping me, I really appreciate this.

i) Yes, you are right. inc is just the increment step size for searching the line, so I want to choose a starting increment that does not result in too big a step. So I make sure I choose one where the increment step at least results in a smaller value than the current point.

ii) Yes, I believe it is in fact trivially met.

Because we know that moving a step of size inc along the line from our current position results in a smaller value, and we stop searching at the first increase, finding the first increase at step size alpha guarantees g(alpha) > g(alpha - inc), since this is by definition an increase in the function along that line. And since it was by definition the *first* increase, we know g(alpha - inc) < g(alpha - 2*inc).
(g(alpha - inc) must be less than g(alpha - 2*inc) because we take the first increase as our C value.)
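For anyone reading along, once the bracket (a, g(a)), (b, g(b)), (c, g(c)) is found, the quadratic step boils down to the standard parabolic-interpolation formula. A minimal sketch (not my exact code):

[code]
// Abscissa of the vertex of the parabola through (a, fa), (b, fb),
// (c, fc). Assumes a proper bracket: a < b < c, fb < fa, fb < fc.
double quad_min(double a, double fa, double b, double fb,
                double c, double fc)
{
    double num = (b - a) * (b - a) * (fb - fc)
               - (b - c) * (b - c) * (fb - fa);
    double den = (b - a) * (fb - fc) - (b - c) * (fb - fa);
    return b - 0.5 * num / den;  // caller must guard den == 0
}
[/code]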

Comparing it to the Excel solver, it's not so much that it takes the wrong step size; it in fact moves in the wrong *direction* after the first iteration (in which it simply moves in the steepest descent direction).

Since I'll never fully know what the Excel solver does, it may be futile to try to mimic it exactly. Perhaps I am better off trying to follow more closely how the L-BFGS-B solver works. I linked the paper that the authors of that solver wrote, but I'm having a really tough time following it to figure out what they're doing.

Also, I found this book that's available free online:
"Iterative methods for optimization" by Kelley
http://www.caam.rice.edu/~zhang/caam554/KelleyBooks/fr18_book.pdf

Chapter 4 talks about BFGS, and Chapter 5 talks about using steepest descent and BFGS for bound-constrained problems. I find it hard to figure out what they are doing differently from me (I know they are using a different step size, but I think quadratic interpolation is a better way to go).

I'm confused about the "correct" way to handle the constraints: whether the way I'm doing it (simply setting a max step size each iteration based on the bounds and direction) is OK, or whether I need to do something more complex because of how the Hessian matrix is built up iteratively. Maybe because I'm using constraints the Hessian is not building correctly, but I'm not sure. I can't find any examples online of how to solve this kind of problem.
 
  • #7
I do not know about L-BFGS-B, and the paper does seem dense.

I think it is a low-memory version of the original algorithm and involves storing only parts of matrices and cute hacks like that. Those may improve efficiency, but they won't significantly change the algorithm's path. So maybe you should get the basic version working before worrying about performance optimizations (since you already have the entire code in place).

Note that step size and direction are not really independent: where you are (x) determines the slope f'(x) at that point, right?
 
  • #8
gopalakr said:
I do not know about L-BFGS-B, and the paper does seem dense.

I think it is a low-memory version of the original algorithm and involves storing only parts of matrices and cute hacks like that. Those may improve efficiency, but they won't significantly change the algorithm's path. So maybe you should get the basic version working before worrying about performance optimizations (since you already have the entire code in place).

Note that step size and direction are not really independent: where you are (x) determines the slope f'(x) at that point, right?

Yes, L-BFGS-B has extra machinery to handle problems with a large number of variables, so it can optionally use approximations that keep the Hessian representation small. But those complexities should be safe to ignore, since I'm not concerned with that part. There are options in the solver for how much of the Hessian is stored, and if it's set high enough it should be equivalent to not having any of those optimizations.

Yes, at every point the gradient of the function at X is evaluated again to get a new f'(x).
 
  • #9
matts0611 said:
I'm confused about the "correct" way to handle the constraints: whether the way I'm doing it (simply setting a max step size each iteration based on the bounds and direction) is OK, or whether I need to do something more complex because of how the Hessian matrix is built up iteratively. Maybe because I'm using constraints the Hessian is not building correctly, but I'm not sure. I can't find any examples online of how to solve this kind of problem.

The correct way (I thought you were doing this?) is to incorporate the constraints into the objective function using Lagrange multipliers, in such a way that the new function has that many more variables (one Lagrange multiplier for each constraint) but the minimum can still be obtained by Newton-like methods. I don't know if that insight is of any help here, though.

for more formal constrained optimization, you can see Boyd's book/lectures (all free)

http://www.stanford.edu/~boyd/cvxbook/

http://www.stanford.edu/class/ee364a/

http://www.stanford.edu/class/ee364b/
 
  • #10
gopalakr said:
The correct way (I thought you were doing this?) is to incorporate the constraints into the objective function using Lagrange multipliers, in such a way that the new function has that many more variables (one Lagrange multiplier for each constraint) but the minimum can still be obtained by Newton-like methods. I don't know if that insight is of any help here, though.

for more formal constrained optimization, you can see Boyd's book/lectures (all free)

http://www.stanford.edu/~boyd/cvxbook/

http://www.stanford.edu/class/ee364a/

http://www.stanford.edu/class/ee364b/

No, I do not use Lagrange multipliers. (I honestly don't know much about them; in the small amount of reading I did on them while learning about optimization, I was confused about how to use them and why.)

My solver only handles constraints of the form l(i) < X(i) < u(i) on each objective-function variable. These values are stored in the arrays x_min and x_max.

If a variable is already at its bound, the search direction is modified so as not to go outside the bounds. So if X(1) can be at most 5 and is currently 5, and the search direction points toward increasing X(1), then element (1) of the search-direction vector is zeroed out, because that variable cannot be increased any more.

After a search direction is chosen and we want to calculate a step size in that direction, the maximum step size that can be taken is computed by get_max_step(). This sets an upper bound on how far we can move in that direction, which always keeps us inside the box of valid values.
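Concretely, the logic looks something like this sketch (names loosely mirror my code; it is illustrative, not the exact implementation):

[code]
#include <algorithm>
#include <limits>
#include <vector>

// Zero out direction components that would push a variable already
// pinned at a bound further outside, then return the largest alpha
// such that x + alpha*d stays inside the box x_min <= x <= x_max.
double get_max_step(std::vector<double>& d,
                    const std::vector<double>& x,
                    const std::vector<double>& x_min,
                    const std::vector<double>& x_max)
{
    double max_step = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < x.size(); ++i) {
        if ((x[i] >= x_max[i] && d[i] > 0.0) ||
            (x[i] <= x_min[i] && d[i] < 0.0))
            d[i] = 0.0;  // variable is pinned at its bound
        if (d[i] > 0.0)
            max_step = std::min(max_step, (x_max[i] - x[i]) / d[i]);
        else if (d[i] < 0.0)
            max_step = std::min(max_step, (x_min[i] - x[i]) / d[i]);
    }
    return max_step;
}
[/code]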

Is this not a valid way to handle this type of constraint?
 
  • #11
matts0611 said:
My solver only handles constraints of the form l(i) < X(i) < u(i) on each objective-function variable. These values are stored in the arrays x_min and x_max.

If a variable is already at its bound, the search direction is modified so as not to go outside the bounds. So if X(1) can be at most 5 and is currently 5, and the search direction points toward increasing X(1), then element (1) of the search-direction vector is zeroed out, because that variable cannot be increased any more.

After a search direction is chosen and we want to calculate a step size in that direction, the maximum step size that can be taken is computed by get_max_step(). This sets an upper bound on how far we can move in that direction, which always keeps us inside the box of valid values.

Is this not a valid way to handle this type of constraint?

Well, by 'post-fixing' the bounds, you are no longer optimizing the full multivariate function. If a bound is reached on X(1) and you hold it there for subsequent iterations, you won't really reach the right minimum. You want the point within the constraints that has the least f(x), and maybe there are no good minima at X(1,:,...). Does that make sense? Visualize it geometrically.

Why don't you try just removing the constraints, just for the heck of it, to make sure the optimization follows a reasonable path?

Lagrangians are an intuitive way to fold constraints into an objective function; if you have the time, they are the right thing to do! The following is a very sloppy crash course in constrained optimization :P

Let the objective function to minimize be f(x).
Let the inequality constraint be g(x) [itex]\leq[/itex] k; it can be rewritten as the equation g(x) + c = k, where the *slack variable* c [itex]\geq[/itex] 0.

You minimize a new objective function f(x) + [itex]\lambda_{1}[/itex](g(x) + c - k), which automatically takes care of both minimizing the original objective and staying within the constraints. [itex]\lambda_{1}[/itex] is called the Lagrange multiplier. There are whole areas (LP, QP, etc.) dedicated to this.
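Written out a little more carefully (dropping the slack variable), the standard KKT form for minimizing f(x) subject to g(x) [itex]\leq[/itex] k is:

[tex]
\nabla f(x) + \lambda \nabla g(x) = 0, \qquad
\lambda \geq 0, \qquad
g(x) \leq k, \qquad
\lambda \left( g(x) - k \right) = 0
[/tex]

The last condition (complementary slackness) says the multiplier is zero unless the constraint is active, which is exactly the "pinned at a bound or not" distinction you are implementing by hand.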
 
  • #12
gopalakr said:
Well, by 'post-fixing' the bounds, you are no longer optimizing the full multivariate function. If a bound is reached on X(1) and you hold it there for subsequent iterations, you won't really reach the right minimum. You want the point within the constraints that has the least f(x), and maybe there are no good minima at X(1,:,...). Does that make sense? Visualize it geometrically.

Why don't you try just removing the constraints, just for the heck of it, to make sure the optimization follows a reasonable path?

Lagrangians are an intuitive way to fold constraints into an objective function; if you have the time, they are the right thing to do! The following is a very sloppy crash course in constrained optimization :P

Let the objective function to minimize be f(x).
Let the inequality constraint be g(x) [itex]\leq[/itex] k; it can be rewritten as the equation g(x) + c = k, where the *slack variable* c [itex]\geq[/itex] 0.

You minimize a new objective function f(x) + [itex]\lambda_{1}[/itex](g(x) + c - k), which automatically takes care of both minimizing the original objective and staying within the constraints. [itex]\lambda_{1}[/itex] is called the Lagrange multiplier. There are whole areas (LP, QP, etc.) dedicated to this.

Say you want to minimize (something simple) the function F(x1, x2) = -(x1^2) + 2*x2
with the bound constraints:

-5 < x1 < 4
2 < x2 < 10

What would be the objective function using lagrange multipliers?
 
  • #13
matts0611 said:
Say you want to minimize (something simple) the function F(x1, x2) = -(x1^2) + 2*x2
with the bound constraints:

-5 < x1 < 4
2 < x2 < 10

What would be the objective function using lagrange multipliers?

OK, I am not completely sure this is it, but I believe it should be something like:

(-x1^2 + 2*x2) + [itex]\lambda_{1}[/itex](-x1 - 5 + c1) + [itex]\lambda_{2}[/itex](x1 - 4 + c2) + [itex]\lambda_{3}[/itex](x2 - 10 + c3) + [itex]\lambda_{4}[/itex](-x2 + 2 + c4)

For non-linear objective functions (here quadratic) with inequality constraints, there are additional conditions, the Karush-Kuhn-Tucker (KKT) conditions (http://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions), that the Lagrange multipliers and slack variables must satisfy; these give enough equations to solve for all the variables. I think you need Quadratic Programming to solve this.
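If I differentiate that Lagrangian with respect to x1 and x2, the stationarity conditions for this example should come out to:

[tex]
-2x_1 - \lambda_1 + \lambda_2 = 0, \qquad
2 + \lambda_3 - \lambda_4 = 0
[/tex]

together with [itex]\lambda_i \geq 0[/itex] and complementary slackness [itex]\lambda_i c_i = 0[/itex]. As a sanity check on the example itself: since -x1^2 is concave, the box minimum sits at a corner, namely (x1, x2) = (-5, 2) with F = -21.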

I'm fast reaching the end of my expertise in optimization... I may be totally misleading you with all this... we need more experienced folk on this thread! Somebody, SOS...
 

1. What is the BFGS method?

The BFGS (Broyden-Fletcher-Goldfarb-Shanno) method is an optimization algorithm used to solve unconstrained nonlinear optimization problems. It is an iterative method that approximates the inverse Hessian matrix of the objective function to efficiently find the optimal solution.

2. How does the BFGS method work?

The BFGS method works by iteratively updating the approximate inverse Hessian matrix using gradient information from the objective function. This updated matrix is then used to determine the search direction for the next iteration, with the goal of finding the optimal solution.
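Concretely, with step [itex]s_k = x_{k+1} - x_k[/itex], gradient change [itex]y_k = \nabla f_{k+1} - \nabla f_k[/itex], and [itex]\rho_k = 1/(y_k^T s_k)[/itex], the standard BFGS update of the inverse-Hessian approximation [itex]B_k \approx H_k^{-1}[/itex] is:

[tex]
B_{k+1} = \left( I - \rho_k s_k y_k^T \right) B_k \left( I - \rho_k y_k s_k^T \right) + \rho_k s_k s_k^T
[/tex]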

3. What are bound constraints in optimization?

Bound constraints in optimization refer to limits set on the values that the variables can take. These constraints restrict the search space and can help ensure that the optimal solution is within a given range of values.

4. How does the BFGS method handle bound constraints?

By itself, BFGS is an unconstrained method; variants such as L-BFGS-B handle bound constraints using a gradient projection approach. This involves projecting the search path onto the feasible region defined by the bound constraints, ensuring that the iterates remain within the specified bounds.
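A minimal sketch of the projection operation such methods rely on (componentwise clamping onto the box; illustrative only):

[code]
#include <algorithm>
#include <vector>

// Project x onto the box l <= x <= u, componentwise.
// This is the basic building block of gradient-projection methods.
void project_onto_box(std::vector<double>& x,
                      const std::vector<double>& l,
                      const std::vector<double>& u)
{
    for (std::size_t i = 0; i < x.size(); ++i)
        x[i] = std::min(std::max(x[i], l[i]), u[i]);
}
[/code]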

5. What are the advantages of using the BFGS method with bound constraints?

The BFGS method with bound constraints offers several advantages, including its ability to handle nonlinear objective functions and its efficient use of gradient information. Projecting onto simple bound constraints is also cheap, so the bounds can be enforced directly during the search rather than through penalty terms added to the objective.
