Gradient ascent with constraints

eren
Hi,
I have a convex function F(x,y) that I want to optimize. Since the derivative of F does not have a closed form, I want to use gradient ascent. The problem is that I have constraints on x and y.
I don't know how to incorporate these into the gradient search. If there were a closed form, I would use Lagrange multipliers. What should I do in this case?
Thanks,
 
You should find a way to incorporate the constraints into your programming of steepest ascent, such that if the maximization path meets a constraint (say an upper bound on x, U[x]), then it sets x = U[x] and follows the steepest ascent conditional on x = U[x] (that is, by changing y only). It should continue doing so until and unless x becomes free again (x < U[x]).
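A minimal sketch of that idea in Python, assuming scalar x and y, numerical (central-difference) gradients since the derivative has no closed form, and hypothetical upper bounds Ux and Uy; clipping each variable to its bound after the step keeps it at U[x] for as long as the constraint is active:

```python
import numpy as np

def grad(F, x, y, h=1e-6):
    # Central-difference estimate of the gradient (no closed-form derivative needed)
    dFdx = (F(x + h, y) - F(x - h, y)) / (2 * h)
    dFdy = (F(x, y + h) - F(x, y - h)) / (2 * h)
    return dFdx, dFdy

def constrained_ascent(F, x, y, Ux, Uy, step=0.1, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        gx, gy = grad(F, x, y)
        x_new = min(x + step * gx, Ux)   # ascent step, then enforce x <= Ux
        y_new = min(y + step * gy, Uy)   # ascent step, then enforce y <= Uy
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            break
        x, y = x_new, y_new
    return x, y

# Hypothetical concave objective whose unconstrained maximum (x=2, y=3)
# lies outside the bounds Ux = Uy = 1, so the ascent stops on the constraint:
F = lambda x, y: -(x - 2)**2 - (y - 3)**2
print(constrained_ascent(F, 0.0, 0.0, Ux=1.0, Uy=1.0))  # -> approximately (1.0, 1.0)
```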
 
Thanks a lot for the reply.
Does this mean it has nothing to do with Lagrange multipliers? In fact, I have a concave function F(x,y) in which x and y are vectors, and the constraints are upper bounds on the magnitudes of x and y. What exactly do you mean by "the maximization path meets the constraint"? From my understanding, the steepest ascent path does not usually meet the constraint.
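For this vector case with norm bounds ||x|| <= r_x and ||y|| <= r_y, the same idea becomes projected gradient ascent: take an unconstrained step, then rescale each vector back onto its ball whenever it leaves it. A minimal sketch along those lines, assuming Euclidean norms and hypothetical radii r_x and r_y:

```python
import numpy as np

def project_ball(v, r):
    # Project v onto the Euclidean ball of radius r: rescale only if it is too long
    n = np.linalg.norm(v)
    return v if n <= r else v * (r / n)

def projected_ascent(F, x, y, r_x, r_y, step=0.1, n_iter=5000, h=1e-6):
    for _ in range(n_iter):
        # Numerical gradients with respect to each vector (central differences)
        gx = np.array([(F(x + h*e, y) - F(x - h*e, y)) / (2*h) for e in np.eye(len(x))])
        gy = np.array([(F(x, y + h*e) - F(x, y - h*e)) / (2*h) for e in np.eye(len(y))])
        # Unconstrained ascent step, then project back onto the feasible balls
        x = project_ball(x + step * gx, r_x)
        y = project_ball(y + step * gy, r_y)
    return x, y
```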
 
