Gradient ascent with constraints

SUMMARY

The discussion centers on optimizing a convex function F(x,y) using gradient ascent while adhering to constraints on the variables x and y. The participants emphasize the need to incorporate these constraints directly into the gradient ascent algorithm, particularly when the maximization path encounters an upper bound on x, denoted as U[x]. Instead of relying on Lagrange multipliers, the approach involves adjusting the optimization path to maintain compliance with the constraints, allowing for conditional updates to y while keeping x fixed at U[x] when necessary.

PREREQUISITES
  • Understanding of convex functions and their properties
  • Familiarity with gradient ascent optimization techniques
  • Knowledge of constraints in optimization problems
  • Basic concepts of steepest descent methods
NEXT STEPS
  • Study the implementation of gradient ascent with constraints in Python using libraries like NumPy
  • Learn about projection methods for constrained optimization
  • Explore the relationship between convex functions and Lagrange multipliers
  • Investigate the use of penalty methods in optimization to handle constraints
USEFUL FOR

Mathematicians, data scientists, and machine learning practitioners who are involved in optimizing functions under constraints, particularly those utilizing gradient ascent methods.

eren
Hi,
I have a convex function F(x,y) that I want to optimize. Since the derivative of F does not have a closed form, I want to use gradient ascent. The problem is that I have constraints on x and y.
I don't know how to incorporate these into the gradient search. If there were a closed form, I would use Lagrange multipliers. What should I do in this case?
Thanks,
 
You should find a way to incorporate the constraints into your programming of steepest ascent, such that if the maximization path meets a constraint (say an upper bound on x, U[x]), then it should set x = U[x] and follow the steepest ascent conditional on x = U[x] (that is, by changing y only). It should continue doing so until and unless x becomes free again (x < U[x]).
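The clamp-and-continue scheme described above is essentially projected gradient ascent: take an unconstrained ascent step, then project back into the feasible box. For box constraints the projection simply clips each coordinate, which pins a bounded variable at its limit while the free variables keep moving, exactly as suggested. A minimal NumPy sketch (the function names, step size, and the quadratic test objective below are illustrative, not from the thread):

```python
import numpy as np

def projected_gradient_ascent(grad_f, z0, lower, upper,
                              lr=0.1, tol=1e-8, max_iter=1000):
    """Maximize F subject to box constraints lower <= z <= upper.

    grad_f : callable returning the gradient of F at z
             (could be a numerical estimate if no closed form exists).
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        # Unconstrained ascent step, then project back into the box.
        # Coordinates sitting at a bound stay fixed; the rest keep moving.
        z_new = np.clip(z + lr * grad_f(z), lower, upper)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# Illustrative example: maximize F(x, y) = -(x-3)^2 - (y-1)^2 with x <= 2.
# The unconstrained maximum is (3, 1); the constrained one is (2, 1).
grad = lambda z: np.array([-2.0 * (z[0] - 3.0), -2.0 * (z[1] - 1.0)])
z_star = projected_gradient_ascent(grad, [0.0, 0.0],
                                   lower=np.array([-10.0, -10.0]),
                                   upper=np.array([2.0, 10.0]))
```

Here x first moves toward 3, hits the bound U[x] = 2, and stays pinned there while y continues to its optimum, matching the behaviour described in the reply.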
 
Thanks a lot for the reply.
Does this mean it has nothing to do with Lagrange multipliers? Indeed, I have a concave function F(x,y) in which x and y are vectors, and the constraints are upper bounds on the magnitudes of x and y. What exactly do you mean by "the maximization path meets the constraint"? From my understanding, the steepest-ascent path does not usually meet the constraint.
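For the vector case raised here, with bounds on the magnitudes ‖x‖ ≤ R_x and ‖y‖ ≤ R_y, "meeting the constraint" means an ascent step lands outside the feasible ball. The same projection idea applies: the Euclidean projection onto a norm ball is a simple rescaling back onto its surface. A sketch, assuming Euclidean norms (the radius values are illustrative):

```python
import numpy as np

def project_onto_ball(v, radius):
    """Euclidean projection of v onto the ball {u : ||u|| <= radius}.

    If v is already feasible it is returned unchanged; otherwise it is
    rescaled onto the boundary of the ball.
    """
    norm = np.linalg.norm(v)
    if norm <= radius:
        return v
    return v * (radius / norm)

# After each ascent step, project each block of variables separately:
# x = project_onto_ball(x + lr * grad_x, R_x)
# y = project_onto_ball(y + lr * grad_y, R_y)
p = project_onto_ball(np.array([3.0, 4.0]), 1.0)  # infeasible point, rescaled
```

So the path "meets the constraint" whenever a raw step is infeasible; projecting it back is what keeps the iterates on (or inside) the boundary.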
 
