Gradient ascent with constraints

In summary: the original poster wants to maximize a function F(x, y) whose gradient has no closed form, subject to constraints on x and y, and asks how to handle the constraints within gradient ascent. The responder suggests building the constraints into the steepest-ascent code itself: when the ascent path hits a bound (say an upper bound U[x] on x), set x = U[x] and continue ascending in y alone until x becomes free again. The poster then asks whether this relates to Lagrange multipliers and clarifies that F is concave, x and y are vectors, and the constraints are upper bounds on their magnitudes.
  • #1
eren
Hi,
I have a convex function F(x,y) that I want to optimize. Since the derivative of F has no closed form, I want to use gradient ascent. The problem is that I have constraints on x and y.
I don't know how to incorporate these into the gradient search. If there were a closed form, I would use Lagrange multipliers. What should I do in this case?
Thanks,
 
  • #2
You should find a way to incorporate the constraints into your programming of steepest ascent, such that if the maximization path meets a constraint (say an upper bound on x, U[x]), then it should set x = U[x] and follow the steepest ascent conditional on x = U[x] (that is, by changing y only). It should continue doing so until x becomes free again (x < U[x]).
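A minimal Python sketch of this idea (an illustration under assumed names and step size, not a tested recipe for your problem; it assumes componentwise upper bounds and estimates the gradient numerically, since the derivative has no closed form):

```python
import numpy as np

def numerical_grad(F, z, eps=1e-6):
    """Central-difference estimate of the gradient of F at z."""
    g = np.zeros_like(z)
    for i in range(len(z)):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (F(z + e) - F(z - e)) / (2 * eps)
    return g

def clamped_ascent(F, z0, upper, lr=0.01, steps=1000):
    """Steepest ascent with upper bounds: after each step, any
    coordinate above its bound is set to the bound (z = U[z]).
    A clamped coordinate moves again only when the gradient
    points back into the feasible region, i.e. it becomes free."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z = np.minimum(z + lr * numerical_grad(F, z), upper)
    return z

# Example: the unconstrained maximizer of F is (3, 2), which
# violates x <= 2, so the result should be roughly (2, 2).
F = lambda z: -(z[0] - 3) ** 2 - (z[1] - 2) ** 2
print(clamped_ascent(F, np.zeros(2), upper=np.array([2.0, 5.0])))
```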
 
  • #3
Thanks a lot for the reply.
Does this mean it has nothing to do with Lagrange multipliers? Indeed, I have a concave function F(x,y) in which x and y are vectors and the constraints are upper bounds on the magnitudes of x and y. What exactly do you mean by "the maximization path meets the constraint"? From my understanding, steepest ascent does not usually meet the constraint.
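For bounds on vector magnitudes rather than on individual coordinates, the clamping idea above becomes a rescaling: after each ascent step, pull each vector back onto its norm ball if the step left it. A minimal sketch, assuming Euclidean norms and hypothetical radii r_x and r_y:

```python
import numpy as np

def project_ball(v, radius):
    """Project v onto the ball {u : ||u|| <= radius}."""
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

def ascent_step(x, y, grad_x, grad_y, r_x, r_y, lr=0.01):
    """One projected-ascent step under ||x|| <= r_x, ||y|| <= r_y."""
    x = project_ball(x + lr * grad_x, r_x)
    y = project_ball(y + lr * grad_y, r_y)
    return x, y
```

For a concave F and a convex feasible set like this one, projected gradient ascent with a small enough step size converges to the constrained maximum.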
 

1. What is gradient ascent with constraints?

Gradient ascent with constraints is a mathematical optimization technique used to find the maximum value of a function while satisfying certain constraints. It involves iteratively adjusting the input variables in the direction of steepest ascent of the function, while keeping them feasible, until a maximum is reached.
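In its most common form (projected gradient ascent), each iteration takes an ordinary uphill step and then maps the result back to the nearest feasible point. A schematic sketch, where `project` stands for whatever projection the constraint set admits:

```python
def constrained_ascent(grad, project, z, lr=0.1, steps=100):
    """Generic projected gradient ascent: step uphill along the
    gradient, then map the iterate back onto the feasible set."""
    for _ in range(steps):
        z = project(z + lr * grad(z))
    return z
```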

2. How does gradient ascent with constraints differ from traditional gradient ascent?

Traditional gradient ascent considers only the objective function, while gradient ascent with constraints must also keep the iterates feasible. This makes the optimization process more complex, typically adding a step to each iteration, such as projecting the update back onto the feasible set.

3. What types of constraints can be used in gradient ascent with constraints?

There are various types of constraints that can be used, including equality constraints (where a function of the variables must equal a certain value), inequality constraints (where a function of the variables must be at most or at least a certain value), and box constraints (where each input variable must fall within a certain range).
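Each of these constraint types admits a closed-form Euclidean projection that can serve as the `project` step above; hypothetical NumPy sketches for the three cases:

```python
import numpy as np

# Box constraint lo <= z <= hi (componentwise).
def project_box(z, lo, hi):
    return np.clip(z, lo, hi)

# Inequality constraint a.z <= b: project onto the half-space.
def project_halfspace(z, a, b):
    excess = np.dot(a, z) - b
    return z if excess <= 0 else z - excess * a / np.dot(a, a)

# Equality constraint a.z = b: project onto the hyperplane.
def project_hyperplane(z, a, b):
    return z - (np.dot(a, z) - b) * a / np.dot(a, a)
```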

4. What are some applications of gradient ascent with constraints?

Gradient ascent with constraints has many applications in fields such as machine learning, engineering, and economics. It can be used to optimize parameters in a neural network, design optimal structures for buildings or bridges, or find the maximum profit in a business model subject to certain constraints.

5. What are some potential challenges when using gradient ascent with constraints?

One challenge is choosing constraints that accurately represent the problem at hand. The optimization process may also take longer, since each iteration must additionally enforce feasibility. There is also a risk of getting stuck in a local maximum instead of the global maximum when the objective is not concave.
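When the objective is not concave, a common mitigation for the local-maximum problem is to run the ascent from several random starting points and keep the best result; a hypothetical sketch reusing `constrained_ascent` from above:

```python
import numpy as np

def multistart(F, grad, project, dim, n_starts=20, seed=0):
    """Run projected ascent from several random starts and keep
    the point with the largest objective value."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        z = constrained_ascent(grad, project, rng.standard_normal(dim))
        if best is None or F(z) > F(best):
            best = z
    return best
```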
