
Gradient ascent with constraints

  1. Feb 13, 2008 #1
    Hi,
    I have a convex function F(x,y) that I want to optimize. Since the derivative of F has no closed form, I want to use gradient ascent. The problem is, I have constraints on x and y.
    I don't know how to incorporate them into the gradient search. If there were a closed form, I would use Lagrange multipliers. What should I do in this case?
    Thanks,
     
  3. Feb 13, 2008 #2

    EnumaElish
    Science Advisor
    Homework Helper

    You should find a way to incorporate the constraints into your programming of steepest ascent, so that if the maximization path meets a constraint (say an upper bound on x, U[x]), it sets x = U[x] and follows the steepest ascent conditional on x = U[x] (that is, by changing y only). It should continue doing so until and unless x becomes free again (x < U[x]).
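    The advice above amounts to what is usually called projected gradient ascent with box constraints: take an ordinary ascent step, then clamp any coordinate that overshoots its bound. A minimal sketch in Python/NumPy, using a made-up quadratic objective for illustration (the function `projected_gradient_ascent`, the step size, and the example F are all assumptions, not from the thread):

    ```python
    import numpy as np

    def projected_gradient_ascent(grad, x0, upper, lr=0.1, steps=200):
        """Gradient ascent with per-coordinate upper bounds (box constraints).

        grad  : callable returning the gradient of F at a point
        x0    : starting point (NumPy array)
        upper : per-coordinate upper bounds U[x]
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x + lr * grad(x)
            # Project back onto the feasible box: a clamped coordinate
            # sits at its bound while the others keep moving, and it
            # becomes free again as soon as the update pulls it inside.
            x = np.minimum(x, upper)
        return x

    # Illustration: maximize F(x, y) = -(x - 2)^2 - (y - 3)^2 subject to
    # x <= 1, y <= 5. The unconstrained maximizer (2, 3) violates x <= 1,
    # so the constrained solution is (1, 3).
    grad = lambda p: np.array([-2.0 * (p[0] - 2.0), -2.0 * (p[1] - 3.0)])
    sol = projected_gradient_ascent(grad, [0.0, 0.0], upper=np.array([1.0, 5.0]))
    print(np.round(sol, 3))  # → [1. 3.]
    ```

    The projection step is what makes x "meet" the constraint: the iterate rides along the bound x = U[x] while the ascent continues in the remaining free coordinates.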
     
  4. Feb 14, 2008 #3
    Thanks a lot for the reply.
    Does this mean it has nothing to do with Lagrange multipliers? Indeed, I have a concave function F(x,y) in which x and y are vectors, and the constraints are upper bounds on the magnitudes of x and y. What exactly do you mean by "the maximization path meets the constraint"? As I understand it, steepest ascent does not usually meet the constraint.
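    For the norm-bound case described in the question (upper bounds on the magnitudes of the vectors x and y), the same projection idea applies, though the thread does not spell it out: after each ascent step, any vector whose Euclidean norm exceeds its bound is rescaled back onto the boundary sphere. A minimal sketch, using a made-up concave objective F(x, y) = -||x - a||² - ||y - b||² for illustration (a, b, the radii, and the step size are all assumptions):

    ```python
    import numpy as np

    def project_to_ball(v, radius):
        """Project v onto {u : ||u||_2 <= radius}: rescale if v lies outside."""
        norm = np.linalg.norm(v)
        return v if norm <= radius else v * (radius / norm)

    # Made-up concave objective: F(x, y) = -||x - a||^2 - ||y - b||^2,
    # with a chosen so the unconstrained maximizer violates ||x|| <= 1.
    a = np.array([3.0, 0.0])
    b = np.array([0.0, 0.5])
    grad_x = lambda x, y: -2.0 * (x - a)
    grad_y = lambda x, y: -2.0 * (y - b)

    x = np.zeros(2)
    y = np.zeros(2)
    lr = 0.05
    for _ in range(500):
        x = project_to_ball(x + lr * grad_x(x, y), 1.0)  # enforce ||x|| <= 1
        y = project_to_ball(y + lr * grad_y(x, y), 1.0)  # enforce ||y|| <= 1

    print(np.round(x, 3), np.round(y, 3))  # x ends on the boundary, y interior
    ```

    Here the path does "meet" the constraint on x: the iterates are pushed back onto the sphere ||x|| = 1 each step, which is the norm-ball analogue of clamping a coordinate at x = U[x].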
     