Constrained gradient? (optimization theory)

  1. Apr 11, 2012 #1
    The first-order KKT condition for constrained optimization requires that the gradient of the Lagrangian be equal to zero; however, this does not necessarily imply that the objective function's gradient is equal to zero. Is it absurd to include in one's Lagrangian the requirement that the entry-wise components of the objective gradient are themselves zero? I realize that if one already has other constraints, this added requirement may make the problem infeasible when the two feasible sets do not intersect. Still, is it a practical method for finding "better" solutions, ones that satisfy the KKT conditions of the constrained problem and also satisfy the first-order condition of the unconstrained problem? My idea is to use the added constraint to direct the search toward a region where, had one started close to such a solution, the original constraints would effectively have been unnecessary.

    In one dimension the idea is simple (I use ± because different texts write the Lagrangian differently):

    f(x) = x^2, so f '(x) = 2x. Define the Lagrangian L = f(x) ± λ f '(x), subject to the constraint f '(x) = 0.
    Then x must equal zero, and therefore L = 0 ± λ·0 = 0.

    The purpose is multidimensional, nonlinear numerical optimization, so if I do this, then computing the first derivative of the Lagrangian effectively requires the second derivative (Hessian) of the objective, and computing the second derivative requires summing terms of the third derivative. Seems messy.
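    For concreteness, here is a rough sketch of the kind of setup I have in mind, using SciPy's SLSQP solver on a made-up two-variable problem (the objective and constraint below are purely illustrative, not from my actual application):

    Code:
    import numpy as np
    from scipy.optimize import minimize

    # Made-up objective: f(x, y) = (x - 2)^2 + (y + 1)^2
    def f(z):
        return (z[0] - 2.0)**2 + (z[1] + 1.0)**2

    def grad_f(z):
        return np.array([2.0 * (z[0] - 2.0), 2.0 * (z[1] + 1.0)])

    # An ordinary inequality constraint: x + y <= 0  (SciPy wants fun(z) >= 0)
    ordinary = {'type': 'ineq', 'fun': lambda z: -(z[0] + z[1])}

    # The extra constraints proposed above: every component of grad f is zero.
    # Their Jacobian is the Hessian of f, which is where the "one derivative
    # order higher" cost shows up.
    grad_is_zero = {'type': 'eq', 'fun': grad_f, 'jac': lambda z: 2.0 * np.eye(2)}

    plain = minimize(f, x0=np.zeros(2), method='SLSQP', constraints=[ordinary])
    augmented = minimize(f, x0=np.zeros(2), method='SLSQP',
                         constraints=[ordinary, grad_is_zero])

    print(plain.x, plain.success)          # boundary KKT point, roughly (1.5, -1.5)
    print(augmented.x, augmented.success)  # typically fails: grad f = 0 only at
                                           # (2, -1), which violates x + y <= 0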
     
  3. Apr 12, 2012 #2

    Office_Shredder
    Staff Emeritus, Science Advisor, Gold Member

    If there is an extremal point which is not on the boundary of the feasible set, the KKT conditions ARE that the gradient must be zero. If the extremal point is on the boundary, requiring that the gradient be zero will prevent you from finding that point. So all you're doing is throwing away possible solutions on the boundary of the feasible set.
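    To make the geometry concrete, here is a one-dimensional toy example (checked with SciPy's SLSQP; the numbers are illustrative only):

    Code:
    # minimize f(x) = x^2 subject to x >= 1.  The minimizer is x = 1, where
    # f'(1) = 2 is NOT zero.  Demanding f'(x) = 0 forces x = 0, which violates
    # x >= 1, so the augmented problem has an empty feasible set.
    import numpy as np
    from scipy.optimize import minimize

    f       = lambda x: float(x[0]**2)
    df      = lambda x: np.array([2.0 * x[0]])
    x_ge_1  = {'type': 'ineq', 'fun': lambda x: x[0] - 1.0}
    df_is_0 = {'type': 'eq',   'fun': df}

    print(minimize(f, [3.0], method='SLSQP', constraints=[x_ge_1]).x)
    # ~[1.0]  -- the boundary solution that the extra constraint throws away
    print(minimize(f, [3.0], method='SLSQP', constraints=[x_ge_1, df_is_0]).success)
    # typically False -- no point satisfies both x >= 1 and f'(x) = 0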
     
  4. Apr 12, 2012 #3
    The KKT requirement is that:

    ∇f(x^*) + \sum_{i=1}^m μ_i ∇g_i(x^*) + \sum_{j=1}^l λ_j ∇h_j(x^*) = 0,

    NOT that ∇f(x^*) = 0.

    It merely requires that the gradient of the Lagrangian be zero, NOT the gradient of the objective function.
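    A concrete instance of the distinction, on the toy problem "minimize x^2 subject to x >= 1" (just spelling out the arithmetic):

    Code:
    # At the minimizer x* = 1: f'(x*) = 2, and writing the constraint as
    # g(x) = 1 - x <= 0 gives g'(x*) = -1.  With multiplier mu = 2 the
    # stationarity condition of the Lagrangian holds,
    #     f'(x*) + mu * g'(x*) = 2 + 2*(-1) = 0,
    # even though the objective gradient f'(x*) = 2 is not zero.
    x_star, mu = 1.0, 2.0
    grad_f = 2.0 * x_star   # f'(x) = 2x
    grad_g = -1.0           # g'(x) = -1
    print(grad_f + mu * grad_g)   # 0.0 -> gradient of the Lagrangian vanishes
    print(grad_f)                 # 2.0 -> gradient of the objective does not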
    Does that change your opinion?
     
  5. Apr 12, 2012 #4

    Office_Shredder
    Staff Emeritus, Science Advisor, Gold Member

    Assuming you are using the notation of the Wikipedia article:
    http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions

    If you're in an unconstrained region, there are no h_j's, since those are equality constraints that must be satisfied. Furthermore, all of the μ_i's are zero, because complementary slackness requires μ_i g_i(x) = 0 at the critical point, and when the constraint g_i is inactive, g_i(x) is strictly smaller than zero, which forces μ_i = 0. So my opinion is unchanged, and I suggest you really think over the geometric implications of the KKT multipliers some more.
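    To see that mechanism on a small made-up example where the minimizer sits strictly inside the feasible region (again just a SciPy sanity check, nothing more):

    Code:
    # minimize f(x) = (x - 2)^2 subject to g(x) = x - 5 <= 0.
    # The minimizer x* = 2 has g(x*) = -3 < 0, so the constraint is inactive;
    # complementary slackness mu * g(x*) = 0 then forces mu = 0, and the
    # stationarity condition collapses to f'(x*) = 0.
    from scipy.optimize import minimize

    res = minimize(lambda x: (x[0] - 2.0)**2, [0.0], method='SLSQP',
                   constraints=[{'type': 'ineq', 'fun': lambda x: 5.0 - x[0]}])
    print(res.x)                   # ~[2.0], strictly inside the feasible set
    print(2.0 * (res.x[0] - 2.0))  # f' at the solution, ~0, matching mu = 0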
     
  6. Apr 12, 2012 #5
    However, this is not unconstrained optimization. I already have equality and inequality constraints, and I'm asking whether adding the gradient constraint on top of them is a good idea. It sounds like you are saying that it may be impossible to satisfy all of the constraints simultaneously (FYI, I already know why the multipliers must have a certain sign or be zero).
     