The first-order KKT conditions for constrained optimization require that the gradient of the Lagrangian be zero, but this does not necessarily imply that the objective function's gradient is zero. Is it absurd to include in one's Lagrangian the requirement that the entrywise components of the objective's gradient are themselves zero? I realize that if one already has other constraints, this extra requirement may make the problem infeasible when the two feasible sets are disjoint. Still, is it a practical method for finding "better" solutions, i.e. ones that satisfy the KKT conditions of the constrained problem and are also stationary points of the unconstrained objective? My idea is to use the constraint to steer the iterates toward a region where, had one started close to such a solution, the original constraints would effectively have been unnecessary.

In one dimension the idea is simple (I write ± because different texts use different sign conventions for the Lagrangian):

f(x) = x², so f'(x) = 2x. Define the Lagrangian L = f(x) ± λ f'(x), subject to the constraint f'(x) = 0.

The constraint forces x = 0, and at that point L = 0 ± λ·0 = 0.
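Numerically, the same idea can be tried by handing the derivative to an off-the-shelf constrained solver as an equality constraint. A minimal sketch in Python (assuming SciPy is available; the function names `f` and `fprime` are just illustrative):

```python
# Minimal sketch: minimize f(x) = x^2 with the extra equality
# constraint f'(x) = 2x = 0 given to a generic NLP solver.
# An illustration of the idea, not a recommended method.
from scipy.optimize import minimize

def f(x):       # objective
    return x[0] ** 2

def fprime(x):  # its derivative, imposed as an equality constraint
    return 2.0 * x[0]

res = minimize(f, x0=[3.0], method="SLSQP",
               constraints=[{"type": "eq", "fun": fprime}])
print(res.x[0])  # converges to x = 0, where f and f' both vanish
```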

The purpose is multidimensional, nonlinear numerical optimization, so if I do this, then computing the first derivative of the Lagrangian effectively requires the second derivative (Hessian) of the objective, and computing the second derivative of the Lagrangian requires sums of components of the third derivative. Seems messy.
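To see that derivative bookkeeping concretely, here is a pure-Python sketch of Newton's method on the stationarity system of L(x, λ) = f(x) + λ f'(x) for the 1-D example above: the residual already involves f'', and the Jacobian involves f''' (which happens to vanish for f(x) = x², so Newton converges in one step here). The helper names are hypothetical.

```python
# Newton's method on grad L = 0 for L(x, lam) = f(x) + lam * f'(x),
# with f(x) = x^2, so f'(x) = 2x, f''(x) = 2, f'''(x) = 0.
def f_p(x):   return 2.0 * x    # f'(x)
def f_pp(x):  return 2.0        # f''(x)  -- needed in the residual
def f_ppp(x): return 0.0        # f'''(x) -- needed in the Jacobian

x, lam = 3.0, 1.0
for _ in range(20):
    # Residual of grad L: [f' + lam*f'', f']
    r1 = f_p(x) + lam * f_pp(x)
    r2 = f_p(x)
    # Jacobian of the residual: [[f'' + lam*f''', f''], [f'', 0]]
    a, b = f_pp(x) + lam * f_ppp(x), f_pp(x)
    c, d = f_pp(x), 0.0
    det = a * d - b * c
    # Solve J * [dx, dlam] = -[r1, r2] by Cramer's rule
    dx   = ((-r1) * d - b * (-r2)) / det
    dlam = (a * (-r2) - c * (-r1)) / det
    x, lam = x + dx, lam + dlam

print(x, lam)  # both converge to 0
```

Note that the two-derivative gap is intrinsic: first-order stationarity of this augmented Lagrangian uses the Hessian of f, so a Newton step on it uses third derivatives.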

**Physics Forums | Science Articles, Homework Help, Discussion**

# Constrained gradient? (optimization theory)
