# Optimization subject to inequality constraint

1. Sep 10, 2011

### KevinL

For my economics/game theory thesis I need to optimize a function subject to an inequality constraint.

maximize f(x1, x2) = 1/(x1+x2+y1+y2-w) subject to g(x1, x2) = x1+x2+y1+y2 < w

This isn't particularly important, but the x and y variables are quantities of production by a firm. The objective function given is the cost function, and it's set up so that each additional unit of production is more costly. But for obvious reasons, the total quantity of production needs to be less than w, or else the cost function would be positive.

This would be very easy to optimize with Lagrange multipliers if the constraint were an equality, but I never learned how to optimize subject to an inequality. I've looked up KKT conditions on Wikipedia, but it may as well be written in Greek (and for the most part, it is :D).

Can someone point me in the right direction for how to solve something like this?

EDIT: I gave the cost function, so obviously it would make more sense to minimize. My objective function is ACTUALLY a profit function, which is why I said maximize.

2. Sep 10, 2011

### Ray Vickson

Optimization problems with strict inequalities might not have any solutions; for example, the problem min x subject to x > 0 has no solution. Usually we use non-strict inequalities. You also need the constraints x1,x2,y1,y2 >= 0 if they represent actual production levels.

Rather than maximizing f, let's minimize 1/f = x1+x2+y1+y2-w, so now we have the problem:
min x1+x2+y1+y2-w, s.t. x1+x2+y1+y2-w <= 0, x1,x2,y1,y2>= 0. This is a simple 1-constraint linear program (since we don't include variables >= 0 in the constraint count.) This problem has no solution (or, rather, has an *unbounded* solution): we can take x1=x2=y1=y2=0 and let w --> infinity. The constraints all hold, and the objective --> -infinity, which is certainly a minimum in anybody's book.
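This unboundedness is easy to check numerically. A minimal Python sketch (my own, not from the thread), treating w as a free variable just as the argument above does:

```python
# With x1 = x2 = y1 = y2 = 0, the point stays feasible for every w >= 0,
# while the objective x1 + x2 + y1 + y2 - w = -w decreases without bound.

def objective(x1, x2, y1, y2, w):
    return x1 + x2 + y1 + y2 - w

def feasible(x1, x2, y1, y2, w):
    return (x1 + x2 + y1 + y2 - w <= 0
            and min(x1, x2, y1, y2) >= 0)

values = []
for w in (1.0, 10.0, 100.0, 1000.0):
    assert feasible(0, 0, 0, 0, w)      # the constraints all hold
    values.append(objective(0, 0, 0, 0, w))

print(values)  # -> [-1.0, -10.0, -100.0, -1000.0]
```

Each step of w drops the objective by an order of magnitude, so no finite minimum exists.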

RGV

3. Sep 11, 2011

### KevinL

Intuitively that answer makes a lot of sense. Of course, to minimize a cost function of this sort, production would need to be zero. And since we require production to be greater than zero, it's unbounded. So perhaps it would be wise to include the entire profit function and maximize that. Intuitively this should NOT have such a trivial answer.

f(x1,x2) = [a-(x1+y1)]*x1 + [a-(x2+y2)]*x2 + (x1+x2)*[1/(x1+x2+y1+y2-w)] s.t. x1+x2+y1+y2 < w

The first two terms represent revenue (production times price) and the cost function represents an increasing cost industry (not important mathematically). So all together we want to maximize profit.
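Before deriving any KKT conditions, a coarse grid search can confirm that this profit function has a non-trivial interior maximum. A sketch in Python, where the parameter values a = 10, w = 10, y1 = y2 = 1 are my own illustrative assumptions, not numbers from the thread:

```python
# Grid-search the profit function over the feasible region
# x1 + x2 + y1 + y2 < w, with illustrative parameter values.
a, w, y1, y2 = 10.0, 10.0, 1.0, 1.0

def profit(x1, x2):
    total = x1 + x2 + y1 + y2
    if total >= w:                      # enforce the strict inequality
        return float("-inf")
    revenue = (a - (x1 + y1)) * x1 + (a - (x2 + y2)) * x2
    cost = (x1 + x2) * (1.0 / (total - w))   # negative on the feasible set
    return revenue + cost

step = 0.05
grid = [i * step for i in range(int(8 / step))]   # 0.0 .. 7.95
best = max((profit(x1, x2), x1, x2) for x1 in grid for x2 in grid)
print(best)   # approximate maximizer; strictly interior, not a corner
```

The best point found is strictly inside the feasible region with positive production, so the full profit problem indeed avoids the trivial boundary answer of the pure cost-minimization version.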

4. Sep 11, 2011

### Office_Shredder

Staff Emeritus
KKT multipliers are exactly the same as Lagrange multipliers; they just aren't explained very well in that context.

Some of your constraints will be equalities and some will be strict inequalities at the extremum. All KKT says is that you get the Lagrange multiplier condition, but you only look at the constraints which are equalities at the maximum.
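This "enumerate which constraints are active" recipe can be shown on a toy problem of my own (not the thread's profit function): maximize f(x) = -(x - 2)^2 subject to g(x) = x - 1 <= 0.

```python
# Enumerate which constraints are "equalities at the maximum" (active).

def f(x):
    return -(x - 2.0) ** 2

candidates = []

# Case 1: g inactive. Plain stationarity f'(x) = -2(x - 2) = 0 gives x = 2,
# but g(2) = 1 > 0 violates the constraint, so this candidate is discarded.
x = 2.0
if x - 1.0 <= 0:
    candidates.append(x)

# Case 2: g active. Treat it as the equality x - 1 = 0, exactly as in a
# Lagrange problem: x = 1, and stationarity of L = f - u*g gives
# u = f'(1) = -2(1 - 2) = 2 >= 0, so the multiplier sign condition holds.
x = 1.0
u = -2.0 * (x - 2.0)
if u >= 0:
    candidates.append(x)

best = max(candidates, key=f)
print(best)  # -> 1.0
```

Only the active-constraint case survives, which is the whole content of the KKT recipe on this example.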

5. Sep 11, 2011

### Ray Vickson

The only differences between KKT and simple Lagrange are: (i) the signs of Lagrange multipliers of inequality constraints are restricted; that is, for a problem of the form max f subject to g <= 0, if the Lagrangian is written as L = f - u*g, then we need u >= 0. (ii) the derivatives of the Lagrangian need not be zero at x = 0 in a problem with sign constraints x >= 0. In fact, if the problem has the form max f subject to g <= 0, h = 0 and x >= 0, then the complete set of optimality conditions is:
(a) (d/dx)[f - u*g - v*h] <= 0, with "=" holding if x > 0;
(b) u >= 0;
(c) g <= 0;
(d) either u = 0 or g = 0;
(e) h = 0; and
(f) x >= 0.
[Note: the Lagrange multiplier v of the equality constraint is not sign-restricted.] The reason that inequality-constrained problems are harder to solve than equality-constrained ones is the "multiplier choice" aspect: either dL/dx = 0 or x = 0, and either u = 0 or g = 0 (and, of course, these choices apply separately to each g and each component of x).
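Conditions (a)-(f) can be verified mechanically at a candidate point. A small check in Python on an example of my own: max f(x) = 4x - x^2 subject to g(x) = x - 1 <= 0 and x >= 0 (there is no equality constraint h here, so (e) is vacuous):

```python
# Candidate point x = 1 with multiplier u = 2 for
# max f(x) = 4x - x**2  s.t.  g(x) = x - 1 <= 0,  x >= 0.
x, u = 1.0, 2.0

g = x - 1.0                          # inequality constraint value
dL = (4.0 - 2.0 * x) - u             # (d/dx)[f - u*g], using dg/dx = 1

tol = 1e-12
assert dL <= tol                     # (a): dL/dx <= 0 ...
assert x <= tol or abs(dL) <= tol    #      ... with "=" since x > 0
assert u >= 0                        # (b): multiplier sign
assert g <= tol                      # (c): feasibility
assert u <= tol or abs(g) <= tol     # (d): either u = 0 or g = 0
assert x >= 0                        # (f): sign constraint
print("all KKT conditions hold at x =", x)
```

Here the constraint is active (g = 0) and x > 0, so stationarity holds with equality: dL/dx = (4 - 2) - 2 = 0.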

So, how do you keep all these things straight in your head? Here are some hints.
(I) Form the Lagrangian so that at feasible points it is better than the objective; that is, in a max problem the Lagrangian should be >= the objective, and in a min problem it should be <=. So, if we have max f s.t. g <= 0, we want to *subtract* a positive multiple of g from f, so that the result is larger than f when g is < 0; that is, L = f - u*g where u >= 0. On the other hand, if the problem is min f s.t. g <= 0, we want to add a positive multiple of g (so we get something smaller than f), hence L = f + u*g, with u >= 0.

For equality constraints it does not matter which sign you choose, since the corresponding Lagrange multiplier can be > 0, < 0 or = 0. (However, later "Economic" interpretations in post-optimality analysis do depend on how you write the problem.)

(II) What about the derivatives of L? Think of a simple unconstrained problem such as "optimize f(x) subject to x >= 0". If the optimum occurs at x > 0 then, of course, df/dx = 0 at that point. If the problem is one of maximization and x = 0 is the solution, then we need df/dx <= 0 at x = 0 (f(x) must be going down as we increase x up from 0---so we don't want to increase x). If the problem is min f(x) and the solution is x = 0, then we need df/dx >= 0 at x = 0 (f(x) must be going up as we increase x up from 0---so we don't want to increase x). Now just apply these considerations to L instead of to f.
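Two one-line examples of my own make rule (II) concrete: maximizing f(x) = -(x + 1)^2 over x >= 0 puts the optimum at the boundary x = 0 with a non-positive slope, and minimizing f(x) = (x + 1)^2 over x >= 0 puts it at x = 0 with a non-negative slope.

```python
# Boundary-derivative rule (II) checked at x = 0 for both problems.

def dmax(x):                 # derivative of -(x + 1)**2
    return -2.0 * (x + 1.0)

def dmin(x):                 # derivative of (x + 1)**2
    return 2.0 * (x + 1.0)

assert dmax(0.0) <= 0        # max problem: f is falling as x leaves 0
assert dmin(0.0) >= 0        # min problem: f is rising as x leaves 0
print(dmax(0.0), dmin(0.0))  # -> -2.0 2.0
```

In both cases there is no incentive to move x up from 0, which is exactly what the sign conditions on dL/dx encode.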

I hope this de-mystifies some of the Greek of the Wiki article.

RGV