# Find optima of function

1. Aug 13, 2011

### Inertigratus

1. The problem statement, all variables and given/known data
Find the optima of $f(x, y)$ which satisfy the equation $g(x, y) = 3$ and $x > 0$.

2. Relevant equations
$f(x, y) = x + y$
$g(x, y) = x^2 + xy + y^2$
$\nabla f(x, y) = (1, 1)$
$\nabla g(x, y) = (2x + y, 2y + x)$

3. The attempt at a solution
$\nabla f(x, y) = \lambda \nabla g(x, y) \iff 1 = \lambda(2x + y),\ 1 = \lambda(2y + x) \implies x = y$.
Then $g(x, x) = 3x^2 = 3 \iff x = \pm 1$, so with $x > 0$ we get $x = y = 1$ and $f(1, 1) = 2$.

Now, how do I get the minimum...?
The minimum is supposed to be $-\sqrt{3}$.
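For what it's worth, the Lagrange step above can be checked mechanically with a short SymPy script (a sketch, assuming SymPy is available; the variable names are my own):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x + y
g = x**2 + x*y + y**2

# Stationarity (grad f = lam * grad g) together with the constraint g = 3
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),   # 1 - lam*(2x + y) = 0
       sp.diff(f, y) - lam * sp.diff(g, y),   # 1 - lam*(x + 2y) = 0
       g - 3]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Two interior critical points: (1, 1) with f = 2 and (-1, -1) with f = -2;
# only (1, 1) satisfies x > 0.
for s in sols:
    print(s[x], s[y], f.subs(s))
```

Note that the Lagrange system itself also produces $(-1, -1)$ with $f = -2$; it is only the sign restriction on $x$ that throws it out, which is why the minimum has to come from somewhere else.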

2. Aug 13, 2011

### Ray Vickson

Be VERY careful in stating the problem. The problem with $x > 0$ has NO MINIMUM; however, the problem with $x \ge 0$ does have a minimum. Never, never, never write optimization problems with strict inequalities unless you absolutely have to. (Simplest example: minimize $x$ subject to $x > 0$ has no solution, while minimize $x$ subject to $x \ge 0$ does.)

If you have covered the Karush-Kuhn-Tucker conditions you can use them. (These are generalizations of Lagrange multipliers that handle inequality constraints.) However, in this case you can get away without them: a solution either satisfies $x > 0$ (so you get the ordinary Lagrange solution) or else satisfies $x = 0$. In the latter case you need $\partial L/\partial y = 0$, together with $\partial L/\partial x \le 0$ for a maximum or $\partial L/\partial x \ge 0$ for a minimum.
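That boundary test can be carried out symbolically; here is a sketch in SymPy (my own variable names, assuming SymPy is installed), which forms the Lagrangian, solves the $x = 0$ case, and inspects the sign of $\partial L/\partial x$:

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x + y
g = x**2 + x*y + y**2
L = f - lam * (g - 3)          # Lagrangian

# Boundary case x = 0: solve dL/dy = 0 together with g = 3,
# then inspect the sign of dL/dx at each solution.
eqs = [sp.diff(L, y).subs(x, 0), (g - 3).subs(x, 0)]
sols = sp.solve(eqs, [y, lam], dict=True)
for s in sols:
    dLdx = sp.simplify(sp.diff(L, x).subs(x, 0).subs(s))
    print(s[y], dLdx, f.subs({x: 0, y: s[y]}))
```

Both boundary points $(0, \pm\sqrt{3})$ give $\partial L/\partial x = 1/2 \ge 0$, so both pass the minimum test (and neither passes the maximum test); comparing $f$ at the two candidates then picks out $f(0, -\sqrt{3}) = -\sqrt{3}$ as the minimum.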

RGV

3. Aug 13, 2011

### upsidedowntop

You used Lagrange multipliers to find the critical points; there was only one, at $(1, 1)$.
Optima necessarily occur either at critical points or on the boundary of the region under consideration. Two boundary points satisfy the constraint $g(x, y) = 3$: $(0, \sqrt{3})$ and $(0, -\sqrt{3})$. The maximum is $\max\{f(1, 1), f(0, \sqrt{3}), f(0, -\sqrt{3})\} = 2$ and the minimum is $\min\{f(1, 1), f(0, \sqrt{3}), f(0, -\sqrt{3})\} = -\sqrt{3}$.
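The comparison over the three candidates is easy to sanity-check numerically; here is a minimal sketch in plain Python (helper names are my own):

```python
import math

def f(x, y):
    return x + y

def g(x, y):
    return x**2 + x*y + y**2

r3 = math.sqrt(3)
candidates = [(1.0, 1.0), (0.0, r3), (0.0, -r3)]

# Sanity check: every candidate lies on the constraint g = 3 with x >= 0.
assert all(abs(g(x, y) - 3) < 1e-12 and x >= 0 for x, y in candidates)

values = [f(x, y) for x, y in candidates]
print(max(values))   # maximum of f over the candidates: 2
print(min(values))   # minimum of f over the candidates: -sqrt(3)
```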

4. Aug 14, 2011

### Inertigratus

Ohh, sorry, you're right. It's supposed to be $x \ge 0$; maybe I even missed that myself.
I thought that was just more of a restriction and not actually the boundary.
So the derivative of the Lagrange multiplier can be used when either variable is zero?

Right, I totally missed that. Thanks to both!

5. Aug 14, 2011

### Ray Vickson

Yes, the derivative of the Lagrangian (NOT the Lagrange multiplier) can be used when the variable is zero, but the derivative need not be zero there. However, it should be $\ge 0$ for a minimum or $\le 0$ for a maximum. As I said, you need the so-called Karush-Kuhn-Tucker conditions. See, e.g., http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions .

RGV