How Do You Find the Optima of a Function Under Constraints?

  • Thread starter: Inertigratus
  • Tags: Function
Inertigratus

Homework Statement


Find the optima of ##f(x, y)## subject to the constraints ##g(x, y) = 3## and ##x > 0##.

Homework Equations


##f(x, y) = x + y##
##g(x, y) = x^2 + xy + y^2##
##\nabla f(x, y) = (1, 1)##
##\nabla g(x, y) = (2x + y, 2y + x)##

The Attempt at a Solution


##\nabla f(x, y) = \lambda \nabla g(x, y)## gives ##1 = \lambda(2x + y)## and ##1 = \lambda(2y + x)##, so ##2x + y = 2y + x \Leftrightarrow x = y##. The constraint then becomes ##g(x, x) = 3x^2 = 3 \Leftrightarrow x = \pm 1##; since ##x > 0##, ##x = y = 1## and ##f(1, 1) = 2##.
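As a quick check of that algebra, here is a symbolic sketch (Python with sympy is my assumption; nothing in the thread uses it):

```python
# Symbolic check of the Lagrange condition grad f = lambda * grad g
# together with the constraint g(x, y) = 3.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y
g = x**2 + x*y + y**2

eqs = [
    sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),  # 1 = lambda*(2x + y)
    sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),  # 1 = lambda*(2y + x)
    sp.Eq(g, 3),                                # the constraint g(x, y) = 3
]
for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print(sol, '-> f =', f.subs(sol))
# Prints the two critical points (1, 1) and (-1, -1); only (1, 1) has
# x > 0, and f(1, 1) = 2 there.
```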

Now, how do I get the minimum...?
The minimum is supposed to be ##-\sqrt{3}##.
 
Inertigratus said:
Now, how do I get the minimum...?
The minimum is supposed to be ##-\sqrt{3}##.

Be VERY careful in stating the problem. The problem with ##x > 0## has NO MINIMUM; however, the problem with ##x \geq 0## does have a minimum. Never, never, never write optimization problems with strict inequalities unless you absolutely have to. (Simplest example: minimizing ##x## subject to ##x > 0## has no solution, while minimizing ##x## subject to ##x \geq 0## does.)

If you have covered the Karush-Kuhn-Tucker conditions, you can use them. (These are generalizations of Lagrange multipliers that handle inequality constraints.) However, in this case you can get away without them: a solution either satisfies ##x > 0## (so you get the ordinary Lagrange solution) or satisfies ##x = 0##. In the latter case you need ##\partial L/\partial y = 0##, together with ##\partial L/\partial x \leq 0## for a maximum or ##\partial L/\partial x \geq 0## for a minimum.
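For a numeric illustration of the inequality-constrained version, here is a sketch using scipy.optimize (my choice of tool, not something from the thread); the solver handles the inequality constraint for you:

```python
# Minimize/maximize f(x, y) = x + y on g(x, y) = 3 with x >= 0.
# Note the solver needs the closed constraint x >= 0, not the open x > 0.
from scipy.optimize import minimize

f = lambda v: v[0] + v[1]
cons = [
    {'type': 'eq',   'fun': lambda v: v[0]**2 + v[0]*v[1] + v[1]**2 - 3.0},
    {'type': 'ineq', 'fun': lambda v: v[0]},   # x >= 0
]

res_min = minimize(f, [0.5, -1.0], constraints=cons)                # minimum
res_max = minimize(lambda v: -f(v), [0.5, 1.0], constraints=cons)   # maximum
print(res_min.x, res_min.fun)    # approx (0, -1.732), f = -sqrt(3)
print(res_max.x, -res_max.fun)   # approx (1, 1), f = 2
```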

RGV
 
You used Lagrange multipliers to find the critical points; there was only one, at ##(1, 1)##.
Optima necessarily occur either at critical points or on the boundary of the region under consideration. Two boundary points satisfy the constraint ##g(x, y) = 3##: ##(0, \sqrt{3})## and ##(0, -\sqrt{3})##, with ##f(0, \sqrt{3}) = \sqrt{3}## and ##f(0, -\sqrt{3}) = -\sqrt{3}##. The maximum is ##\max\{f(1,1),\, f(0,\sqrt{3}),\, f(0,-\sqrt{3})\} = 2## and the minimum is ##\min\{f(1,1),\, f(0,\sqrt{3}),\, f(0,-\sqrt{3})\} = -\sqrt{3}##.
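In code, that final comparison is just a value check over the three candidates (standard library only; a sketch, not part of the original post):

```python
# Compare f at the interior critical point and the two boundary points.
from math import sqrt

f = lambda x, y: x + y
candidates = [(1, 1), (0, sqrt(3)), (0, -sqrt(3))]
print(max(f(x, y) for x, y in candidates))  # 2 (the maximum)
print(min(f(x, y) for x, y in candidates))  # -1.732... = -sqrt(3) (the minimum)
```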
 
Ray Vickson said:
Be VERY careful in stating the problem. The problem with ##x > 0## has NO MINIMUM; however, the problem with ##x \geq 0## does have a minimum.

Ohh, sorry, you're right. It's supposed to be ##x \geq 0##; maybe I even missed that myself.
I thought that was just a restriction and not actually the boundary.
So the derivative of the Lagrange multiplier can be used when either variable is zero?

upsidedowntop said:
Optima necessarily occur either at critical points or on the boundary of the region under consideration.

Right, I totally missed that. Thanks to both!
 
Inertigratus said:
So the derivative of the Lagrange multiplier can be used when either variable is zero?

Yes, the derivative of the Lagrangian (NOT the Lagrange multiplier) can be used when the variable is zero, but the derivative need not be zero there: it should be ##\geq 0## for a minimum or ##\leq 0## for a maximum. As I said, these are the so-called Karush-Kuhn-Tucker conditions. See, e.g., http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions .
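To make the sign condition concrete, here is a small sympy sketch (sympy is my assumption) that solves ##\partial L/\partial y = 0## for ##\lambda## at each boundary point and then checks the sign of ##\partial L/\partial x##:

```python
# Boundary check at x = 0 for L = f - lambda*(g - 3): solve dL/dy = 0
# for lambda, then inspect the sign of dL/dx.
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = (x + y) - lam * (x**2 + x*y + y**2 - 3)

for y0 in (sp.sqrt(3), -sp.sqrt(3)):
    lam0 = sp.solve(sp.diff(L, y).subs({x: 0, y: y0}), lam)[0]
    dLdx = sp.diff(L, x).subs({x: 0, y: y0, lam: lam0})
    print(y0, dLdx)
# dL/dx = 1/2 >= 0 at both points, so both pass the first-order test for
# a minimum (and neither passes the test for a maximum); comparing
# f-values then picks out (0, -sqrt(3)) with f = -sqrt(3).
```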

RGV
 