How Do You Find the Optima of a Function Under Constraints?


Inertigratus

Homework Statement


Find the optima of [itex]f(x, y)[/itex] which satisfy the equation [itex]g(x, y) = 3[/itex] and [itex]x > 0[/itex].

Homework Equations


[itex]f(x, y) = x + y[/itex]
[itex]g(x, y) = x^2 + xy + y^2[/itex]
[itex]\nabla f(x, y) = (1, 1)[/itex]
[itex]\nabla g(x, y) = (2x + y, 2y + x)[/itex]

The Attempt at a Solution


[itex]\nabla f(x, y) = \lambda \nabla g(x, y) \Leftrightarrow x = y \Rightarrow f(x, x) = 2x[/itex], and [itex]g(x, x) = 3x^2 = 3 \Leftrightarrow x = \pm 1[/itex]. Since [itex]x > 0[/itex], this gives [itex]x = y = 1[/itex] and [itex]f(1, 1) = 2[/itex].

Now, how do I find the minimum?
The minimum is supposed to be [itex]-\sqrt{3}[/itex].
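The Lagrange system above can be checked symbolically. Here is a minimal sketch using sympy (the variable names are just for illustration):

```python
# Solve grad f = lam * grad g together with the constraint g(x, y) = 3.
from sympy import symbols, solve

x, y, lam = symbols('x y lam', real=True)

eqs = [
    1 - lam * (2*x + y),        # df/dx = lam * dg/dx
    1 - lam * (2*y + x),        # df/dy = lam * dg/dy
    x**2 + x*y + y**2 - 3,      # the constraint g(x, y) = 3
]
sols = solve(eqs, [x, y, lam])  # yields (x, y) = (1, 1) and (-1, -1)

# Keep only the solutions with x > 0.
feasible = [(sx, sy) for sx, sy, _ in sols if sx > 0]
print(feasible)  # [(1, 1)], where f(1, 1) = 2
```

This only recovers the interior critical point; the boundary case x = 0 still has to be handled separately.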
 
Inertigratus said:

Now, how do I find the minimum? The minimum is supposed to be [itex]-\sqrt{3}[/itex].

Be VERY careful in stating the problem. The problem with x > 0 has NO MINIMUM; however, the problem with x >= 0 does have a minimum. Never, never, never write optimization problems with strict inequalities unless you absolutely have to. (Simplest example: minimize x subject to x > 0 has no solution, while minimize x subject to x >= 0 does have a solution.)

If you have covered the Karush-Kuhn-Tucker (KKT) conditions you can use them. (These generalize the Lagrange conditions to handle inequality constraints.) However, in this case you can get away without them: a solution either satisfies x > 0 (so you get the ordinary Lagrange solution) or x = 0. In the latter case you need dL/dy = 0, together with dL/dx <= 0 for a maximum or dL/dx >= 0 for a minimum.

RGV
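That boundary test can be sketched numerically. Below is a minimal check (not from the thread; the helper name is made up) using the Lagrangian L = x + y - lam*(x^2 + x*y + y^2 - 3): at x = 0, solve dL/dy = 0 for lam, then inspect the sign of dL/dx.

```python
# At a boundary point x = 0, solve dL/dy = 0 for lam, then inspect the
# sign of dL/dx, where L = x + y - lam*(x**2 + x*y + y**2 - 3).
import math

def dL_dx_at_boundary(y):
    x = 0.0
    lam = 1.0 / (2*y + x)          # from dL/dy = 1 - lam*(2*y + x) = 0
    return 1.0 - lam * (2*x + y)   # dL/dx at (0, y)

for y in (math.sqrt(3), -math.sqrt(3)):
    print(y, dL_dx_at_boundary(y))
# dL/dx = 0.5 at both points, so neither passes the maximum test
# dL/dx <= 0, while both pass the minimum test dL/dx >= 0; comparing
# f-values then singles out (0, -sqrt(3)) with f = -sqrt(3).
```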
 
You used Lagrange multipliers to find the critical points. There was only one, at (1, 1).
Optima necessarily occur either at critical points or on the boundary of the region under consideration. Two boundary points satisfy the constraint [itex]g(x, y) = 3[/itex]: [itex](0, \sqrt{3})[/itex] and [itex](0, -\sqrt{3})[/itex]. The maximum is [itex]\max\{f(1, 1), f(0, \sqrt{3}), f(0, -\sqrt{3})\} = 2[/itex] and the minimum is [itex]\min\{f(1, 1), f(0, \sqrt{3}), f(0, -\sqrt{3})\} = -\sqrt{3}[/itex].
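Comparing the three candidate values can be done in a few lines of plain Python; this is just a numeric restatement of the comparison above:

```python
# Evaluate f at the interior critical point and the two boundary points,
# then take the largest and smallest values.
import math

def f(x, y):
    return x + y

candidates = [(1.0, 1.0), (0.0, math.sqrt(3)), (0.0, -math.sqrt(3))]
values = [f(px, py) for px, py in candidates]
print(max(values))  # 2.0, attained at (1, 1)
print(min(values))  # about -1.732, i.e. -sqrt(3), attained at (0, -sqrt(3))
```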
 
Ray Vickson said:
Be VERY careful in stating the problem. The problem with x > 0 has NO MINIMUM; however, the problem with x >= 0 does have a minimum.

Ohh, sorry, you're right. It's supposed to be x >= 0; maybe I even missed that myself.
I thought that was just a restriction and not actually the boundary.
So the derivative of the Lagrange multiplier can be used when either variable is zero?

upsidedowntop said:
You used Lagrange multipliers to find the critical points. There was only one which was at (1,1).

Right, I totally missed that. Thanks to both!
 
Inertigratus said:
So the derivative of the Lagrange multiplier can be used when either variable is zero?

Yes, the derivative of the Lagrangian (NOT the Lagrange multiplier) can be used when the variable is zero, but the derivative need not be zero there. However, it should be >= 0 for a minimum or <= 0 for a maximum. As I said, you need the so-called Karush-Kuhn-Tucker conditions. See, e.g., http://en.wikipedia.org/wiki/Karush–Kuhn–Tucker_conditions .

RGV
 
