Lagrange Multipliers with Two Constraints

  • Thread starter keemosabi
  • #1

Main Question or Discussion Point

When doing a Lagrange multiplier problem with two constraints, why do you add the gradients of the two constraint functions and set the sum parallel to the gradient of the function to be maximized?

http://www.libraryofmath.com/pages/lagrange-multipliers-with-two-parameters/Images/lagrange-multipliers-with-two-parameters_gr_20.gif

If g and h are the two constraint functions, why would you add their gradients?
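(Presumably the linked image shows the standard two-constraint condition,
[tex]\nabla f = \lambda\,\nabla g + \mu\,\nabla h[/tex]
i.e. the gradient of f is a linear combination of the two constraint gradients, with [itex]\lambda[/itex] and [itex]\mu[/itex] the two multipliers.)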
 

Answers and Replies

  • #2
arildno
Science Advisor
Homework Helper
Gold Member
Dearly Missed
Let's say you are to find the extrema of f(x,y,z) under the constraints g(x,y,z)=0 AND h(x,y,z)=0. (For illustration of the argument I have regarded f, g, h as 3-variable functions; the argument holds equally well for an arbitrarily large number of free variables.)

Consider the five-variable function:
[tex]F(x,y,z,\gamma,\mu)=f-\gamma{g}-\mu{h}[/tex]

Note the following:
In the region of (x,y,z)-space where g=h=0, F is identically equal to f. Thus, whatever extrema F might have there will also be extrema of f!!

As with any other function, the critical points of F lie where the total derivative of F equals 0.

This yields, for the partial derivatives of F with respect to (x,y,z) (using [itex]\nabla[/itex] as the (x,y,z)-gradient):
[tex]\nabla{F}=\nabla{f}-\gamma\nabla{g}-\mu\nabla{h}=0[/tex]
The partial derivatives of F with respect to [itex]\gamma[/itex] and [itex]\mu[/itex] simply yield:
g=0 and h=0.
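
As a concrete illustration (an assumed example, not the one from the linked page): maximize f(x,y,z)=x+2y+3z subject to g(x,y,z)=x-y+z-1=0 and h(x,y,z)=x^2+y^2-1=0. The conditions above become
[tex](1,2,3)=\gamma(1,-1,1)+\mu(2x,2y,0),\qquad x-y+z=1,\qquad x^2+y^2=1[/tex]
The third component gives [itex]\gamma=3[/itex]; the first two then give [itex]x=-1/\mu[/itex] and [itex]y=5/(2\mu)[/itex], and substituting into [itex]x^2+y^2=1[/itex] yields [itex]\mu=\pm\sqrt{29}/2[/itex]. The constrained maximum of f works out to [itex]3+\sqrt{29}[/itex].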
 
  • #3
Thank you for the reply, but it seems as if something did not come out correctly. Do you think that you could please re-post?
 
  • #4
arildno
Science Advisor
Homework Helper
Gold Member
Dearly Missed
Yeah, I know, LaTeX doesn't work!

Here's the gist of it:

For a multivariable function (without constraints), we know that extrema will be where the gradient is 0.

This gives us the trick to find where the extrema must be in the case of constraints!

Now, letting L stand for one of the Lagrange multipliers and M for the other,
consider the function F(x,y,z,L,M)=f(x,y,z)-Lg(x,y,z)-Mh(x,y,z)

Clearly, on the region where g=h=0, F coincides with f, and hence F's extrema there must equal f's extrema there!

But, by the construction of F as f minus a linear combination of the constraint functions, F's critical points are forced to lie precisely within that region!

The partial derivative of F with respect to L gives the equation g=0 when we require the gradient of F to be zero.
Similarly, the partial derivative of F with respect to M gives the equation h=0.
The partial derivatives of F with respect to x, y and z yield the familiar gradient condition on f.
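
A minimal sketch of this recipe in sympy, using an illustrative choice of f, g, h (these particular functions are an assumption chosen just for the sketch, the same ones as in the worked example above):

[code]
# Sketch: solve the stationarity conditions of F(x,y,z,L,M) = f - L*g - M*h
# for an illustrative example:
#   f = x + 2y + 3z,   g = x - y + z - 1 = 0,   h = x^2 + y^2 - 1 = 0
import sympy as sp

x, y, z, L, M = sp.symbols('x y z L M', real=True)

f = x + 2*y + 3*z
g = x - y + z - 1
h = x**2 + y**2 - 1

F = f - L*g - M*h

# Setting all five partial derivatives of F to zero reproduces
# grad f = L*grad g + M*grad h  together with  g = 0 and h = 0.
equations = [sp.diff(F, v) for v in (x, y, z, L, M)]
solutions = sp.solve(equations, (x, y, z, L, M), dict=True)

for s in solutions:
    print(s, '->  f =', sp.simplify(f.subs(s)))
[/code]

The two solutions it prints are the constrained maximum and minimum, [itex]3\pm\sqrt{29}[/itex].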
 
  • #5
I think I understand what you're saying. The one thing I'm wondering about: my textbook has a few example problems, so I worked backwards and plugged the answers (x,y,z) into my gradient equations, and I did not get 0. Shouldn't the gradient be 0 when I have reached an extremum?

Also, is our goal to get the two constraint functions times the Lagrange multipliers equal to each other, so that they cancel out in the equation? That would leave us with the familiar condition that the gradient of F equals the gradient of f.
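
A quick numerical sanity check (a sketch assuming the same illustrative example as in the code above): at the constrained extremum the gradient of f itself is not zero; only the gradient of the five-variable F vanishes, with [itex]\nabla f[/itex] equal to a combination of the two constraint gradients.

[code]
# Numerical check at the constrained maximum of the illustrative example:
# grad f is NOT the zero vector there, but grad f - L*grad g - M*grad h is,
# and both constraints are satisfied.
import numpy as np

s = np.sqrt(29)
x, y = -2/s, 5/s
z = 1 - x + y              # from the constraint g = 0
L, M = 3.0, s/2            # multiplier values found above

grad_f = np.array([1.0, 2.0, 3.0])
grad_g = np.array([1.0, -1.0, 1.0])
grad_h = np.array([2*x, 2*y, 0.0])

print(grad_f)                           # not zero
print(grad_f - L*grad_g - M*grad_h)     # ~ [0, 0, 0]
print(x - y + z - 1, x**2 + y**2 - 1)   # both ~ 0
[/code]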
 
