Lagrange Multipliers with Two Constraints

  1. Apr 25, 2009 #1
    Why, when doing Lagrange multipliers with two constraints, do you add the gradients of the two constraint functions and set the sum parallel to the gradient of the function to be maximized?

    http://www.libraryofmath.com/pages/lagrange-multipliers-with-two-parameters/Images/lagrange-multipliers-with-two-parameters_gr_20.gif

    If g and h are the two constraint functions, why would you add their gradients?
     
  3. Apr 26, 2009 #2

    arildno


    Let's say you are to find the extrema of f(x,y,z) under the constraints g(x,y,z)=0 AND h(x,y,z)=0 (for illustration of the argument I have regarded f, g, h as 3-variable functions; the argument holds equally well for an arbitrarily large number of free variables).

    Consider the five-variable function:
    [tex]F(x,y,z,\gamma,\mu)=f-\gamma{g}-\mu{h}[/tex]

    Note the following:
    In the region of (x,y,z)-space where g=h=0, F is identically equal to f. Thus, whatever extrema F might have there will also be extrema of f!

    As with any other function, the critical points of F lie where the total derivative of F equals 0.

    This yields, for the partial derivatives of F with respect to (x,y,z) (using [itex]\nabla[/itex] as the (x,y,z)-gradient):
    [tex]\nabla{F}=\nabla{f}-\gamma\nabla{g}-\mu\nabla{h}=0[/tex]
    The partial derivatives of F with respect to [itex]\gamma[/itex] and [itex]\mu[/itex] simply yield:
    g=0 and h=0.
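
    As a concrete illustration (a standard plane-meets-cylinder example, not the one from the original question): extremize f(x,y,z)=z subject to g(x,y,z)=x+y+z-1=0 and h(x,y,z)=x^2+y^2-1=0. Rearranging the gradient equation above:
    [tex]\nabla{f}=\gamma\nabla{g}+\mu\nabla{h}\quad\Rightarrow\quad(0,0,1)=\gamma(1,1,1)+\mu(2x,2y,0)[/tex]
    The third component forces [itex]\gamma=1[/itex], the first two give [itex]x=y=-1/(2\mu)[/itex], and substituting into h=0 gives [itex]\mu=\pm 1/\sqrt{2}[/itex], hence [itex]x=y=\mp 1/\sqrt{2}[/itex] and [itex]z=1\pm\sqrt{2}[/itex]. Both points satisfy g=0 and h=0, and z attains its maximum [itex]1+\sqrt{2}[/itex] and its minimum [itex]1-\sqrt{2}[/itex] there.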
     
  4. Apr 26, 2009 #3
    Thank you for the reply, but it seems as if something did not come out correctly. Do you think that you could please re-post?
     
  5. Apr 26, 2009 #4

    arildno


    Yeah, I know, LaTeX doesn't work!

    Here's the gist of it:

    For a multivariable function (without constraints), we know that extrema will be where the gradient is 0.

    This gives us the trick to find where the extrema must be in the case of constraints!

    Now, letting L stand for one of the Lagrange multipliers and M for the other,
    consider the function F(x,y,z,L,M)=f(x,y,z)-Lg(x,y,z)-Mh(x,y,z)

    Clearly, on the region where g=h=0, F coincides with f, and hence F's extrema there must equal f's extrema there!

    But, by the clever construction (we subtracted a linear combination of the constraint functions), F's extrema are forced to lie precisely within that region!

    The partial derivative of F with respect to L gives the equation g=0 when we require the gradient of F to be zero.
    Similarly, the partial derivative of F with respect to M gives the equation h=0.
    The partial derivatives of F with respect to x, y and z give the familiar gradient condition on f: the gradient of f must equal L times the gradient of g plus M times the gradient of h, which is exactly the sum of scaled constraint gradients you asked about.
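
    If you want to grind through such a system mechanically, here is a minimal sketch using the sympy library (the particular f, g, h are just an illustrative plane/cylinder choice, not from the original question): build F, set all five partial derivatives to zero, and solve.

[code]
# Minimal sketch (illustrative f, g, h only): critical points of
# F(x, y, z, L, M) = f - L*g - M*h satisfy g = 0, h = 0 and
# grad f = L*grad g + M*grad h.
import sympy as sp

x, y, z, L, M = sp.symbols('x y z L M', real=True)

f = z                   # function to extremize
g = x + y + z - 1       # first constraint (a plane)
h = x**2 + y**2 - 1     # second constraint (a cylinder)

F = f - L*g - M*h

# All five partial derivatives of F, each required to vanish.
eqs = [sp.diff(F, v) for v in (x, y, z, L, M)]

for sol in sp.solve(eqs, (x, y, z, L, M), dict=True):
    print(sol, ' f =', f.subs(sol))
[/code]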
     
  6. Apr 26, 2009 #5
    I think I understand what you're saying. The one thing I'm wondering about: my textbook has a few example problems, so I worked backwards and plugged the answer points (x,y,z) into my gradient functions and did not get 0. Shouldn't the gradient be 0 when I have reached an extremum?

    Also, is our goal to get the two constraint functions times the Lagrange multipliers to cancel out of the equation? That would leave us with the familiar condition that the gradient of F equals the gradient of f.
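
    (A quick numerical check against the illustrative plane/cylinder example above, rather than the textbook problems: at a constrained extremum the gradient of f by itself is generally not zero; it is the combination grad f - L*grad g - M*grad h, i.e. the (x,y,z)-part of the gradient of F, that vanishes.)

[code]
# Check at the constrained maximum of the illustrative example
# f = z, g = x + y + z - 1, h = x**2 + y**2 - 1:
# the maximum sits at x = y = -1/sqrt(2), z = 1 + sqrt(2),
# with multipliers L = 1 and M = 1/sqrt(2).
import numpy as np

x = y = -1 / np.sqrt(2)
L, M = 1.0, 1 / np.sqrt(2)

grad_f = np.array([0.0, 0.0, 1.0])
grad_g = np.array([1.0, 1.0, 1.0])
grad_h = np.array([2 * x, 2 * y, 0.0])

print(grad_f)                            # not the zero vector
print(grad_f - L * grad_g - M * grad_h)  # ~ [0, 0, 0]
[/code]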
     