
Help with Lagrange multipliers

  1. Jul 11, 2004 #1
    A bit of a tough one!

    Find the maximum of [tex]ln x + ln y + 3 ln z[/tex] on part of the sphere [tex]x^2 + y^2 + z^2 = 5r^2[/tex] where x>0, y>0 and z>0.

    I know I need to use Lagrange multipliers but how should I go about it? Any help would be appreciated thanks!!!
     
    Last edited: Jul 12, 2004
  3. Jul 11, 2004 #2
    Why won't my LaTeX graphics show?
     
  4. Jul 12, 2004 #3

    Galileo

    Science Advisor
    Homework Helper

    The LaTeX graphics are showing fine for me.

    About using Lagrange multipliers:
    Let [tex]G=\ln x + \ln y +3\ln z - \lambda(x^2 +y^2 +z^2)[/tex]
    Set
    [tex]\frac{\partial G}{\partial x}, \frac{\partial G}{\partial y}, \frac{\partial G}{\partial z}[/tex] equal to zero.
    Together with [tex]x^2+y^2+z^2=5r^2[/tex] these are four equations in four unknowns. Generally, solving such equations isn't easy, but in this case it isn't so bad: write [tex]x^2, y^2[/tex] and [tex]z^2[/tex] as functions of [tex]\lambda[/tex] and use the constraint equation to solve for [tex]\lambda[/tex].
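    Carrying that through here (a sketch of the algebra): the three partial-derivative equations are
    [tex]\frac{1}{x}=2\lambda x, \quad \frac{1}{y}=2\lambda y, \quad \frac{3}{z}=2\lambda z[/tex]
    so [tex]x^2=y^2=\frac{1}{2\lambda}[/tex] and [tex]z^2=\frac{3}{2\lambda}[/tex]. The constraint then gives [tex]\frac{5}{2\lambda}=5r^2[/tex], i.e. [tex]\lambda=\frac{1}{2r^2}[/tex], hence [tex]x=y=r[/tex] and [tex]z=\sqrt{3}\,r[/tex] (taking positive roots), with maximum value [tex]5\ln r+\tfrac{3}{2}\ln 3[/tex].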
     
  5. Jul 13, 2004 #4
    I also have a question about Lagrange multipliers: what is the underlying principle behind how they work?

    I know it has something to do with the fact that the constraint and the maximum "contour" intersect at a point where they share a tangent line and their normal lines are the same (similar to indifference curves and budget constraints in economics, I suppose). Could anyone elaborate on that, though? I'm not sure of the details.
     
  6. Jul 13, 2004 #5
    Yeah, Lagrange multipliers are very useful in applications such as trigonometric optimization. You might want to see

    http://ndp.jct.ac.il/tutorials/Infitut2/node33.html

    http://www-math.mit.edu/~djk/18_022/chapter04/section03.html

    http://mathworld.wolfram.com/LagrangeMultiplier.html

    for more information about the principles involved (and of course Google in general). A knowledge of vector calculus helps in understanding them.
     
  7. Jul 14, 2004 #6

    HallsofIvy

    Staff Emeritus
    Science Advisor


    The simplest way to look at Lagrange multipliers is to look at the problem geometrically. Given a function F(x,y) and a curve C, what point on C maximizes F? Since the gradient of any function always points in the direction of fastest increase, we could start at any point and "follow the gradient" to find the maximum value. Of course, since we are constrained to C, we can't actually do that. What we can do is move in the direction along C closest to the direction of the gradient.

    When can we NOT do that? When the gradient is exactly perpendicular to C! Of course, if C is defined by a function g(x,y)= constant, its gradient is also perpendicular to C so the condition is that the two gradients are parallel: one is a multiple (the Lagrange Multiplier) of the other.
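    To make that concrete, here is a minimal numerical check in Python (a sketch assuming NumPy, using the maximizer [tex]x=y=r,\ z=\sqrt{3}r[/tex] worked out earlier in the thread): if the two gradients are parallel, every componentwise ratio equals the same multiplier.

    [code]
    import numpy as np

    r = 2.0                                    # any positive radius (arbitrary choice)
    x, y, z = r, r, np.sqrt(3) * r             # constrained maximizer from the thread

    grad_f = np.array([1 / x, 1 / y, 3 / z])   # gradient of ln x + ln y + 3 ln z
    grad_g = np.array([2 * x, 2 * y, 2 * z])   # gradient of x^2 + y^2 + z^2

    # Parallel gradients: all componentwise ratios equal lambda = 1/(2 r^2).
    print(grad_f / grad_g)                     # -> [0.125 0.125 0.125]
    [/code]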
     
  8. Jul 14, 2004 #7

    arildno

    Science Advisor
    Homework Helper
    Gold Member
    Dearly Missed

    Here's another simple argument about Lagrange multipliers:

    Let [tex]f(x_{1},x_{2},...,x_{n})[/tex] be the function we want to minimize under the conditions [tex]G_{i}(x_{1},...,x_{n})=0, \quad i=1,...,k, \quad k<n[/tex]

    Construct the function F:
    [tex]F(x_{1},...,x_{n},\lambda_{1},...,\lambda_{k})=f+\sum_{i=1}^{k}\lambda_{i}G_{i}[/tex]

    A couple of observations:
    1) As usual, F's stationary points are found by requiring its gradient to be zero there.
    That is, we have the k+n equations determining F's stationary points:
    [tex]\frac{\partial{F}}{\partial{x}_{i}}=0,i=1,..,n, \frac{\partial{F}}{\partial{\lambda}_{i}}=0,i=1,..,k[/tex]

    2) f is simply F (or close enough, anyway) when we restrict our attention to the part of F's domain determined by having the G's equal to zero!

    Surely, F's stationary points do not move about simply because we restrict attention to a particular region of its domain!
    In particular, any stationary point of F that happens to be in the restricted region must surely be a stationary point for the function that is "F restricted to that region" (that is, f).

    The equations given in 1) are the ones you're after.
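    As an illustration (a SymPy sketch, assuming SymPy is available), here is this construction for the thread's problem, with n=3, k=1 and [tex]G_{1}=x^2+y^2+z^2-5r^2[/tex]:

    [code]
    import sympy as sp

    x, y, z, r = sp.symbols('x y z r', positive=True)
    lam = sp.Symbol('lam', real=True)

    # F = f + lambda*G_1 with f = ln x + ln y + 3 ln z
    F = sp.log(x) + sp.log(y) + 3 * sp.log(z) + lam * (x**2 + y**2 + z**2 - 5 * r**2)

    # The n + k = 4 equations from observation 1):
    eqs = [sp.diff(F, v) for v in (x, y, z, lam)]
    print(sp.solve(eqs, [x, y, z, lam], dict=True))
    # expect x = r, y = r, z = sqrt(3)*r, lam = -1/(2*r**2)
    [/code]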
     
    Last edited: Jul 14, 2004
  9. Jul 14, 2004 #8
    It is easier if you consider two cases: first, constrained optimization with a single constraint, and second, constrained optimization with multiple constraints. We're really looking for an optimal solution to the problem: a set of values for the variables, which gives us the exact coordinates (an n-tuple) of the maximum or minimum point.

    So what exactly is a constraint? Simply put, it is a relation among the variables which restricts the family of acceptable solutions to those relevant to the problem at hand.

    Suppose f and g are two continuously differentiable functions of two variables and C is a fixed constant value of g; then we can write

    [tex]
    g(x, y) = C \qquad
    [/tex]

    f is also called the objective function. Note that at a stationary point, for a small displacement [tex](\delta x, \delta y)[/tex] along the constraint,

    [tex]
    \frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y = 0
    [/tex]
    [tex]
    \frac{\partial g}{\partial x}\delta x + \frac{\partial g}{\partial y}\delta y = 0
    [/tex]

    The trick then is the introduction of a scalar quantity [tex]\lambda[/tex], which we call the Lagrange multiplier. We take the first equation above and add to it [tex]\lambda[/tex] times the second equation, getting as a result another equation:

    [tex]
    (\frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x}) \delta x + (\frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y}) \delta y = 0
    [/tex]

    Now suppose that we choose the parameter [tex]\lambda[/tex] such that the term in the first bracket on the left hand side is equal to zero, that is

    [tex]
    \frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x} = 0
    [/tex]

    This then implies that the term in the second bracket is also zero (since [tex]\delta y[/tex] can be varied independently and is not zero). This gives another condition, namely

    [tex]
    \frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y} = 0
    [/tex]

    Now we have three (2 + 1) equations to solve for the optimal solution [tex](x_{0}, y_{0})[/tex], which is a solution to each of these equations. The third equation, evidently, is the constraint g = C. We need to reintroduce it to ensure a particular solution and not an arbitrary one.
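    A tiny hypothetical example to make this concrete: maximize [tex]f(x,y)=xy[/tex] subject to [tex]g(x,y)=x+y=2[/tex]. The two bracket conditions read [tex]y+\lambda=0[/tex] and [tex]x+\lambda=0[/tex], so [tex]x=y=-\lambda[/tex]; the constraint then gives [tex]x=y=1[/tex], [tex]\lambda=-1[/tex], and the maximum is [tex]f=1[/tex].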

    More generally if

    [tex]
    x_{k} \qquad \forall \qquad 1\leq k \leq n
    [/tex]

    are n unknowns and

    [tex]
    g_{j}(x_{1},x_{2},x_{3},....,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m
    [/tex]

    are m constraints, then we get a system of (n+m) equations (and evidently there are m multipliers, [tex]\lambda_{1}, \lambda_{2},.....,\lambda_{m}[/tex]).

    [tex]
    \frac{\partial{f}}{\partial{x_k}} + \sum_{j=1}^m \lambda_j
    \frac{\partial{g_j}}{\partial{x_k}} = 0 \qquad \forall \qquad
    1\leq k \leq n
    [/tex]
    [tex]
    g_{j}(x_{1},x_{2},x_{3},....,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m
    [/tex]
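    And a quick illustration of the general system with n=3 unknowns and m=2 constraints (a SymPy sketch; the objective and constraints here are made up for illustration):

    [code]
    import sympy as sp

    x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)

    f = x * y * z                   # hypothetical objective
    g1 = x + y + z - 6              # hypothetical constraint g_1 = C_1 with C_1 = 6
    g2 = x - y                      # hypothetical constraint g_2 = C_2 with C_2 = 0

    # n + m = 5 equations: three stationarity conditions plus the two constraints
    L = f + l1 * g1 + l2 * g2
    eqs = [sp.diff(L, v) for v in (x, y, z)] + [g1, g2]
    print(sp.solve(eqs, [x, y, z, l1, l2], dict=True))
    # expect the stationary points (0, 0, 6) and (2, 2, 2); the latter is the local maximum
    [/code]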
     
    Last edited: Aug 17, 2004
  10. Jul 17, 2004 #9
    Err, don't we have to include the full constraint in the Lagrangian before taking partial derivatives? I mean

    [tex]L(x,y,z,\lambda) = \ln x + \ln y + 3 \ln z - \lambda(x^2+y^2+z^2-5r^2) [/tex]

    Taking partial derivatives [tex]L_x,\ L_y,\ L_z,\ L_\lambda[/tex] and setting them all equal to zero (here r is a given constant, not a variable).
     
  11. Jul 20, 2004 #10

    HallsofIvy

    Staff Emeritus
    Science Advisor

    That's one way to do it, but I think it is simpler just to use the fact that
    [tex]\nabla(\ln x + \ln y + 3\ln z)[/tex] is parallel to [tex]\nabla(x^2 + y^2 + z^2)[/tex].
     
    Last edited: Jul 20, 2004
  12. Jul 23, 2004 #11

    mathwonk

    Science Advisor
    Homework Helper
    2015 Award

    I enjoyed HallsofIvy's nice explanation: follow the gradient as well as you can to maximize a function on a curve.

    Another simple way to say it: if a function is at a maximum, then its gradient is zero. Likewise, if the function restricted to the curve is at a maximum, then the gradient of the restricted function is zero.

    But the gradient of the restricted function is the restriction of the gradient. The restriction of the gradient to a curve is just the component of the gradient in the direction of the curve. So the restricted gradient is zero when the original gradient is perpendicular to the curve on which the maximum is sought.
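    A quick numerical sketch of that last statement (Python with NumPy, reusing the maximizer found earlier in the thread): at the constrained maximum, the gradient has zero component along directions tangent to the constraint set (here, the sphere).

    [code]
    import numpy as np

    r = 1.0
    p = np.array([r, r, np.sqrt(3) * r])       # constrained maximizer from the thread
    grad_f = np.array([1 / p[0], 1 / p[1], 3 / p[2]])
    normal = 2 * p                             # gradient of x^2 + y^2 + z^2, normal to the sphere

    # Two independent directions tangent to the sphere at p:
    t1 = np.cross(normal, [1.0, 0.0, 0.0])
    t2 = np.cross(normal, [0.0, 1.0, 0.0])

    print(grad_f @ t1, grad_f @ t2)            # both ~0: the restricted gradient vanishes
    [/code]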
     
    Last edited: Jul 23, 2004