
Setting up the Lagrange multipliers method for spherical coords

  1. Jul 18, 2014 #1
    This isn't really a homework question, but it may be similar to a typical example problem, so I posted it here.

    1. The problem statement, all variables and given/known data
    I want to find the maximum and minimum of the dot product between a fixed 3D vector and the points on a sphere, where the points are restricted to a range of angles in spherical coordinates.

    2. Relevant equations
    A point on the sphere can be expressed using spherical parameters (R is constant) as:
    [itex]x = R\sin\theta\cos\phi[/itex]
    [itex]y = R\sin\theta\sin\phi[/itex]
    [itex]z = R\cos\theta[/itex]

    3. The attempt at a solution
    The dot product between any point on the sphere and a vector [A,B,C] can be written as:
    [itex] f(\theta,\phi) = AR\sin\theta\cos\phi + BR\sin\theta\sin\phi + R\cos\theta [/itex]

    We can impose a constraint [itex]\theta = \theta_A[/itex] on [itex]f[/itex] and then find its extrema to get the max and min.

    From Wikipedia, I can set up Lagrange Multipliers with my function and the constraint using
    [itex] \Lambda(x,y,\lambda) = f(x,y) + \lambda \cdot \Big(g(x,y)-c\Big), [/itex]
    and solve
    [itex] \nabla_{x,y,\lambda} \Lambda(x , y, \lambda)=0. [/itex]
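    Concretely, with [itex]g(\theta,\phi) = \theta[/itex] and [itex]c = \theta_A[/itex], I think this would read
    [tex] \Lambda(\theta, \phi, \lambda) = f(\theta,\phi) + \lambda\,\big(\theta - \theta_A\big). [/tex]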

    The problem is I'm not sure how to set up the gradient. I know that the gradient has a slightly different definition in spherical coordinates. Which definition of the gradient should I be using here?
     
  3. Jul 18, 2014 #2

    pasmith

    Homework Helper

    I believe this should be
    [tex]
    f(\theta, \phi) = AR\sin\theta \cos \phi + BR\sin\theta \sin\phi + CR\cos\theta
    [/tex]

    You don't need a Lagrange multiplier here; you can just maximize [itex]f(\theta,\phi)[/itex] subject to the limits on [itex]\theta[/itex] and [itex]\phi[/itex]. It is possible that extrema will occur on the boundary.

    If you had kept the dot product in terms of Cartesian coordinates, then you would need a Lagrange multiplier to enforce the constraint [itex]x^2 + y^2 + z^2 = R^2[/itex], but when working in spherical coordinates that constraint is automatically enforced.

    The form of the gradient here is [tex]
    \frac1R \frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{R \sin \theta} \frac{\partial f}{\partial \phi}\mathbf{e}_\phi[/tex]
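    Writing those components out for the corrected [itex]f[/itex] above (interior case only; just a sketch, assuming [itex]C \neq 0[/itex] and [itex]\sin\theta \neq 0[/itex]):
    [tex]
    \frac{\partial f}{\partial \theta} = R\big[(A\cos\phi + B\sin\phi)\cos\theta - C\sin\theta\big] = 0, \qquad
    \frac{\partial f}{\partial \phi} = R\sin\theta\,(B\cos\phi - A\sin\phi) = 0,
    [/tex]
    so the interior critical points satisfy [itex]B\cos\phi = A\sin\phi[/itex] and [itex]\tan\theta = (A\cos\phi + B\sin\phi)/C[/itex].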
     
  4. Jul 18, 2014 #3
    Thanks for the reply.

    I misstated my original constraint. It's an inequality constraint of the form [itex]\theta_K \leq \theta \leq \theta_L[/itex] (and [itex]\phi_M \leq \phi \leq \phi_N[/itex] as well). I can take the partial derivatives:

    [itex]\frac{\partial f}{\partial \theta}[/itex] and [itex]\frac{\partial f}{\partial \phi}[/itex]

    and set them equal to 0 to find critical points. I'm not really sure how to proceed with the inequality constraints I mentioned, though. If the critical point lies within the range, that's fine... but it likely won't. What do I do in the general case (sorry if it's obvious)?
     
  5. Jul 20, 2014 #4
    If there aren't any critical points in the interior of the region, then the extreme values must lie on the boundary of the region (assuming the function is continuous on the closed region, as it is here).
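    If it helps, here is a rough numerical cross-check along those lines (just a sketch; I'm assuming scipy is available, and the A, B, C, R values and angle limits below are placeholders for your actual numbers):

[code]
import numpy as np
from scipy.optimize import minimize

# Placeholder values -- substitute your actual vector, radius and angle limits.
A, B, C, R = 1.0, 2.0, 3.0, 1.0
theta_K, theta_L = 0.2, 1.0   # bounds on theta (radians)
phi_M, phi_N = 0.0, 0.5       # bounds on phi (radians)

def f(angles):
    """Dot product of [A, B, C] with the point on the sphere at (theta, phi)."""
    theta, phi = angles
    return (A * R * np.sin(theta) * np.cos(phi)
            + B * R * np.sin(theta) * np.sin(phi)
            + C * R * np.cos(theta))

bounds = [(theta_K, theta_L), (phi_M, phi_N)]
x0 = [0.5 * (theta_K + theta_L), 0.5 * (phi_M + phi_N)]  # start in the middle

# Bounded minimization of f gives the minimum; minimizing -f gives the maximum.
res_min = minimize(f, x0, bounds=bounds)
res_max = minimize(lambda a: -f(a), x0, bounds=bounds)

print("min:", res_min.fun, "at (theta, phi) =", res_min.x)
print("max:", -res_max.fun, "at (theta, phi) =", res_max.x)
[/code]

    The bounded minimizer handles the angle limits automatically, so it lands on the boundary whenever the interior critical point falls outside the allowed range.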
     
  6. Jul 21, 2014 #5

    Ray Vickson

    Science Advisor
    Homework Helper

    The "gradient" referred to in that article is not the "gradient vector" as understood in physics; it is just the collection of partial derivatives. Therefore, in any coordinate system ##(u,v)##, with ##x = \xi(u,v)## and ##y = \eta(u,v)## and with new Lagrangian ##M(u,v,\lambda) = F(u,v) + \lambda G(u,v)## (where ##F(u,v) = f(\xi(u,v),\eta(u,v))##, etc) we just have
    [tex] \frac{\partial M}{\partial u} = 0, \: \frac{\partial M}{\partial v} = 0.[/tex]

    However, these conditions do not hold in general in inequality-constrained problems; in fact, you need the so-called Karush-Kuhn-Tucker conditions. For a problem of the form
    [tex] \min F(u,v)\\
    \text{subject to } \: G(u,v) = 0\\
    \text{and } \: a \leq u \leq b, \: c \leq v \leq d \\
    a,b,c,d \:\text{= constants}
    [/tex]
    we have the following. Setting ##M = F + \lambda G##, the necessary conditions for ##(u,v)## to be a (local) constrained minimum are:
    [tex]
    \begin{array}{l}
    \partial M/ \partial u \geq 0 \text{ if } u = a\\
    \partial M/ \partial u \leq 0 \text{ if } u = b\\
    \partial M/ \partial u = 0 \text{ if } a < u < b\\
    \\
    \partial M/ \partial v \geq 0 \text{ if } v = c\\
    \partial M/ \partial v \leq 0 \text{ if } v = d\\
    \partial M/ \partial v = 0 \text{ if } c < v < d
    \end{array}[/tex]

    For a maximization problem the inequalities should be swapped in the above.

    To understand those conditions intuitively (and to help you to remember them) just think of minimizing a 1-variable function ##f(x)## on an interval ##[a,b]##. If the minimum occurs at the left-hand endpoint ##x = a## the function ##f## must be going up (increasing) near ##x = a##---or at least, not be strictly decreasing there. So, we need ##f'(a) \geq 0##. Similarly you need ##f'(b) \leq 0## at the right-hand endpoint ##x = b##. The inequalities should be opposite if you are maximizing instead of minimizing. The conditions given previously just apply these ideas to the multivariate version and use the Lagrangian instead of the function ##F## itself.
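    As a concrete (throwaway) illustration of the endpoint conditions: minimize [itex]f(x) = (x-3)^2[/itex] on [itex][0,2][/itex]. Here [itex]f'(x) = 2(x-3) < 0[/itex] on the whole interval, so the minimum sits at the right-hand endpoint [itex]x = b = 2[/itex], where indeed [itex]f'(2) = -2 \leq 0[/itex], exactly as the conditions above require.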
     
    Last edited: Jul 21, 2014