Help with Lagrange multipliers

In summary: The underlying principle behind Lagrange multipliers is that at a constrained extremum, the gradient of the objective function is parallel to the gradient of the constraint function, so one can write [tex]\nabla f = \lambda \nabla g[/tex] for some scalar [tex]\lambda[/tex]. Solving this condition together with the constraint equation yields the candidate maxima and minima.
  • #1
Theelectricchild
A bit of a tough one!

Find the maximum of [tex]\ln x + \ln y + 3 \ln z[/tex] on the part of the sphere [tex]x^2 + y^2 + z^2 = 5r^2[/tex] where x>0, y>0 and z>0.

I know I need to use Lagrange multipliers but how should I go about it? Any help would be appreciated thanks!
 
  • #2
Why won't my LaTeX graphics show?
 
  • #3
The LaTeX graphics are showing fine for me.

About using Lagrange multipliers:
Let [tex]G=\ln x + \ln y +3\ln z - \lambda(x^2 +y^2 +z^2)[/tex]
Set
[tex]\frac{\partial G}{\partial x}, \frac{\partial G}{\partial y}, \frac{\partial G}{\partial z}[/tex] equal to zero.
Together with [tex]x^2+y^2+z^2=5r^2[/tex] these are four equations in four unknowns. Generally, solving such systems isn't easy, but in this case it isn't so bad: write [tex]x^2, y^2[/tex] and [tex]z^2[/tex] as functions of [tex]\lambda[/tex] and use the constraint equation to solve for [tex]\lambda[/tex].
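
Following that recipe, the computation plays out like this (a sketch filling in the steps, not part of the original post). Setting the partials to zero gives

[tex]\frac{1}{x} - 2\lambda x = 0, \qquad \frac{1}{y} - 2\lambda y = 0, \qquad \frac{3}{z} - 2\lambda z = 0[/tex]

so [tex]x^2 = y^2 = \frac{1}{2\lambda}[/tex] and [tex]z^2 = \frac{3}{2\lambda}[/tex]. Substituting into the constraint, [tex]\frac{5}{2\lambda} = 5r^2[/tex], hence [tex]\lambda = \frac{1}{2r^2}[/tex], giving [tex]x = y = r[/tex] and [tex]z = r\sqrt{3}[/tex] (taking positive roots), with maximum value [tex]5\ln r + \tfrac{3}{2}\ln 3[/tex].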
 
  • #4
I also have a question about Lagrange multipliers: what is the underlying principle behind how they work?

I know it has something to do with the fact that the constraint and the maximum "contour" intersect at a point where they share a tangent line and their normal lines are the same (similar to indifference curves and budget constraints in economics I suppose.) Could anyone elaborate on that though? I'm not sure of the details.
 
  • #6
JierenChen said:
I also have a question about Lagrange multipliers: what is the underlying principle behind how they work?

I know it has something to do with the fact that the constraint and the maximum "contour" intersect at a point where they share a tangent line and their normal lines are the same (similar to indifference curves and budget constraints in economics I suppose.) Could anyone elaborate on that though? I'm not sure of the details.


The simplest way to look at Lagrange multipliers is geometrically. Given a function F(x,y) and a curve C, what point on C maximizes F? Since the gradient of a function always points in the direction of fastest increase, we could start at any point and "follow the gradient" to find the maximum value. Of course, since we are constrained to C, we can't actually do that. What we can do is move along C in the direction closest to the direction of the gradient.

When can we NOT do that? When the gradient is exactly perpendicular to C! Of course, if C is defined by a function g(x,y)= constant, its gradient is also perpendicular to C so the condition is that the two gradients are parallel: one is a multiple (the Lagrange Multiplier) of the other.
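
In symbols (my restatement of the condition above): if C is the level set [tex]g(x,y) = c[/tex], then at a constrained extremum

[tex]\nabla F(x,y) = \lambda\, \nabla g(x,y)[/tex]

for some scalar [tex]\lambda[/tex], the Lagrange multiplier; this equation together with [tex]g(x,y)=c[/tex] determines the candidate points.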
 
  • #7
Here's another simple argument about Lagrange multipliers:

Let [tex]f(x_{1},x_{2},...,x_{n})[/tex] be the function we want to minimize under the conditions [tex]G_{i}(x_{1},...,x_{n})=0,\quad i=1,...,k,\quad k<n[/tex]

Construct the function F:
[tex]F(x_{1},...,x_{n},\lambda_{1},...,\lambda_{k})=f+\sum_{i=1}^{k}\lambda_{i}G_{i}[/tex]

A couple of observations:
1) As usual, F's stationary points are found by requiring its gradient to be zero there.
That is, we have the k+n equations determining F's stationary points:
[tex]\frac{\partial{F}}{\partial{x}_{i}}=0,\quad i=1,...,n, \qquad \frac{\partial{F}}{\partial{\lambda}_{i}}=0,\quad i=1,...,k[/tex]
(Note that [tex]\frac{\partial F}{\partial\lambda_{i}}=G_{i}[/tex], so the last k equations simply reproduce the constraints.)

2) f is simply F (or close enough, anyway) when we restrict our attention to that part of F's domain determined by having the G's equal to zero!

Surely, F's stationary points do not move about simply because we look at a particular region of its domain!
In particular, any stationary point of F that happens to lie in the restricted region must be a stationary point of the function "F restricted to that region" (that is, f).

The equations given in 1) are the ones you're after.
 
  • #8
It is easiest to consider two cases: constrained optimization with a single constraint, and constrained optimization with multiple constraints. In both cases we are looking for an optimal solution to the problem: a set of values for the variables, giving the exact coordinates (or n-tuple) of the point of maximum or minimum.

So what exactly is a constraint? Simply put, it is an equation in the variables that restricts which candidate solutions are acceptable for the problem at hand.

Suppose f and g are two continuously differentiable functions of two variables, and C is a fixed constant value of g; then we can write

[tex]
g(x, y) = C \qquad
[/tex]

f is also called the objective function. Note that at a constrained stationary point, a small displacement [tex](\delta x, \delta y)[/tex] along the constraint curve changes neither f nor g to first order:

[tex]
\frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y = 0
[/tex]
[tex]
\frac{\partial g}{\partial x}\delta x + \frac{\partial g}{\partial y}\delta y = 0
[/tex]

The trick then is the introduction of a scalar quantity [tex]\lambda[/tex], which we appropriately call the Lagrange multiplier. We take the first equation above and add to it [tex]\lambda[/tex] times the second equation, getting as a result another equation:

[tex]
(\frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x}) \delta x + (\frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y}) \delta y = 0
[/tex]

Now suppose that we choose the parameter [tex]\lambda[/tex] such that the term in the first bracket on the left hand side is equal to zero, that is

[tex]
\frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x} = 0
[/tex]

With the first bracket equal to zero, the equation reduces to the second bracket times [tex]\delta y[/tex], and since [tex]\delta y[/tex] need not be zero, the second bracket must also vanish. This gives another condition, namely

[tex]
\frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y} = 0
[/tex]

Now we have three (2 + 1) equations to solve for the optimal solution [tex](x_{0}, y_{0})[/tex], which is a solution to each of these equations. The third equation, evidently, is the constraint g = C. We need to reintroduce it to ensure a particular solution and not an arbitrary one.
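
A quick illustration (my example, not from the original post): maximize [tex]f(x,y) = xy[/tex] subject to [tex]g(x,y) = x + y = 10[/tex]. The two bracket conditions give

[tex]y + \lambda = 0, \qquad x + \lambda = 0[/tex]

so [tex]x = y = -\lambda[/tex]; the constraint then forces [tex]x = y = 5[/tex], [tex]\lambda = -5[/tex], and the constrained maximum is [tex]f(5,5) = 25[/tex].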

More generally if

[tex]
x_{k} \qquad \forall \qquad 1\leq k \leq n
[/tex]

are n unknowns and

[tex]
g_{j}(x_{1},x_{2},x_{3},...,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m
[/tex]

are m constraints, then we get a system of (n+m) equations (and evidently there are m multipliers, [tex]\lambda_{1}, \lambda_{2},...,\lambda_{m}[/tex]).

[tex]
\frac{\partial{f}}{\partial{x_k}} + \sum_{j=1}^m \lambda_j
\frac{\partial{g_j}}{\partial{x_k}} = 0 \qquad \forall \qquad
1\leq k \leq n
[/tex]
[tex]
g_{j}(x_{1},x_{2},x_{3},...,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m
[/tex]
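
To sanity-check a system like this numerically, one can hand the same problem to an off-the-shelf constrained optimizer. Below is a minimal sketch (my code, not from the thread) that solves the original sphere problem with SciPy's SLSQP method, assuming r = 1; agreement with the analytic answer [tex]x = y = 1, z = \sqrt{3}[/tex] is the check.

[code]
# Numerical check of the sphere problem (r = 1) with SciPy's SLSQP.
import numpy as np
from scipy.optimize import minimize

r = 1.0

# minimize() minimizes, so negate the objective ln x + ln y + 3 ln z.
def neg_f(v):
    x, y, z = v
    return -(np.log(x) + np.log(y) + 3.0 * np.log(z))

# Equality constraint: x^2 + y^2 + z^2 - 5 r^2 = 0.
constraint = {"type": "eq", "fun": lambda v: np.sum(v**2) - 5.0 * r**2}

# Keep x, y, z strictly positive so the logs stay defined.
bounds = [(1e-9, None)] * 3

res = minimize(neg_f, x0=np.array([1.0, 1.0, 1.0]), method="SLSQP",
               bounds=bounds, constraints=[constraint])
print(res.x)  # approximately [1.0, 1.0, 1.7321] = (r, r, r*sqrt(3))
[/code]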
 
  • #9
Err, don't we have to incorporate the constraint into the Lagrangian before taking partial derivatives? I mean

[tex]L(x,y,z,\lambda, r) = \ln x + \ln y + 3 \ln z - \lambda(x^2+y^2+z^2-5r^2) [/tex]

Then take the partial derivatives [tex]L_x,\ L_y,\ L_z,\ L_\lambda,\ L_r[/tex] and set them all equal to zero.
 
  • #10
Corneo said:
Err, don't we have to incorporate the constraint into the Lagrangian before taking partial derivatives? I mean

[tex]L(x,y,z,\lambda, r) = \ln x + \ln y + 3 \ln z - \lambda(x^2+y^2+z^2-5r^2) [/tex]

Then take the partial derivatives [tex]L_x,\ L_y,\ L_z,\ L_\lambda,\ L_r[/tex] and set them all equal to zero.

That's one way to do it, but I think it is simpler just to use the fact that [tex]\nabla(\ln x + \ln y + 3\ln z)[/tex] is parallel to [tex]\nabla(x^2 + y^2 + z^2)[/tex].
 
  • #11
I enjoyed Halls of Ivy's nice explanation of following the gradient as well as possible to maximize a function on a curve.

Another simple way to say it is this: if a function is at a maximum, then its gradient is zero. Likewise, if the function restricted to the curve is at a maximum, then the gradient of the restricted function is zero.

But the gradient of the restricted function is the restriction of the gradient. The restriction of the gradient to a curve is just the component of the gradient in the direction of the curve. So the restricted gradient is zero when the original gradient is perpendicular to the curve on which the maximum is sought.
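
In symbols (my notation, not from the post): if the curve is parametrized as [tex]\mathbf{r}(t)[/tex], the derivative of f along the curve is

[tex]\frac{d}{dt} f(\mathbf{r}(t)) = \nabla f \cdot \mathbf{r}'(t)[/tex]

which vanishes exactly when [tex]\nabla f[/tex] is perpendicular to the tangent vector [tex]\mathbf{r}'(t)[/tex].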
 

1. What are Lagrange multipliers and how are they used?

Lagrange multipliers are a mathematical tool used to find the maximum or minimum of a function subject to a set of constraints. They allow us to find the optimal values for multiple variables simultaneously.

2. When should Lagrange multipliers be used?

Lagrange multipliers should be used when optimizing a function of several variables subject to equality constraints. They are particularly useful in physics, engineering, and economics.

3. How do Lagrange multipliers work?

Lagrange multipliers work by introducing a new variable, called a multiplier, for each constraint. This creates a new function, known as the Lagrangian, whose stationary points can then be found using calculus to obtain the candidate optimal values of the variables.

4. Can Lagrange multipliers be used for non-linear functions?

Yes, Lagrange multipliers can be used for both linear and non-linear functions; the method only requires that the objective and constraint functions be differentiable. For purely linear problems, other techniques such as linear programming are usually more convenient.

5. Are there any limitations to using Lagrange multipliers?

One limitation of Lagrange multipliers is that they require the objective and constraint functions to be differentiable. Additionally, the method only locates stationary points, which may be local optima (or saddle points) rather than the global optimum, so the candidates it produces must still be compared.
