# Help with Lagrange multipliers

1. Jul 11, 2004

### Theelectricchild

A bit of a tough one!

Find the maximum of $$\ln x + \ln y + 3 \ln z$$ on the part of the sphere $$x^2 + y^2 + z^2 = 5r^2$$ where $$x>0$$, $$y>0$$ and $$z>0$$.

I know I need to use Lagrange multipliers but how should I go about it? Any help would be appreciated thanks!!!

Last edited: Jul 12, 2004
2. Jul 11, 2004

### Theelectricchild

Why won't my LaTeX graphics show?

3. Jul 12, 2004

### Galileo

The LaTeX graphics are showing fine for me.

About using Lagrange multipliers:
Let $$G=\ln x + \ln y +3\ln z - \lambda(x^2 +y^2 +z^2)$$
Set
$$\frac{\partial G}{\partial x}, \frac{\partial G}{\partial y}, \frac{\partial G}{\partial z}$$ equal to zero.
Together with $$x^2+y^2+z^2=5r^2$$ these are four equations in four unknowns. Generally, solving these equations isn't easy, but in this case it isn't so bad: write $$x^2, y^2$$ and $$z^2$$ as functions of $$\lambda$$ and use the constraint equation to solve for $$\lambda$$.
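That hint can be worked through symbolically. A minimal sketch with sympy (my own illustration, not part of the original thread); from $$\partial G/\partial x = 1/x - 2\lambda x = 0$$ we get $$x^2 = 1/(2\lambda)$$, and similarly for the other two variables, so only $$\lambda$$ remains to be pinned down by the constraint:

```python
# Sketch of the hint: x^2, y^2, z^2 as functions of lambda, then the
# constraint x^2 + y^2 + z^2 = 5 r^2 determines lambda.
import sympy as sp

lam, r = sp.symbols('lam r', positive=True)

# squares of x, y, z read off from dG/dx = dG/dy = dG/dz = 0
x2, y2, z2 = 1/(2*lam), 1/(2*lam), 3/(2*lam)

# solve the constraint for lambda
lam_val = sp.solve(sp.Eq(x2 + y2 + z2, 5*r**2), lam)[0]

xv = sp.sqrt(x2.subs(lam, lam_val))   # maximizer coordinate (x = y here)
zv = sp.sqrt(z2.subs(lam, lam_val))

print(lam_val, xv, zv)   # 1/(2*r**2), r, sqrt(3)*r
```

So the constrained maximum on the positive octant sits at $$x = y = r,\ z = \sqrt{3}\,r$$.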

4. Jul 13, 2004

### JierenChen

I also have a question about Lagrange multipliers: what is the underlying principle behind how they work?

I know it has something to do with the fact that the constraint and the level curve through the maximum intersect at a point where they share a tangent line and their normal lines coincide (similar to indifference curves and budget constraints in economics, I suppose). Could anyone elaborate on that, though? I'm not sure of the details.

5. Jul 13, 2004

### maverick280857

Yeah, Lagrange multipliers are very useful in applications such as trigonometric optimization. You might want to see

http://ndp.jct.ac.il/tutorials/Infitut2/node33.html

http://www-math.mit.edu/~djk/18_022/chapter04/section03.html

http://mathworld.wolfram.com/LagrangeMultiplier.html

for more information about the principles involved (and of course google in general). A knowledge of vector calculus helps in understanding.

6. Jul 14, 2004

### HallsofIvy

Staff Emeritus

The simplest way to look at Lagrange multipliers is geometrically. Given a function F(x,y) and a curve C, what point on C maximizes F? Since the gradient of any function always points in the direction of fastest increase, we could start at any point and "follow the gradient" to find the maximum value. Of course, since we are constrained to C, we can't actually do that. What we can do is move in the direction along C closest to the direction of the gradient.

When can we NOT do that? When the gradient is exactly perpendicular to C! Of course, if C is defined by a function g(x,y)= constant, its gradient is also perpendicular to C so the condition is that the two gradients are parallel: one is a multiple (the Lagrange Multiplier) of the other.
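That tangency condition is easy to check numerically. A small sketch (my own toy example, not from the thread): maximize F(x,y) = xy on the circle x² + y² = 2, whose constrained maximum in the first quadrant is at (1, 1):

```python
# At the constrained maximum, grad F must be parallel to grad g.
# In 2D, parallel <=> the "cross" term of the two gradients vanishes.
import numpy as np

def grad_F(x, y):
    # gradient of F(x, y) = x*y
    return np.array([y, x])

def grad_g(x, y):
    # gradient of g(x, y) = x**2 + y**2
    return np.array([2.0*x, 2.0*y])

gF = grad_F(1.0, 1.0)
gg = grad_g(1.0, 1.0)

cross = gF[0]*gg[1] - gF[1]*gg[0]   # zero iff the gradients are parallel
lam = gF[0] / gg[0]                 # the multiplier: one gradient is lam times the other

print(cross, lam)  # 0.0 0.5
```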

7. Jul 14, 2004

### arildno

Here's another simple argument about Lagrange multipliers:

Let $$f(x_{1},x_{2},...,x_{n})$$ be the function we want to minimize under conditions $$G_{i}(x_{1},...,x_{n})=0,\; i=1,\dots,k,\; k<n$$

Construct the function F:
$$F(x_{1},...,x_{n},\lambda_{1},...,\lambda_{k})=f+\sum_{i=1}^{k}\lambda_{i}G_{i}$$

A couple of observations:
1) As usual, F's stationary points are found by requiring its gradient to be zero there.
That is, we have the k+n equations determining F's stationary points:
$$\frac{\partial{F}}{\partial{x}_{i}}=0,\; i=1,\dots,n, \qquad \frac{\partial{F}}{\partial{\lambda}_{i}}=0,\; i=1,\dots,k$$

2) f is simply F (or close enough, anyway) when we restrict our attention to the part of F's domain determined by having the G's equal to zero!

Surely, F's stationary points do not move about simply because we look at a particular region of its domain!
In particular, any stationary point of F that happens to lie in the restricted region must surely be a stationary point of the function that is "F restricted to that region" (that is, f).

The equations given in 1) are the ones you're after.
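One detail worth making explicit: differentiating F with respect to a multiplier simply hands back the corresponding constraint, which is why "grad F = 0" enforces the G's automatically. A quick sympy check on the thread's own problem (my own sketch; here the constraint term carries the $$5r^2$$):

```python
# dF/d(lambda) recovers the constraint expression, so setting it to zero
# is exactly the constraint equation.
import sympy as sp

x, y, z, lam, r = sp.symbols('x y z lam r', positive=True)
F = sp.log(x) + sp.log(y) + 3*sp.log(z) + lam*(x**2 + y**2 + z**2 - 5*r**2)

dF_dlam = sp.diff(F, lam)
print(dF_dlam)   # the constraint expression x^2 + y^2 + z^2 - 5 r^2
```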

Last edited: Jul 14, 2004
8. Jul 14, 2004

### maverick280857

It is easy if you consider two cases: first, constrained optimization with a single constraint, and second, constrained optimization with multiple constraints. We're really looking for an optimal solution to the problem: a set of values of the variables which gives us the exact coordinates (an n-tuple) of the point of maximum or minimum.

So what exactly is a constraint? Simply put, it is an equation in the variables which restricts the family of acceptable solutions to the ones admissible for the particular problem at hand.

Suppose f and g are two continuously differentiable functions of two variables and C is a fixed constant value of g; then we can write

$$g(x, y) = C \qquad$$

f is also called the objective function. Note that at a stationary point, for a small displacement $$(\delta x, \delta y)$$ along the constraint,

$$\frac{\partial f}{\partial x}\delta x + \frac{\partial f}{\partial y}\delta y = 0$$
$$\frac{\partial g}{\partial x}\delta x + \frac{\partial g}{\partial y}\delta y = 0$$

The trick then is the introduction of a scalar quantity $$\lambda$$, which we appropriately call the Lagrange multiplier. We take the first equation above and add to it $$\lambda$$ times the second equation, getting as a result another equation:

$$(\frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x}) \delta x + (\frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y}) \delta y = 0$$

Now suppose that we choose the parameter $$\lambda$$ such that the term in the first bracket on the left hand side is equal to zero, that is

$$\frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x} = 0$$

This automatically implies that the term in the second bracket is also zero (since $$\delta y$$ need not be zero). This gives another condition, namely

$$\frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y} = 0$$

Now we have three (2 + 1) equations to solve for the optimal solution $$(x_{0}, y_{0})$$, which satisfies each of these equations. The third equation, evidently, is the constraint g = C. We need to reintroduce it to ensure a particular solution and not an arbitrary one.
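For concreteness, here is that three-equation system solved on a toy problem of my own (not from the thread): maximize f = xy subject to x + y = 10:

```python
# The 2-variable, 1-constraint system:
#   df/dx + lam*dg/dx = 0,  df/dy + lam*dg/dy = 0,  g = C
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x*y          # objective function
g = x + y        # constraint function, with C = 10

eqs = [sp.diff(f, x) + lam*sp.diff(g, x),   # y + lam = 0
       sp.diff(f, y) + lam*sp.diff(g, y),   # x + lam = 0
       g - 10]                              # reintroduce the constraint
sol = sp.solve(eqs, [x, y, lam], dict=True)[0]

print(sol)   # x = 5, y = 5, lam = -5
```

As expected, the product is maximized when the two numbers are equal.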

More generally if

$$x_{k} \qquad \forall \qquad 1\leq k \leq n$$

are n unknowns and

$$g_{j}(x_{1},x_{2},x_{3},....,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m$$

are m constraints, then we get a system of (n+m) equations (and evidently there are m multipliers, $$\lambda_{1}, \lambda_{2},.....,\lambda_{m}$$).

$$\frac{\partial{f}}{\partial{x_k}} + \sum_{j=1}^m \lambda_j \frac{\partial{g_j}}{\partial{x_k}} = 0 \qquad \forall \qquad 1\leq k \leq n$$
$$g_{j}(x_{1},x_{2},x_{3},....,x_{n}) = C_{j} \qquad \forall \qquad 1\leq j \leq m$$
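The general (n+m) system can be assembled the same way. A sketch with n = 3 unknowns and m = 2 constraints (again my own toy example): minimize f = x² + y² + z² subject to x + y + z = 3 and x − y = 0:

```python
# General system: df/dx_k + sum_j lam_j * dg_j/dx_k = 0 for each k,
# plus the m constraint equations g_j = C_j.
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z l1 l2', real=True)
f = x**2 + y**2 + z**2
g1, C1 = x + y + z, 3
g2, C2 = x - y, 0

eqs = [sp.diff(f, v) + l1*sp.diff(g1, v) + l2*sp.diff(g2, v)
       for v in (x, y, z)]
eqs += [g1 - C1, g2 - C2]

sol = sp.solve(eqs, [x, y, z, l1, l2], dict=True)[0]
print(sol)   # x = y = z = 1, l1 = -2, l2 = 0
```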

Last edited: Aug 17, 2004
9. Jul 17, 2004

### Corneo

Errr, don't you have to incorporate the full constraint before taking partial derivatives? I mean

$$L(x,y,z,\lambda, r) = \ln x + \ln y + 3 \ln z - \lambda(x^2+y^2+z^2-5r^2)$$

Take the partial derivatives $$L_x,\ L_y,\ L_z,\ L_\lambda,\ L_r$$ and set them all equal to zero.

10. Jul 20, 2004

### HallsofIvy

Staff Emeritus
That's one way to do it, but I think it is simpler just to use the fact that
$$\nabla(\ln x + \ln y + 3\ln z)$$ is parallel to $$\nabla(x^2 + y^2 + z^2)$$.
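That parallelism can be checked directly at the point the stationarity conditions produce, $$x = y = r,\ z = \sqrt{3}\,r$$. A quick numeric sketch of mine, taking r = 1:

```python
# grad(ln x + ln y + 3 ln z) = (1/x, 1/y, 3/z) should be a scalar
# multiple of grad(x^2 + y^2 + z^2) = (2x, 2y, 2z) at the maximizer.
import numpy as np

x, y, z = 1.0, 1.0, np.sqrt(3.0)   # candidate maximizer for r = 1

grad_f = np.array([1.0/x, 1.0/y, 3.0/z])
grad_g = np.array([2.0*x, 2.0*y, 2.0*z])

ratios = grad_f / grad_g           # componentwise ratio; constant iff parallel
print(ratios)                      # [0.5 0.5 0.5]
```

The common ratio 0.5 is exactly the multiplier $$\lambda = 1/(2r^2)$$ at r = 1.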

Last edited: Jul 20, 2004
11. Jul 23, 2004

### mathwonk

I enjoyed HallsofIvy's nice explanation of following the gradient as well as possible to maximize a function on a curve.

Another simple way to say it: if the function is at a maximum, then its gradient is zero. Likewise, if the function restricted to the curve is at a maximum, then the gradient of the restricted function is zero.

But the gradient of the restricted function is the restriction of the gradient. The restriction of the gradient to a curve is just the component of the gradient in the direction of the curve. So the restricted gradient is zero exactly when the original gradient is perpendicular to the curve on which the maximum is sought.

Last edited: Jul 23, 2004