Lagrange Multipliers: Global vs. Local

In summary: yes, the multiplier condition is exactly analogous to f'(x) = 0 in one-variable calculus. It yields the critical points subject to the constraints; on its own it does not tell you which of them are maxima or minima, local or global.
  • #1
kingwinner
http://www.geocities.com/asdfasdf23135/advcal29.JPG

I am wondering whether the above statement is true.
"A necessary condition for the constrained optimization problem to have a GLOBAL min or max is that..."
Should the word local replace global?

I am confused about the method of Lagrange multipliers as well. When we use this method, we get a finite set of points A_1, A_2, ..., A_n.
If we then compute f(A_1), f(A_2), ..., f(A_n) and take the largest and smallest, are these guaranteed to be the GLOBAL max and min, respectively? If not, under what conditions would this be the case?

Also, when using the Lagrange Multipliers method, do we still need to check all the boundary points separately?

Could someone explain? I would really appreciate it! :smile:
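For concreteness, here is a minimal sketch of the candidate-point procedure being asked about, assuming SymPy; the objective f = x + y and the constraint x^2 + y^2 = 1 are a made-up example, not the problem from the linked image:

Code:
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y                    # hypothetical objective
g = x**2 + y**2 - 1          # hypothetical constraint g = 0 (unit circle)

# Lagrange conditions: grad f = lam * grad g, together with g = 0
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g]
candidates = sp.solve(eqs, [x, y, lam], dict=True)

# Evaluate f at every candidate A_i and compare
print([(s[x], s[y], f.subs(s)) for s in candidates])
# two candidates: (+-sqrt(2)/2, +-sqrt(2)/2), with f = +-sqrt(2)

Here the constraint set is compact and f is continuous, so a global max and min exist and must be among the candidates; taking the largest and smallest values is then justified. Without such an existence argument there is no guarantee.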
 
  • #2
Read the statement carefully. It says 'a necessary condition for'. It doesn't say 'a sufficient condition for'. Look those phrases up if you have to.
 
  • #3
So for every problem using Lagrange multipliers, we must first justify the existence of a global max/min, right?

Also, is it a 'sufficient condition for' global max/min or local max/min? In the proof, they only talk about local max/min and never talk about global max/min...yet in the statement of the theorem, they put global...
 
  • #4
A global min/max is also a local min/max in the context of Lagrange multipliers. That's why they say necessary and NOT sufficient. There is NOTHING in there that says you don't need to check whether a local min/max is a global min/max. You still have to do it. Face it. If I had said 'global' in this context, I'd be regretting it. Not because what they say is wrong, but because certain people are taking it the wrong way.
 
  • #5
Notice that Dick said "A global min/max is also a local min/max". The other way around is not true! A local max/min is not necessarily a global max/min. But since any global max/min must be among the local max/min, it is sufficient to look through all the local max to find the global max.

That has nothing to do with "Lagrange multipliers". Surely you did the same thing back in one-dimensional calculus.
 
  • #6
Oh no, I typed something that I didn't really mean... sorry about that. I meant "necessary", not "sufficient".

Call grad f = (lambda_1)(grad(g_1)) + ... + (lambda_k)(grad(g_k)) condition (*)

In the statement, is (*) a necessary condition for global max/min or is it a necessary condition for local max/min?
i.e.
Global max/min => condition (*)
Local max/min => condition (*)
Which one is true? The second one seems true to me, but the theorem says the first one is true which I doubt.



"A global min/max is also a local min/max"
From 1st-year calculus, say considering f on the interval [a,b], the global max/min can occur at a and/or b, which need not be a local max/min; it's an endpoint.
The Lagrange Multiplier method for sure gives the critical points in (a,b), but are the endpoints a and b included as well, or do we always have to check the endpoints separately?


Thanks for clearing my doubts!
 
  • #7
LOCAL max/min at a point => at that point, grad f = (lambda_1)(grad(g_1)) + ... + (lambda_k)(grad(g_k)) for some constants lambda_i

But is the converse true?
 
  • #8
No. No. No. No. No. Forget lagrange multipliers. A zero derivative doesn't imply it's a min/max. It could be a saddle point. You've taken calculus. You know this. A local min/max implies zero derivative. The converse is not true. No. No. No. No. Why are you going on about this? Zero derivative is a NECESSARY condition for a local min/max. It's NOT SUFFICIENT. I think I said this ages ago.
 
  • #9
Dick said:
No. No. No. No. No. Forget lagrange multipliers. A zero derivative doesn't imply it's a min/max. It could be a saddle point. You've taken calculus. You know this. A local min/max implies zero derivative. The converse is not true. No. No. No. No. Why are you going on about this? Zero derivative is a NECESSARY condition for a local min/max. It's NOT SUFFICIENT. I think I said this ages ago.

Yes, I understand this one. e.g. y=x^3, y'=3x^2, y'(0)=0, so x=0 is critical, but x=0 does not give a local max or a local min.

But is this idea exactly analogous to Lagrange's method?
grad f = (lambda_1)(grad(g_1)) + ... + (lambda_k)(grad(g_k)) for some constants lambda_i
Does this condition give precisely the "critical points" subject to the constraints g1=0, ..., gk=0? (so that the converse of post #7 is false)

Or is it the case that LOCAL max/min at a point <=> at that point, grad f = (lambda_1)(grad(g_1)) + ... + (lambda_k)(grad(g_k)) for some constants lambda_i?
 
  • #10
It's exactly analogous. The Lagrange multiplier method gives you points where the gradient is zero (critical points). That doesn't tell you any more about min/max than f'(x)=0 in calculus. It is not the case that <=> holds.
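To see the analogy concretely, here is a minimal sketch, assuming SymPy, of a made-up problem (f(x, y) = x^3 subject to y = 0, not from the thread) where the multiplier condition holds at a point that is not a local extremum:

Code:
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x**3      # hypothetical objective
g = y         # hypothetical constraint: the x-axis

eqs = [sp.diff(f, x) - lam * sp.diff(g, x),   # 3x^2 = 0
       sp.diff(f, y) - lam * sp.diff(g, y),   # -lam = 0
       g]
print(sp.solve(eqs, [x, y, lam], dict=True))  # [{x: 0, y: 0, lam: 0}]
# On the constraint, f restricts to x**3, and x = 0 is an inflection,
# not a max or min: the same failure as y = x^3 in one variable.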
 
  • #11
Dick said:
It's exactly analogous. The Lagrange multiplier method gives you points where the gradient is zero (critical points). That doesn't tell you any more about min/max than f'(x)=0 in calculus. It is not the case that <=> holds.
Um...are you sure that the gradient is zero? Shouldn't it be a linear combination of grad(g_i)?


Also, I don't understand why a global min/max is also a local min/max.
e.g. Consider y = x^2 on x ∈ [0,2]
global max=4
global min=0
But neither of these is a local max or min.
 
  • #12
Same thing. The Lagrange multiplier thing just enforces a continuous constraint on the system. The linear combination of the grads just tells you what the grad is on the constrained submanifold. It does NOT deal with discontinuous constraints like being in [0,2]. YOU HAVE TO FIGURE THOSE OUT. JUST LIKE f'(x)=0 in calculus. HOW MANY TIMES DO I HAVE TO SAY THIS?
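To make the point concrete in Python (the interval example is the one from post #11; the code itself is just illustration):

Code:
def f(x):
    return x**2

interior_critical = [0.0]   # solutions of f'(x) = 2x = 0 inside (0, 2)
endpoints = [0.0, 2.0]      # the part that must be checked by hand

values = {x: f(x) for x in interior_critical + endpoints}
print(values)               # {0.0: 0.0, 2.0: 4.0}
# The global max 4 occurs at the endpoint x = 2, which no derivative
# (or multiplier) condition would ever produce on its own.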
 
  • #13
kingwinner said:
Also, I don't understand why a global min/max is also a local min/max.
e.g. Consider y = x^2 on x ∈ [0,2]
global max=4
global min=0
But neither of these is a local max or min.

Then how do you define "local" max or min? Certainly 4 is larger than any other value of x^2 near x = 2 in the given set.
 
  • #14
HallsofIvy said:
Then how do you define "local" max or min? Certainly 4 is larger than any other value of x^2 near x = 2 in the given set.
Assume the functions are differentiable.

Local max/min at a => a is a critical point <=> f'(a) = 0

But y = x^2, y' = 2x, y'(2) = 2(2) = 4 ≠ 0 => not a local max/min
 
  • #15
That is most definitely NOT the definition of "local max or min"; it is a property you can derive assuming that the local max or min is in the interior of the set. Try to quote exactly, from your textbook, the definition of "local max" or "local min".
 

1. What is the difference between Lagrange multipliers in global optimization and local optimization?

The main difference is the scope of the claim being made. In global optimization, the goal is the absolute minimum or maximum of a function over the entire feasible region; in local optimization, the goal is a point that is merely better than its nearby feasible neighbors. The Lagrange multiplier conditions themselves are local: they are necessary conditions satisfied at any smooth constrained extremum, global or not. To draw a global conclusion, one must collect all points satisfying the conditions (plus any boundary or non-smooth points), compare the function values, and first argue that a global extremum exists at all, e.g. because the feasible region is closed and bounded and the function is continuous.

2. How are Lagrange multipliers used in global optimization?

In a constrained problem, a Lagrange multiplier is introduced for each constraint and combined with the objective into a Lagrangian, e.g. L = f - lambda_1*g_1 - ... - lambda_k*g_k. Setting the partial derivatives of L to zero (equivalently, grad f = lambda_1 grad g_1 + ... + lambda_k grad g_k together with g_1 = ... = g_k = 0) produces a system of equations whose solutions are the candidate points. When a global extremum is known to exist, comparing the objective values at these candidates identifies the absolute minimum or maximum over the feasible region.
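A hedged sketch of this procedure, assuming SymPy; the problem (extremize f = xy subject to x + y = 10) is a made-up example:

Code:
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x * y                    # hypothetical objective
g = x + y - 10               # hypothetical constraint g = 0
L = f - lam * g              # the Lagrangian

system = [sp.diff(L, v) for v in (x, y, lam)]
sols = sp.solve(system, [x, y, lam], dict=True)
print(sols)                          # [{x: 5, y: 5, lam: 5}]
print([f.subs(s) for s in sols])     # [25], a candidate only; whether it
# is the global optimum still needs a separate argument.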

3. Can Lagrange multipliers be used for non-linear optimization problems?

Yes, Lagrange multipliers can be used for non-linear optimization problems. The method for solving non-linear problems with Lagrange multipliers is essentially the same as for linear problems. The main difference is that the equations will be non-linear and may require numerical methods to solve.
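A minimal numerical sketch, assuming SciPy's SLSQP solver; the objective and constraint here are hypothetical:

Code:
import numpy as np
from scipy.optimize import minimize

fun = lambda v: (v[0] - 1)**2 + (v[1] - 2)**2       # non-linear objective
con = {'type': 'eq',
       'fun': lambda v: v[0]**2 + v[1]**2 - 1}      # unit-circle constraint

res = minimize(fun, x0=np.array([1.0, 0.0]),
               method='SLSQP', constraints=[con])
print(res.x)  # a locally optimal point on the circle; different starting
              # points x0 may converge to different local solutions

As with the symbolic approach, this returns a point satisfying the first-order conditions, not a certified global optimum.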

4. What are some advantages of using Lagrange multipliers in global optimization?

One advantage of using Lagrange multipliers is that the method provides a systematic way to incorporate constraints into the optimality conditions, which makes it practical to handle complex functions with multiple constraints. Additionally, when the feasible region is compact, so that a global optimum is guaranteed to exist, comparing the objective values at all of the multiplier candidates (together with any boundary or non-smooth points) identifies the global optimum.

5. Are there any limitations to using Lagrange multipliers in global optimization?

One limitation of using Lagrange multipliers in global optimization is that the method relies on the differentiability of the objective function and constraints. This means that the method may not be applicable to non-differentiable functions. Additionally, the method may become computationally intensive for complex optimization problems with many constraints.
