Optimizing fractions and Lagrange multipliers

  • #1
member 428835
Hi PF!

When minimizing some fraction ##f(x)/g(x)##, can we use Lagrange multipliers and say we are trying to optimize ##f## subject to the constraint ##g=1##?

Thanks
 
  • #2
No, I don't think so, if I understand your question correctly. Why would that be true? Consider ##f(x) = e^{-x}## and ##g(x) = 1 + x^2##.
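A quick numerical sketch (Python; the sample grid is arbitrary) shows how far apart the two problems are for this pair:

```python
import numpy as np

f = lambda x: np.exp(-x)   # numerator
g = lambda x: 1 + x**2     # denominator

# Constrained problem: g(x) = 1 forces x = 0, where f(0) = 1.
print(f(0.0))                    # 1.0

# Unconstrained quotient: f/g -> 0 as x -> infinity, so inf f/g = 0, not 1.
x = np.linspace(0.0, 50.0, 1001)
print(np.min(f(x) / g(x)))       # ~7.7e-26, nowhere near 1
```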
 
  • #3
Yeah, that's a good counterexample. I am curious because I am reading about the Ritz method for approximating eigenvalues. My book says: consider the eigenvalue problem ##Lu=\lambda Mu##, where ##L## and ##M## are linear operators, ##u## is a function, and ##\lambda## is the eigenvalue. Then the smallest eigenvalue ##\lambda_1## is given by $$\lambda_1=\min\frac{(Lu,u)}{(Mu,u)}$$ or equivalently $$\lambda_1=\min\limits_{(Mu,u)=1}(Lu,u)$$ where ##(f,g)=\int_a^bfg\,dx##.
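As a finite-dimensional sanity check (a sketch of mine, with symmetric matrices standing in for ##L## and ##M##, and ##M## positive definite), scipy's generalized eigensolver confirms the identity:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)); L = A + A.T                  # symmetric stand-in for L
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)  # symmetric positive definite stand-in for M

# Generalized eigenpairs of L u = lambda M u, eigenvalues ascending.
w, V = eigh(L, M)
u = V[:, 0]                             # eigenvector of the smallest eigenvalue

# The Rayleigh quotient attains its minimum w[0] at u.
print(w[0], (u @ L @ u) / (u @ M @ u))  # equal up to rounding
```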
 
  • #4
Ah ok, yes, but this problem has a special structure. Let's write
$$
\lambda_1 := \min_{(Mu,u) \neq 0}\frac{(Lu,u)}{(Mu,u)}
$$
and
$$
\mu_1 := \min_{(Mu,u)=1}(Lu,u)
$$
Your book now claims that ##\lambda_1 = \mu_1##. How would you prove this directly (i.e. no Lagrange multipliers or something like that)?
 
  • #5
Krylov said:
Ah ok, yes, but this problem has a special structure. Let's write
$$
\lambda_1 := \min_{(Mu,u) \neq 0}\frac{(Lu,u)}{(Mu,u)}
$$
and
$$
\mu_1 := \min_{(Mu,u)=1}(Lu,u)
$$
Your book now claims that ##\lambda_1 = \mu_1##. How would you prove this directly (i.e. no Lagrange multipliers or something like that)?
Take a derivative of what we're trying to minimize with respect to... hmm, well, now I'm not so sure. Any ideas?
 
  • #6
joshmccraney said:
Any ideas?
Yes, but I would like you to puzzle a little bit, too.

You have not given much of the precise context. Let's assume that
  • ##H## is a real Hilbert space with inner product ##(\cdot,\cdot)##,
  • ##L## and ##M## are operators with domains ##D(L) = D(M) = H##,
  • ##M## is a symmetric and positive operator, i.e. ##0 \le (Mu, u)## for all ##u \in H##.
To be safe, I also replace your minima by infima. Let's make the minimization problems a bit more explicit using set notation. Write
$$
\Lambda_1 := \left\{\frac{(Lu,u)}{(Mu,u)}\,:\, u \in H,\, (Mu,u) > 0 \right\}, \qquad M_1 := \left\{(Lv,v)\,:\, v \in H,\, (Mv,v) = 1 \right\},
$$
so ##\lambda_1 = \inf{\Lambda_1}## and ##\mu_1 = \inf{M_1}##. (Note that the positivity of ##M## allowed me to replace the condition ##(Mu,u) \neq 0## by ##(Mu,u) > 0## in ##\Lambda_1##.)

Now observe that ##M_1 \subseteq \Lambda_1##. (This already gives ##\lambda_1 \le \mu_1##.) With a little bit (but not much) more work, you also show the reverse inclusion:
$$
M_1 \supseteq \Lambda_1 \qquad (\ast)
$$
Once this is done, you have ##\Lambda_1 = M_1##, so ##\lambda_1 = \mu_1## follows. If you like, try to deduce ##(\ast)## yourself.

(The essential property you will use, is that both the numerator and the denominator of the original function
$$
u \mapsto \frac{(Lu,u)}{(Mu,u)}
$$
are homogeneous of degree two, so any scaling of ##u## does not change the function value.)
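(For a concrete picture of this homogeneity, here is a finite-dimensional sketch; the matrices are arbitrary symmetric stand-ins for the operators.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); L = A + A.T   # symmetric stand-in for L
B = rng.standard_normal((n, n)); M = B @ B.T   # symmetric, positive semidefinite stand-in for M

quot = lambda u: (u @ L @ u) / (u @ M @ u)

u = rng.standard_normal(n)
for t in (0.5, 2.0, -3.0):
    # (L(tu), tu) = t^2 (Lu, u) and (M(tu), tu) = t^2 (Mu, u), so the quotient is unchanged.
    assert np.isclose(quot(t * u), quot(u))
```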
 
  • #7
Krylov said:
Now observe that ##M_1 \subseteq \Lambda_1##. (This already gives ##\lambda_1 \le \mu_1##.) With a little bit (but not much) more work, you also show the reverse inclusion:
$$
M_1 \supseteq \Lambda_1 \qquad (\ast)
$$
Once this is done, you have ##\Lambda_1 = M_1##, so ##\lambda_1 = \mu_1## follows. If you like, try to deduce ##(\ast)## yourself.

(The essential property you will use, is that both the numerator and the denominator of the original function
$$
u \mapsto \frac{(Lu,u)}{(Mu,u)}
$$
are homogeneous of degree two, so any scaling of ##u## does not change the function value.)
Ok, so I'm thinking if the denominator was ##(u,u)## (a less general case) then all we would have to do is let ##v = u/||u||##. But with the ##M## operator present, I'm unsure how to proceed...I'll think more on this, but feel free to give the spoiler.

On a separate note, I'm interested in this entire process of finding the lowest eigenvalues because I am trying to solve ##Lu=\lambda Mu##. If we instead look at the simpler problem ##Lu=\lambda u##, I know the lowest eigenvalue is $$\lambda_1=\inf \frac{(Lu,u)}{(u,u)}.$$ Letting $$u=\sum_{i=1}^N a_i\psi_i,$$ where the ##\psi_i## are orthonormal basis functions that satisfy the boundary conditions, we deduce
$$(Lu,u) = \sum_{i,k=1}^N F_{ik}a_ia_k, \qquad F_{ik}\equiv (L\psi_i,\psi_k).
$$
To find the infimum from above, we can reduce the minimization problem to minimizing ##(Lu,u)## subject to the constraint ##(u,u)=1##. This suggests Lagrange multipliers. Letting ##\Lambda## be the Lagrange multiplier and assuming ##L## is symmetric (so ##F_{jk}=F_{kj}##), take $$\frac{\partial}{\partial a_j}\left( \sum_{i,k=1}^N F_{ik}a_ia_k - \Lambda \sum_{i,k=1}^N a_ia_k(\psi_i,\psi_k)\right)=0\implies\\
\frac{\partial}{\partial a_j}\left( \sum_{i,k=1}^N F_{ik}a_ia_k - \Lambda \sum_{i=1}^N a_i^2\right)=0\implies\\
\sum_{k=1}^N F_{jk}a_k - \Lambda a_j=0 \quad\text{(after canceling a common factor of 2)}\implies\\
\det(F_{jk}-\Lambda\delta_{jk})=0.
$$
Similarly, when solving the problem ##Lu=\lambda M u##, if we let ##(Mu,u) = \sum_{i,k=1}^N D_{ik}a_ia_k##, with ##D_{ik}\equiv (M\psi_i,\psi_k)##, then the solution should be $$\det(F_{jk}-\Lambda D_{jk})=0,$$ where ##\lambda_1## is the smallest root ##\Lambda##. Does this look correct to you?

I'm under the impression this technique is called the Rayleigh-Ritz variational approach.
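Here is a small numerical sketch of that claim (matrices of my own making, with ##F## symmetric and ##D## symmetric positive definite): the roots of ##\det(F-\Lambda D)=0## are exactly the generalized eigenvalues, and the smallest one bounds the discretized quotient from below.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
N = 4
A = rng.standard_normal((N, N)); F = A + A.T                  # stand-in for F_ik = (L psi_i, psi_k)
B = rng.standard_normal((N, N)); D = B @ B.T + N * np.eye(N)  # stand-in for D_ik = (M psi_i, psi_k)

w = eigh(F, D, eigvals_only=True)          # roots of det(F - Lambda D) = 0, ascending
print(np.linalg.det(F - w[0] * D))         # ~0: the determinant vanishes at the smallest root

# The smallest root bounds the discretized Rayleigh quotient from below.
a = rng.standard_normal(N)
print(w[0] <= (a @ F @ a) / (a @ D @ a))   # True
```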
 
  • #8
joshmccraney said:
Ok, so I'm thinking if the denominator was ##(u,u)## (a less general case) then all we would have to do is let ##v = u/||u||##. But with the ##M## operator present, I'm unsure how to proceed...I'll think more on this, but feel free to give the spoiler.
The symmetry and - in particular - the positivity of the operator ##M## imply that ##(u_1,u_2) \mapsto (Mu_1,u_2)## defines a bilinear form on ##H \times H## that has all the properties of an inner product, with the exception that ##(Mu,u) = 0## may not imply ##u = 0##. In any case, for the problem at hand you can just let
$$
v =\frac{1}{\sqrt{(Mu,u)}}u
$$
(In other words, you scale by ##(Mu,u)^{-\frac{1}{2}}## instead of ##(u,u)^{-\frac{1}{2}}##.)
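Concretely, in finite dimensions (a sketch; the matrices below are arbitrary symmetric stand-ins, with ##M## taken positive definite):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)); L = A + A.T              # symmetric stand-in for L
B = rng.standard_normal((n, n)); M = B @ B.T + np.eye(n)  # symmetric positive definite stand-in for M

u = rng.standard_normal(n)
v = u / np.sqrt(u @ M @ u)                   # scale by (Mu, u)^(-1/2)

print(v @ M @ v)                             # 1.0: v satisfies the constraint (Mv, v) = 1
print((u @ L @ u) / (u @ M @ u), v @ L @ v)  # equal: the quotient is unchanged by the scaling
```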
 
  • #9
Thanks!
 
  • #10
I'd say, if ##f,g## are differentiable, you can use the condition ##f'g-fg'=0## at points where ##g(x)\neq 0## (so that ##f/g## is defined), since extrema are attained at critical points and ##(f/g)' = (f'g-fg')/g^2##.
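For instance, with the functions from post #2 (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x)        # functions from post #2
g = 1 + x**2

# Critical points of f/g solve f'g - fg' = 0 (where g != 0).
crit = sp.solve(sp.diff(f, x) * g - f * sp.diff(g, x), x)
print(crit)           # [-1]

# Caveat: (f/g)' = -exp(-x)*(x + 1)**2/(x**2 + 1)**2 <= 0 everywhere, so f/g is
# monotone decreasing; x = -1 is a critical point but not an extremum, and
# inf f/g = 0 is only approached as x -> oo.
print(sp.simplify(sp.diff(f / g, x)))
```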
 

1. What is the purpose of optimizing fractions using Lagrange multipliers?

The purpose is to find the maximum or minimum value of a function subject to given constraints. The method incorporates the constraints directly into the optimization, replacing a constrained problem by a system of equations for the stationary points.

2. How do Lagrange multipliers work in optimizing fractions?

The method forms a new function, known as the Lagrangian, which combines the objective ##f## with a constraint ##g=0## via a Lagrange multiplier ##\lambda##: ##\mathcal{L} = f - \lambda g##. Setting all partial derivatives of ##\mathcal{L}## to zero, together with the constraint itself, yields the candidate extrema of ##f## subject to ##g=0##, as the sketch below illustrates.
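As a minimal illustration (a sympy sketch; the objective ##f(x,y)=xy## and the constraint ##x+y=1## are made up for this example):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x * y           # made-up objective
g = x + y - 1       # made-up constraint, written as g = 0

Lagrangian = f - lam * g
eqs = [sp.diff(Lagrangian, x), sp.diff(Lagrangian, y), g]
print(sp.solve(eqs, [x, y, lam], dict=True))
# [{lambda: 1/2, x: 1/2, y: 1/2}] -> constrained maximum f = 1/4 at x = y = 1/2
```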

3. What are the benefits of using Lagrange multipliers in optimizing fractions?

The main benefit is that constraints enter the optimization directly, so there is no need to solve a constraint for one variable and substitute it away. The method also extends naturally to problems with several variables and several constraints, which can be difficult or impossible to handle by substitution.

4. Are there any limitations to using Lagrange multipliers in optimizing fractions?

One limitation is that the method only locates stationary points, which may be local extrema or saddle points rather than the global optimum, so the candidates still have to be compared. It can also become computationally expensive for problems with a large number of variables and constraints.

5. Can Lagrange multipliers be applied to any type of optimization problem?

They apply to a wide range of problems with differentiable (linear or nonlinear) objectives and equality constraints; inequality constraints require the Karush-Kuhn-Tucker generalization. The method tends to be most effective when the objective and constraints are convex. It is important to consider the specific problem and its constraints when deciding whether to use Lagrange multipliers.
