
A Constrained variational problem

  1. Jun 14, 2018 #1

    joshmccraney

    Gold Member

    Hi PF!

    I'm trying to show that the eigenvalue problem $$L u = \lambda M u$$ is equivalent to solving $$\min_\phi (L\phi,\phi) \quad\text{subject to}\quad (M\phi,\phi) = 1$$
    where ##\phi## is a real function of ##x## and ##L,M## are Hermitian operators and ##\lambda## is the Lagrange multiplier constant.

    Applying Lagrange multipliers to the constrained problem yields a functional
    $$ J = (L\phi,\phi) - \lambda\left[ (M\phi,\phi) - 1\right] \implies\\
    \delta J = \delta (L\phi,\phi) - \delta\left[\lambda ((M\phi,\phi) - 1)\right]\\
    = 2(L\phi,\delta\phi) - \delta\lambda ((M\phi,\phi) - 1) - 2\lambda(M\phi,\delta\phi)\\
    =2(L\phi-\lambda M\phi,\delta\phi) - \delta\lambda ((M\phi,\phi) - 1) = 0.
    $$
    Now I know that once ##\delta\lambda ((M\phi,\phi) - 1) = 0##, the remaining term gives ##L u = \lambda M u##, but why should ##\delta\lambda ((M\phi,\phi) - 1) = 0## hold? Am I missing something? I think ##\delta \lambda = 0## (since ##\lambda## is a constant). What do you think?

    Also, how is the operator ##\delta## above defined? I've been treating it as a derivative, but what's the formal definition? I've read several different websites now but can't find a direct definition.
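    For concreteness, here is a quick finite-dimensional sanity check (a sketch of my own, with random symmetric positive-definite matrices standing in for the Hermitian operators ##L## and ##M##; the constrained minimum should match the smallest generalized eigenvalue up to solver tolerance):

    [code]
# Finite-dimensional sketch: min (L phi, phi) s.t. (M phi, phi) = 1
# versus the generalized eigenproblem L u = lambda M u.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
L = A @ A.T + n * np.eye(n)        # symmetric stand-in for the operator L
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)        # symmetric positive-definite stand-in for M

lam, U = eigh(L, M)                # generalized eigenvalues, ascending

J = lambda p: p @ L @ p            # objective (L phi, phi)
con = {'type': 'eq', 'fun': lambda p: p @ M @ p - 1.0}   # (M phi, phi) = 1
res = minimize(J, rng.standard_normal(n), constraints=[con])

print(lam[0], res.fun)             # the two numbers should agree
    [/code]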
     
  3. Jun 19, 2018 at 7:08 AM #2

    jambaugh

    Science Advisor
    Gold Member

    That second variation term recovers your original constraint.
    [tex]\delta\lambda((M\phi,\phi)-1)=0 \implies (M\phi,\phi)-1 = 0[/tex]
    since [itex]\delta\lambda[/itex] is arbitrary.
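    Explicitly, in a finite-dimensional picture where [itex]\lambda[/itex] is varied as an independent variable,
    [tex]\frac{\partial J}{\partial \lambda} = -\left[(M\phi,\phi) - 1\right],[/tex]
    so stationarity of [itex]J[/itex] with respect to [itex]\lambda[/itex] is precisely the constraint.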
     
  4. Jun 19, 2018 at 7:39 AM #3

    jambaugh

    Science Advisor
    Gold Member

    As to your second question (prepare for a long exposition):

    The variation operator [itex]\delta[/itex] is a differential operator just like [itex]d[/itex], in the sense that [itex]df(x)=f'(x)dx[/itex]. However, it is a differential acting on a functional, which is itself a function whose domain is a function space. As such there are two levels of differentiation, and you have to distinguish them.

    Consider a scalar function [itex]f[/itex] and its "graph", i.e. its use to relate two coordinate variables: [itex]y=f(x)[/itex]. The differential of a function emerges as a local linearization of this relation:
    [tex]dy = d f(x) \equiv f'(x)dx[/tex]
    View this with [itex]x[/itex] and [itex]y[/itex] fixed: you are referencing a second point in the xy-plane at [itex](x+dx,y+dy)[/itex]. This is the "modern" (non-infinitesimal) interpretation of differential variables.

    Now consider the same for a vector function: allow [itex]x[/itex] and [itex]y[/itex] to be vectors in their own spaces. The same relationship holds, except that [itex]dx[/itex] and [itex]dy[/itex] are now vectors and the derivative is now an operator-valued function of the original variables. I will often write the relation in this form:
    [tex]d\mathbf{y} = F'(\mathbf{x})[d\mathbf{x}] [/tex]
    with the brackets indicating the action of a linear operator. If you express your vectors as column matrices, then the general derivative [itex]F'[/itex] takes the form of a matrix of partial derivatives with respect to the components. For the specific case where [itex]F[/itex] is a coordinate transformation, its derivative is the Jacobian matrix... the matrix whose determinant is the Jacobian.
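    For a concrete toy instance (my own example, not tied to any particular transformation):
    [tex]F\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}x_1^2\\ x_1 x_2\end{pmatrix},\qquad F'(\mathbf{x})=\begin{pmatrix}2x_1 & 0\\ x_2 & x_1\end{pmatrix},\qquad d\mathbf{y}=F'(\mathbf{x})[d\mathbf{x}]=\begin{pmatrix}2x_1\,dx_1\\ x_2\,dx_1 + x_1\,dx_2\end{pmatrix}.[/tex]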

    OK, this is conventional differentials. Now for the next stage. Let [itex]F[/itex] be a functional, a function from a space of functions to [itex]\mathbb{R}[/itex]. I'll use curly brackets to indicate evaluation of a functional on a function, and let [itex]\phi[/itex] be my archetypal function. For:
    [tex]\psi = F\{\phi\}[/tex]
    the differential relation (expressing a local linear approximation) is:
    [tex] \delta\psi = \delta F\{\phi\} = F'\{\phi\}[\delta\phi][/tex]
    Note that just as [itex]dx[/itex] and [itex]dy[/itex] were new independent variables expressing deviations from the original variables [itex]x,y[/itex], so too are [itex]\delta\psi[/itex] and [itex]\delta\phi[/itex] new independent variables expressing deviations from the original variables [itex]\psi,\phi[/itex] (a scalar and a function, respectively).

    But those function-valued variables are still subject to the original differential operator [itex]d[/itex]. So [itex] d: \phi(t) \mapsto d\phi(t,dt) = \phi'(t)dt[/itex] is a distinct, still-present mathematical operation and must be distinguished from [itex]\delta\phi[/itex].

    I hope this makes some sense of it. Look up the Gâteaux derivative, which nicely generalizes the concepts of differential and derivative to multivariable and function spaces.
    In short: define the differential before the derivative, then the derivative as the relationship between differential variables:
    [tex] d F(X) \equiv \lim_{h\to 0} \frac{1}{h}\left[ F(X+hdX) - F(X)\right] \equiv F'(X)[dX][/tex]
    It's relatively simple to prove that the limit of the generalized difference quotient is a linear function(al) of the differential variable [itex]dX[/itex]. All you need otherwise is that the domain is a linear space, be it scalars, vectors, functions, or something more.
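    As a numerical illustration of that limit (a sketch with a functional of my own choosing, [itex]F\{\phi\}=\int_0^1 \phi^2\,dx[/itex], whose Gâteaux derivative is [itex]F'\{\phi\}[\delta\phi] = 2\int_0^1 \phi\,\delta\phi\,dx[/itex]):

    [code]
# Difference quotient vs. analytic Gateaux derivative for F{phi} = int phi^2 dx
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
F = lambda p: np.sum(p**2) * dx          # the functional, via a Riemann sum

phi = np.sin(np.pi * x)                  # base "point" in function space
dphi = x * (1.0 - x)                     # an arbitrary variation

h = 1e-6
quotient = (F(phi + h * dphi) - F(phi)) / h       # generalized difference quotient
analytic = 2.0 * np.sum(phi * dphi) * dx          # linear in dphi, as claimed
print(quotient, analytic)                # agree to roughly O(h)
    [/code]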

    This is something I've only come to understand fully in recent years, after many years teaching multivariable calculus.
     
  5. Jun 19, 2018 at 7:56 AM #4

    jambaugh

    Science Advisor
    Gold Member

    One more point. As this functional differential is just a conventional differential on a function space, it obeys the same rules. It commutes with linear operators:
    [itex]\delta L[\phi] = L[\delta\phi][/itex], and more generally obeys the Leibniz rule for multilinear forms:
    [tex]\delta F[\phi_1,\phi_2,\phi_3,\ldots] = F[\delta \phi_1,\phi_2,\phi_3,\ldots] + F[\phi_1,\delta \phi_2,\phi_3,\ldots] + \cdots[/tex]

    This is how you evaluated the differential of your inner product. Note that since a definite integral is a linear functional of its argument, you likewise get:
    [tex] \delta \int_{\Omega}Fdx = \int_{\Omega} \delta F dx[/tex]
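    A quick finite-dimensional check of that product rule on the bilinear form [itex](M\phi,\phi)[/itex] (a stand-in sketch with a random symmetric matrix, not the thread's operators):

    [code]
# Check delta(M phi, phi) = 2 (M phi, delta phi) for symmetric M
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
M = B + B.T                              # symmetric stand-in for M
phi = rng.standard_normal(n)
dphi = rng.standard_normal(n)

h = 1e-7
lhs = ((M @ (phi + h * dphi)) @ (phi + h * dphi) - (M @ phi) @ phi) / h
rhs = 2.0 * (M @ phi) @ dphi
print(lhs, rhs)                          # agree to O(h)
    [/code]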

    Oh, and since [itex]d[/itex] is linear, [itex]\delta [/itex] and [itex]d[/itex] commute:
    [tex]\delta dF = d\delta F[/tex]
    That's a crucial step in deriving Euler-Lagrange equations as you may recall.
     
  6. Jun 19, 2018 at 8:11 AM #5

    jambaugh

    Science Advisor
    Gold Member

    And yet one final point (really this time, I promise!).

    There's a slight difficulty with your method: as I said, [itex]\delta[/itex] is a differential of functions on function space, yet you're also applying it to [itex]\lambda[/itex]. If you treat [itex]\lambda[/itex] as a constant, or as an independent variable on which the functions in your function space may depend, then [itex]\delta \lambda = 0[/itex]. But if instead (and this, I think, is the proper case) your Lagrange multiplier is a scalar dependent variable, depending on your function through some (unknown) functional, then [itex]\delta \lambda[/itex] is properly an independent variation, which is what allows you to recover your constraint.
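    For instance, one standard choice (not fixed anywhere in this thread, so take it as an illustration) is the Rayleigh-quotient functional
    [tex]\lambda\{\phi\} = \frac{(L\phi,\phi)}{(M\phi,\phi)},\qquad \delta\lambda = \frac{2\,(L\phi - \lambda\{\phi\}\, M\phi,\ \delta\phi)}{(M\phi,\phi)},[/tex]
    whose variation is generically nonzero and vanishes for all [itex]\delta\phi[/itex] exactly when [itex]L\phi = \lambda M\phi[/itex].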

    This is one of those nit-picking details that doesn't really affect calculations, but it can be the Achilles heel of an attempted proof, allowing you to miss a pathological counterexample.
     