Questioning an assumption in calculus of variations

  1. Jul 13, 2015 #1
    When deriving stationary points of a functional defined by a 1-D integral (think Lagrangian mechanics, Fermat's principle, geodesics, etc.) and arriving at the Euler-Lagrange equation, there seems to me to be an unjustified assumption in the derivation. The derivations I have seen start with something along the following lines: assume some function x(t) is the function we are looking for, and let [itex] \bar{x}(t) = x(t) + \eta(t) [/itex] be a nearby path... The derivation then goes on to show the conditions on the original function x(t), namely that it must satisfy the Euler-Lagrange equation.
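
    For reference, here is a sketch of the kind of derivation I mean (notation mine): writing the functional as [itex] S[x] = \int_{t_1}^{t_2} L(x, \dot{x}, t)\, dt [/itex] and perturbing the assumed extremal by [itex] \epsilon \eta(t) [/itex] with [itex] \eta(t_1) = \eta(t_2) = 0 [/itex], one requires

    [tex] \left. \frac{d}{d\epsilon} S[x + \epsilon\eta] \right|_{\epsilon = 0} = \int_{t_1}^{t_2} \left( \frac{\partial L}{\partial x} - \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} \right) \eta(t)\, dt = 0 [/tex]

    (after an integration by parts, using the endpoint conditions on [itex] \eta [/itex]), and since this must hold for every admissible [itex] \eta [/itex], the bracket must vanish, which is the Euler-Lagrange equation.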

    It seems a little odd that we assume, without proof, that this function exists and then work out its properties. How do we know such a function exists? Does it always exist? Are there conditions on this? Isn't it a little shady to be discussing the properties of something we haven't yet proved exists?

    On the other hand, once we complete the derivation, it seems clear to me that a function which satisfies the Euler-Lagrange equation will be a stationary function. I think.

    I'm still left feeling uncomfortable about this, however. Is there some outside proof which shows that this function must exist?

    I should give the caveat that I have only seen this derivation in physics books; I don't own any math books on the calculus of variations.
     
  3. Jul 13, 2015 #2
    Functions that represent reasonable things in physics are real and have reasonable mathematical properties (continuous, derivatives exist, etc.)
     
  4. Jul 13, 2015 #3
    I understand that, but we are asking for something more here: the existence of an extremum. In calculus on [itex] \mathbb{R} [/itex] we can say that a continuous function attains its extreme values on compact subsets of [itex] \mathbb{R} [/itex]. I don't know what the analogue would be here, where I am looking not at [itex] \mathbb{R} [/itex] but at some subset of the space of all continuous functions.
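
    For instance (if I have this right), even the closed unit ball fails to be compact in the most natural topology: in [itex] C([0,1]) [/itex] with the sup norm, the functions

    [tex] f_n(x) = x^n, \qquad \| f_n \|_\infty = 1, [/tex]

    converge pointwise to a discontinuous limit, so no subsequence converges uniformly, and the usual "continuous function on a compact set attains its extremes" argument doesn't transfer over directly.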

    I think I am looking for some topology on the space of functions in which I could hope to find a compact set or something. Maybe there's an easier way, I don't know.
     
  5. Jul 13, 2015 #4
    A minimizing function certainly does not always exist mathematically. I haven't done this type of analysis for a long time, but couldn't you just take, say, [itex] C^1([0,1]) [/itex] as your space of functions with an action functional given by [itex] S(f) = \int_{0}^{1} f(x)\, dx [/itex]? Surely this can't have a local min/max, because you could always remove a tiny portion of the original function and glue in a Gaussian of the appropriate size in a smooth way to make the integral a tiny bit bigger/smaller than any proposed min/max function.
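
    To make that explicit (using a smooth bump in place of the Gaussian): take any smooth [itex] \varphi \geq 0 [/itex] on [itex] [0,1] [/itex] with [itex] \int_0^1 \varphi\, dx = 1 [/itex]. Then for any candidate minimizer [itex] f [/itex],

    [tex] S\!\left( f - \tfrac{1}{n} \varphi \right) = S(f) - \tfrac{1}{n} < S(f) \quad \text{for every } n, [/tex]

    so [itex] f [/itex] was not a minimizer after all, and the same trick with [itex] + \tfrac{1}{n} \varphi [/itex] rules out a maximizer.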

    What the derivation is saying is simply that if a minimizer does exist, then it must satisfy these equations. So we can replace the problem of finding a minimizer with the problem of solving a differential equation, which is usually more tractable. Of course, not every differential equation has a solution, so the nonexistence of a minimizer will manifest itself in the nonexistence of a solution to the differential equation. In the example I gave above, the Euler-Lagrange equation simply becomes 1 = 0, so no solution to the Euler-Lagrange equation exists, as expected.
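
    Explicitly, the integrand there is [itex] L(x, f, f') = f [/itex], so the Euler-Lagrange equation reads

    [tex] \frac{\partial L}{\partial f} - \frac{d}{dx} \frac{\partial L}{\partial f'} = 1 - \frac{d}{dx}(0) = 1, [/tex]

    which can never equal zero, matching the fact that no minimizer exists.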


    If you want to know the conditions under which the existence of a minimizer is guaranteed, you will generally need some functional analysis (although in the one-dimensional case things may be much simpler, I don't really know), and it is much more complicated than a simple compactness argument. For example, I remember a theorem from a PDE course that stated: if you take a reflexive Banach space [itex] B [/itex] (which is some space of functions in calculus of variations applications) with a subset [itex] A \subseteq B [/itex] which is weakly closed in [itex] B [/itex], and if [itex] S : A \to \mathbb{R} [/itex] is a coercive, weakly lower semicontinuous functional, then [itex] S [/itex] is bounded below and attains its minimum on [itex] A [/itex]. I'm not sure why I remember this theorem since I don't even remember the precise definitions of the conditions anymore, but in any case the existence of a minimum is a hard problem, and the answer depends quite a bit on the assumptions on your space of functions and on the properties of the functional you are trying to minimize.
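
    As an illustration of those hypotheses (my own example, so take the details with a grain of salt): on the reflexive Hilbert space [itex] H^1_0(0,1) [/itex], a functional such as

    [tex] S(u) = \int_0^1 \left( |u'(x)|^2 + u(x)^2 - g(x)\, u(x) \right) dx, \qquad g \in L^2(0,1), [/tex]

    is coercive (it grows like [itex] \| u \|_{H^1}^2 [/itex]) and weakly lower semicontinuous, so the theorem gives a minimizer; by contrast, the [itex] \int_0^1 f\, dx [/itex] example above isn't even bounded below.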
     
    Last edited: Jul 13, 2015
  6. Jul 20, 2015 #5
    What Terandol said. The thing is that the Euler-Lagrange equations are a necessary condition. If you look closely, what the theorem actually says is that if a minimizer exists, then it has to satisfy the E-L equations.
     