Questioning an assumption in calculus of variations

SUMMARY

The discussion centers on the assumptions made in deriving the Euler-Lagrange equation within the calculus of variations, particularly regarding the existence of a function x(t) that satisfies the equation. Participants express concerns about the lack of proof for the existence of such functions and question the validity of discussing their properties without establishing existence first. The conversation highlights the need for a deeper understanding of the conditions under which minimizers exist, referencing functional analysis and theorems related to reflexive Banach spaces and weakly closed subsets.

PREREQUISITES
  • Understanding of the Euler-Lagrange equation in calculus of variations.
  • Familiarity with functional analysis concepts, particularly reflexive Banach spaces.
  • Knowledge of compactness arguments in mathematical analysis.
  • Basic principles of differential equations and their solutions.
NEXT STEPS
  • Research the conditions for the existence of minimizers in calculus of variations.
  • Study the properties of reflexive Banach spaces and their applications in functional analysis.
  • Explore the implications of weakly closed subsets in the context of optimization problems.
  • Learn about coercive and weakly lower semicontinuous functionals in calculus of variations.
USEFUL FOR

Mathematicians, physicists, and students of advanced calculus or functional analysis who are exploring the foundations and implications of the calculus of variations and the Euler-Lagrange equation.

hideelo
When deriving stationary points of a function defined by a 1-D integral (think Lagrangian mechanics, Fermat's principle, geodesics, etc.) and arriving at the Euler-Lagrange equation, there seems to me to be an unjustified assumption in the derivation. The derivations I have seen start with something along the following lines: assume some function $x(t)$ is the function we are looking for, and let $x(t) + \eta(t)$ be a nearby path... The derivation then goes on to show the condition on the original function $x(t)$, namely that it must satisfy the Euler-Lagrange equation.
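For concreteness, the step I mean is the usual one: assuming the action has the form $S[x]=\int_{t_0}^{t_1} L(t, x, \dot x)\,dt$ with $L$ smooth and the perturbation $\eta$ vanishing at the endpoints, one computes
$$\left.\frac{d}{d\epsilon}\right|_{\epsilon=0} S[x+\epsilon\eta]
= \int_{t_0}^{t_1}\left(\frac{\partial L}{\partial x}\,\eta + \frac{\partial L}{\partial \dot x}\,\dot\eta\right)dt
= \int_{t_0}^{t_1}\left(\frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot x}\right)\eta\,dt = 0,$$
and since this must hold for every admissible $\eta$, the bracket must vanish, which is the Euler-Lagrange equation.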

It seems a little odd that we assume, without proof, that this function exists and then sort out its properties. How do we know such a function exists? Does it always exist? Are there conditions on this? Isn't it a little shady to be discussing properties of something if we haven't proved yet that it exists?

On the other hand, once we complete the derivation, it seems clear to me that a function which satisfies the Euler-Lagrange equation will be a stationary function. I think.

I'm still left feeling uncomfortable about this, however. Is there some outside proof which shows that this function must exist?

I should give the caveat that I have only seen this derivation in physics books; I don't own any math books on the calculus of variations.
 
Functions that represent reasonable things in physics are real and have reasonable mathematical properties (continuous, derivatives exist, etc.)
 
Dr. Courtney said:
Functions that represent reasonable things in physics are real and have reasonable mathematical properties (continuous, derivatives exist, etc.)

I understand that, but we are asking for something more here: the existence of an extremum. In calculus on $\mathbb{R}$ we can say that a continuous function attains its extreme values on compact subsets of $\mathbb{R}$. I don't know what the analogue would be here, where I am looking not at $\mathbb{R}$ but at some subset of all continuous functions.
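(One illustration of why this is delicate, if I have the example right: in $C([0,1])$ with the sup norm, the closed unit ball is not compact; the functions $f_n(x) = x^n$ all lie in it but have no uniformly convergent subsequence, so the "continuous function on a compact set" argument doesn't transfer directly.)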

I think I am looking for some topology on the space of functions and hope to see some compact set or something. Maybe there is an easier way, I don't know.
 
hideelo said:
It seems a little odd that we assume, without proof, that this function exists and then sort out its properties. How do we know such a function exists? Does it always exist? Are there conditions on this? Isn't it a little shady to be discussing properties of something if we haven't proved yet that it exists?

A minimizing function certainly does not always exist mathematically. I haven't done this type of analysis for a long time, but couldn't you just take, say, $C^1([0,1])$ as your space of functions with the action functional $S(f)=\int_{0}^{1} f(x)\,dx$? Surely this can't have a local min/max, because you could always remove a tiny portion of the original function and glue in a Gaussian bump of the appropriate size in a continuous way, making the integral a tiny bit bigger/smaller than that of any proposed min/max function.

What the derivation is saying is simply that if the minimizer does exist, then it must satisfy these equations. So we can replace the problem of finding a minimizer with the problem of solving a differential equation, which is usually more tractable. Of course, not every differential equation has a solution, so the nonexistence of a minimizer will manifest itself in the nonexistence of a solution to the differential equation. In the example I gave above, the Euler-Lagrange equation simply becomes $1=0$, so no solution to the Euler-Lagrange equation exists, as expected.
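Spelling that last step out: for $S(f)=\int_{0}^{1} f(x)\,dx$ the integrand is $L(x, f, f') = f$, so the Euler-Lagrange equation
$$\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0$$
reduces to $1 - 0 = 0$, which is impossible; the equation has no solution, consistent with the absence of a minimizer.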
hideelo said:
I think I am looking for some topology on the space of functions and hope to see some compact set or something. Maybe there is an easier way, I don't know.

If you want to know the conditions under which the existence of a minimizer is guaranteed, generally you will need some functional analysis (although in the one-dimensional case things may be much simpler, I don't really know), and it is much more complicated than a simple compactness argument. For example, I remember a theorem from a PDE course that stated: if you take a reflexive Banach space $B$ (which is some space of functions in calculus-of-variations applications) with a subset $A \subseteq B$ which is weakly closed in $B$, and if $S:A\to \mathbb{R}$ is a coercive, weakly lower semicontinuous functional, then it is bounded below and achieves its minimum on $A$. I'm not sure why I remember this theorem, since I don't even remember the precise definitions of the conditions anymore, but in any case the existence of the minimum is a hard problem, and the answer depends quite a bit on the assumptions on your space of functions and on the properties of the functional you are trying to minimize.
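For reference, the result being recalled is (a version of) the direct method of the calculus of variations; the usual definitions are roughly these, though the exact hypotheses vary by source: $S$ is coercive if $S(u)\to\infty$ whenever $\|u\|\to\infty$ in $A$, and weakly lower semicontinuous if $u_n \rightharpoonup u$ implies $S(u) \le \liminf_n S(u_n)$. The proof sketch: a minimizing sequence is bounded by coercivity, reflexivity gives a weakly convergent subsequence, weak closedness keeps the limit in $A$, and weak lower semicontinuity forces the limit to attain the infimum.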
 
What Terandol said. The point is that the Euler-Lagrange equations are a necessary condition. If you look closely, what the theorem actually says is that if a minimizer exists, then it has to satisfy the E-L equations.
 
