Bad proof in Fomin's Calculus of Variations?

SUMMARY

The discussion centers on a proof from Fomin's "Calculus of Variations" of the lemma: if \(\alpha(x)\) is continuous on \([a,b]\) and \(\int_a^b \alpha(x) h'(x)\,dx = 0\) for all \(h(x) \in D_1(a,b)\) with \(h(a)=h(b)=0\), then \(\alpha(x)=c\) on \([a,b]\) for some constant \(c\). The proof is validated by noting that if the condition holds for all admissible functions \(h(x)\), it must in particular hold for one specific \(h(x)\) constructed from \(\alpha(x)\) and a suitably chosen constant \(c\). The key step is that \(\int_a^b (\alpha(x)-c)^2\,dx = 0\) forces \(\alpha(x) = c\), since the integrand is continuous and nonnegative, confirming the lemma's validity.

PREREQUISITES
  • Understanding of continuous functions and their properties
  • Familiarity with the concept of integrals and integration by parts
  • Knowledge of the space of once differentiable functions, denoted as \(D_1(a,b)\)
  • Basic principles of the calculus of variations
NEXT STEPS
  • Study the properties of continuous functions in the context of calculus of variations
  • Explore integration techniques, particularly integration by parts and its applications
  • Investigate the implications of the fundamental lemma of calculus of variations
  • Examine examples of proofs in Fomin's "Calculus of Variations" for deeper understanding
USEFUL FOR

Mathematicians, students of advanced calculus, and researchers in the field of calculus of variations seeking to deepen their understanding of proofs and lemmas in this area.

genericusrnme
I was just reading through the first few pages of Fomin's Calculus of Variations and I came across this proof, which really doesn't seem to prove the lemma (I may be missing something, though). Could someone give me a second opinion and perhaps some clarification?
It goes like this:

If ##\alpha(x)## is continuous on ##[a,b]## and if ##\int_a^b \alpha(x) h'(x)\,dx = 0## for every function ##h(x)\in D_1(a,b)## such that ##h(a)=h(b)=0##, then ##\alpha(x)=c## for all ##x## in ##[a,b]##, where ##c## is a constant.
Here ##D_1(a,b)## is the space of all once differentiable functions.

Now, here's the given proof:

Let ##c## be the constant defined by the condition ##\int_a^b (\alpha(x) - c)\,dx = 0##, and let ##h(x) = \int_a^x (\alpha(\xi) - c)\,d\xi##, so that ##h(x)## automatically belongs to ##D_1(a,b)## and satisfies the conditions ##h(a)=h(b)=0##. Then on the one hand,
##\int_a^b (\alpha(x) - c)\,h'(x)\,dx = \int_a^b \alpha(x)\,h'(x)\,dx - c\,(h(b)-h(a)) = 0,##
while on the other hand, since ##h'(x) = \alpha(x) - c##,
##\int_a^b (\alpha(x)-c)\,h'(x)\,dx = \int_a^b (\alpha(x)-c)^2\,dx.##
It follows that ##\alpha(x) - c = 0## for all ##x## in ##[a,b]##.

It just seems to me that this only proves the lemma for one specific choice of ##h(x)##, and that we've used the 'then' of the lemma in its own proof. Am I wrong in thinking this?

Thanks in advance! :biggrin:
 
The proof looks OK to me. The point is that if the condition is true for EVERY function ##h(x)##, then it must be true for one PARTICULAR function ##h(x)## that you can invent. You pick a function such that the integral becomes ##\int_a^b (\alpha(x)-c)^2\,dx \ge 0##, and a continuous, nonnegative integrand can only integrate to 0 if ##\alpha(x) = c## everywhere on ##[a,b]##.

You will find the same type of argument quite often in the calculus of variations: if something is true for all functions meeting some condition, it must be true for the "worst case" that you can invent.

FWIW, the converse argument is trivial: if ##\alpha(x) = c##, then ##\int_a^b \alpha(x)\,h'(x)\,dx = c\,(h(b) - h(a)) = 0##.
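If it helps, the construction in the quoted proof can also be sanity-checked numerically. A minimal sketch: the test function ##\alpha(x)=\cos x##, the interval ##[0,3]##, and the grid size are my own choices for illustration, not anything from the book.

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoid rule for samples y on grid x."""
    return float(np.sum((y[:-1] + y[1:]) / 2.0 * np.diff(x)))

# A sample continuous, non-constant alpha (chosen here for illustration)
a, b = 0.0, 3.0
x = np.linspace(a, b, 200_001)
alpha = np.cos(x)

# c is fixed by the condition int_a^b (alpha(x) - c) dx = 0,
# i.e. c is the mean value of alpha on [a, b].
c = trapz(alpha, x) / (b - a)

# h(x) = int_a^x (alpha(xi) - c) d(xi), via cumulative trapezoid sums.
# h(a) = 0 by construction and h(b) = 0 by the choice of c.
steps = (alpha[:-1] + alpha[1:] - 2.0 * c) / 2.0 * np.diff(x)
h = np.concatenate(([0.0], np.cumsum(steps)))

# "One hand": int alpha h' dx - c (h(b) - h(a)), with h' recovered
# numerically from h rather than taken from the formula.
hp = np.gradient(h, x)
one_hand = trapz(alpha * hp, x) - c * (h[-1] - h[0])

# "Other hand": the same integral rewritten as int (alpha - c)^2 dx.
other_hand = trapz((alpha - c) ** 2, x)

print(abs(h[-1]) < 1e-6)               # h(b) = 0, as the proof requires
print(abs(one_hand - other_hand) < 1e-6)  # the two hands agree
print(other_hand > 0)                  # positive, since alpha is not constant
```

The last line is the contrapositive in action: because this ##\alpha## is not constant, ##\int_a^b (\alpha - c)^2\,dx > 0##, so the hypothesis ##\int_a^b \alpha\,h'\,dx = 0## already fails for this one particular ##h##.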
 
