Bad proof in Fomin's Calculus of Variations?

genericusrnme
I was just reading through the first few pages of Fomin's Calculus of Variations and I came across this proof, which really doesn't seem to prove the lemma (I may be missing something, though). Could someone give me a second opinion and perhaps some clarification?
It goes like this:

If ##\alpha(x)## is continuous in ##[a,b]## and if ##\int_a^b \alpha(x) h'(x)\,dx = 0## for every function ##h(x) \in D_1(a,b)## such that ##h(a) = h(b) = 0##, then ##\alpha(x) = c## for all ##x## in ##[a,b]##, where ##c## is a constant.
Here ##D_1(a,b)## is the space of continuously differentiable functions on ##[a,b]##.

Now, here's the given proof:

Let ##c## be the constant defined by the condition ##\int_a^b (\alpha(x) - c)\,dx = 0## and let ##h(x) = \int_a^x (\alpha(\xi) - c)\,d\xi##, so that ##h(x)## automatically belongs to ##D_1(a,b)## and satisfies the conditions ##h(a) = h(b) = 0##. Then, on the one hand,
$$\int_a^b (\alpha(x) - c)\,h'(x)\,dx = \int_a^b \alpha(x)h'(x)\,dx - c\,(h(b) - h(a)) = 0,$$
while on the other hand,
$$\int_a^b (\alpha(x) - c)\,h'(x)\,dx = \int_a^b (\alpha(x) - c)^2\,dx.$$
It follows that ##\alpha(x) - c = 0## for all ##x## in ##[a,b]##.

It just seems to me that this only proves the lemma for one specific choice of ##h(x)##, and that we've used the 'then' part of the lemma in its own proof. Am I wrong in thinking this?

Thanks in advance! :biggrin:
 
The proof looks OK to me. The point is that if the condition is true for EVERY function ##h(x)##, then it must be true for one PARTICULAR function ##h(x)## that you can invent. You pick a function such that the integral becomes ##\int_a^b (\alpha(x)-c)^2\,dx \ge 0##, and it can only be 0 if ##\alpha(x) = c##.
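To spell out that last step, which the book compresses: by construction ##h'(x) = \alpha(x) - c##, so putting the two evaluations together gives
$$0 = \int_a^b (\alpha(x) - c)\,h'(x)\,dx = \int_a^b (\alpha(x) - c)^2\,dx.$$
The integrand is continuous and nonnegative, so if it were positive at any point it would stay positive on a small interval around that point, and the integral would be strictly positive. Hence ##(\alpha(x) - c)^2 = 0## everywhere, i.e. ##\alpha(x) = c## on all of ##[a,b]##.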

You will find the same type of argument quite often in the calculus of variations: if something is true for all functions meeting some condition, it must be true for the "worst case" that you can invent.

FWIW, the converse direction is trivial: if ##\alpha(x) = c##, then ##\int_a^b \alpha(x)\,h'(x)\,dx = c\,(h(b) - h(a)) = 0##.
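If numbers help, here's a quick numerical sketch of the same construction (the example ##\alpha(x) = x## on ##[0,1]## is just one I made up, not from the book): for this non-constant ##\alpha##, the particular ##h## from the proof makes the integral strictly positive, so this ##\alpha## fails the hypothesis, exactly as the lemma requires.

```python
import numpy as np

# Sanity check of the proof's construction on a concrete, non-constant alpha.
# Hypothetical example: alpha(x) = x on [a, b] = [0, 1].
a, b = 0.0, 1.0
x = np.linspace(a, b, 100_001)
dx = x[1] - x[0]
alpha = x

def integrate(f):
    # simple Riemann sum; accurate enough on this dense grid
    return np.sum(f) * dx

# c is defined by the condition that (alpha - c) integrates to zero over [a, b]
c = integrate(alpha) / (b - a)        # here c = 1/2

# h(x) = integral from a to x of (alpha - c), so h(a) = h(b) = 0
# and h'(x) = alpha(x) - c
h_prime = alpha - c

lhs = integrate(alpha * h_prime)      # the hypothesis integral with this h
rhs = integrate((alpha - c) ** 2)     # the proof's second evaluation

print(lhs, rhs)  # both ~ 1/12 > 0: this non-constant alpha violates the hypothesis
```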
 
