sponsoredwalk:
The following theorem is called "the 'basic lemma' of the calculus of variations" on page 1
of this book:
"If f is a continuous function in [a,b] s.t. ∫_a^b η(x)f(x) dx = 0 for an arbitrary function η
continuous in [a,b] subject to the condition that η(a) = η(b) = 0 then f(x) = 0 in [a,b]"
If you read the proof you'll see they go ahead & specify the function η as (x - x₁)(x₂ - x)
on a subinterval [x₁, x₂] (and zero elsewhere) & prove the claim using that, but technically
does that not just prove the theorem for this one function, not for an arbitrary function?
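(For concreteness, here's a quick numerical sketch of that choice, with a made-up f of my own: the book's η is non-negative on [x₁, x₂] and vanishes outside it, so wherever f is strictly positive the integral ∫ η f comes out strictly positive rather than zero.)

```python
import numpy as np

# Made-up example: f(x) = x - 0.3 is continuous and positive on [0.4, 0.6].
# The book's test function eta(x) = (x - x1)*(x2 - x) on [x1, x2], zero
# elsewhere, is non-negative there and vanishes at the endpoints of [a, b].
a, b = 0.0, 1.0
x1, x2 = 0.4, 0.6
x = np.linspace(a, b, 100001)
dx = x[1] - x[0]

f = x - 0.3
eta = np.where((x >= x1) & (x <= x2), (x - x1) * (x2 - x), 0.0)

# Riemann-sum approximation of the integral of eta*f over [a, b]:
# the integrand is >= 0 everywhere and > 0 on (x1, x2), so the
# integral is strictly positive, not zero.
integral = np.sum(eta * f) * dx
print(integral > 0)   # -> True
```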
Also, if we choose η to be the zero function, so that η(x) = 0 on [a,b], then the hypothesis
holds trivially, yet f need not equal zero on [a,b]. Surely I'm missing something?
Assuming I'm right, we must modify the hypothesis so that η is non-zero at least once on
[a,b], and choose such an η. Could it then be considered a proof by way of contradiction to
take advantage of the limit-of-a-sum formulation of the integral & argue with an arbitrary η:
Using |∑η(xᵢ)f(xᵢ)δxᵢ - 0| < ε we see that this reduces to |∑η(xᵢ)f(xᵢ)δxᵢ| < ε.
As we've assumed η can be arbitrary, if it's non-zero at least once on [a,b] then the sum
∑η(xᵢ)f(xᵢ)δxᵢ will take some definite non-zero value, since f is assumed to be non-zero on
[a,b]. But then there exists an ε ≤ |∑η(xᵢ)f(xᵢ)δxᵢ|, contradicting our original assumption.
But this raises another concern: f could be non-zero at every point of [a,b] except at the
one point cᵢ where η takes the non-zero value η(cᵢ) we're forced to assume exists as above,
i.e. f(cᵢ) = 0. What I mean is:
∑η(xᵢ)f(xᵢ)δxᵢ = η(x₁)f(x₁)δx₁ + η(x₂)f(x₂)δx₂ + ... = 0·f(x₁)δx₁ + 0·f(x₂)δx₂ + ... + η(cᵢ)·0·δxᵢ + ...
Here you'd satisfy the hypothesis by having the sum equal to zero, but the conclusion
doesn't follow! As far as I can see, the flaw lies in the inclusion of the phrase
"arbitrary function".
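To put a concrete example behind my worry that one fixed η can't be enough, here's a check I cooked up myself: a continuous f that is non-zero except at a single point, yet whose integral against one particular admissible η vanishes, because the integrand happens to be odd.

```python
import numpy as np

# My own counterexample to "one eta suffices": on [-1, 1], take
# f(x) = x (continuous, non-zero except at x = 0) and the single
# test function eta(x) = 1 - x^2, which satisfies eta(-1) = eta(1) = 0.
# The integrand x*(1 - x^2) is odd, so its integral over [-1, 1] is 0
# even though f is not the zero function. So the hypothesis really has
# to hold for EVERY admissible eta, not just for one of them.
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
f = x
eta = 1.0 - x**2

integral = np.sum(eta * f) * dx   # Riemann-sum approximation
print(abs(integral) < 1e-10)      # -> True: the integral vanishes
```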
I really feel I must be making a basic, basic error in my interpretation of this, frankly, but
as it stands I just don't see where I'm wrong. Please let me know!