Proof for part 2 of fundamental theorem of calculus

Bipolarity
The proof my book gives for the 2nd part of the FTC is a little hard for me to understand, so I wrote this proof myself (it is not from my book) and I'm wondering whether it is valid.

\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)

So suppose that the antiderivative of f(t) is F(t).

Then

\frac{d}{dx}\int^{x}_{0}f(t) \ dt = \frac{d}{dx}(F(x)-F(0)) = F'(x) = f(x)

Is this a valid proof? If not, where am I wrong?

Thanks for your time.

BiP
 
Could you perhaps state what you mean with the first and second part of the FTC?
 
If f(t) is integrable over I, then

\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)

That is FTC Part II. My question is if the proof I gave above can be considered rigorous.

BiP
 
Bipolarity said:
If f(t) is integrable over I, then

\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)

That is FTC Part II. My question is if the proof I gave above can be considered rigorous.

BiP

A question that could be asked is why an antiderivative F should exist.
 
f(x) is integrable over I, so why shouldn't an antiderivative exist?

BiP
 
Bipolarity said:
f(x) is integrable over I, so why shouldn't an antiderivative exist?

BiP

That's not really rigorous reasoning, is it? You have to prove that an antiderivative exists.

(in fact, that an antiderivative exists is EXACTLY what the FTC says)
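For what it's worth, here is the standard argument (essentially Part I) that a continuous f does have an antiderivative; the notation below is mine. Define

F(x) = \int^{x}_{a}f(t) \ dt

Then for h \neq 0,

\frac{F(x+h)-F(x)}{h} = \frac{1}{h}\int^{x+h}_{x}f(t) \ dt

and since f is continuous at x, this average of f over [x, x+h] tends to f(x) as h \to 0, so F'(x) = f(x). Without continuity, the limit need not exist.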
 
I'm sorry I don't think I was clear enough. I am referring to part II of the FTC, not part I.
According to http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus:

Second part

This part is sometimes referred to as the Second Fundamental Theorem of Calculus[7] or the Newton–Leibniz Axiom.

Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b], f(x) = g'(x).

So the FTC Part II assumes that the antiderivative exists. Also, the FTC Part II assumes knowledge of the FTC Part I since it is often given as a corollary to the first part. So I think we can safely assume the antiderivative exists?

BiP
 
Bipolarity said:
I'm sorry I don't think I was clear enough. I am referring to part II of the FTC, not part I.
According to http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus:

Second part

This part is sometimes referred to as the Second Fundamental Theorem of Calculus[7] or the Newton–Leibniz Axiom.

Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b], f(x) = g'(x).

So the FTC Part II assumes that the antiderivative exists. Also, the FTC Part II assumes knowledge of the FTC Part I since it is often given as a corollary to the first part. So I think we can safely assume the antiderivative exists?

BiP

On wikipedia, the second part says that if f has an antiderivative F, then

\int_a^bf(t)dt = F(b)-F(a)

How exactly did you prove that in your OP?
 
Hmm I just noticed that my textbook gives slightly different statements of the FTC.
I use the one by Larson and Edwards. I will read wiki's proof. Thanks though!

BiP
 
the fundamental theorem of calculus says that the two operations of differentiation and integration are inverse to each other. This is two statements:
1) the derivative of the integral of g is g, and
2) the integral of the derivative of f is f, (up to a constant).

These have different hypotheses. The first one is true for any continuous g, but not for every integrable g. E.g. if g is a step function, then its integral is continuous but only differentiable away from the points where the steps jump. In general, an integrable function is continuous except on a thin set (a set of measure zero). Its integral is then not just continuous but Lipschitz continuous everywhere. It is differentiable wherever the integrand was continuous, i.e. off the measure-zero set. Hence on this measure-zero set it may not even make sense to ask whether the derivative of the integral equals the original function. Even where it is differentiable, the derivative may differ from the original function at points where that function is not continuous.
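To make the step-function example concrete, here is a small numeric illustration (the particular step function, interval, and sample points are my own choices, not from the thread): the integral of a jump is a continuous ramp whose one-sided difference quotients disagree at the jump.

```python
# g has a jump at t = 0. Its integral G(x) = ∫_{-1}^x g(t) dt is the
# continuous ramp max(x, 0), but G is not differentiable at x = 0:
# the right difference quotient is ≈ 1 while the left one is ≈ 0.

def g(t):
    return 0.0 if t < 0 else 1.0

def G(x, n=100000):
    # left-endpoint Riemann sum of g over [-1, x]
    a = -1.0
    h = (x - a) / n
    return sum(g(a + i * h) for i in range(n)) * h

eps = 1e-3
right = (G(eps) - G(0.0)) / eps   # ≈ 1
left = (G(0.0) - G(-eps)) / eps   # ≈ 0
print(right, left)
```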

E.g. Thomae's function (often loosely called the modified Dirichlet function), equal to zero at all irrationals and equal to 1/q at a rational p/q with p, q relatively prime, is integrable with integral zero. Hence its integral is differentiable everywhere, but the derivative of the integral only equals the original function at the irrationals.

The second part of the theorem is more delicate still. If f is Lipschitz continuous, then f is differentiable except on a set of measure zero, but it is not clear, at least to me, whether the derivative can be extended to an integrable function. If we assume also that the derivative of f has an integrable extension g, however, then I believe the integral of g does equal f(x)-f(a).

However, if f is only continuous with a derivative almost everywhere that is integrable, there is no reason for that integral to be anything at all like the original function. E.g. the Cantor function is continuous and increases weakly monotonically from 0 to 1, but its derivative equals 0 almost everywhere. Thus the integral of its derivative is zero, even though f(1)-f(a) here is 1.
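A small numeric illustration of the Cantor function claim; the digit-by-digit evaluation below is a standard construction (my own implementation, not from the thread), reading ternary digits of x and emitting binary digits of c(x).

```python
# Cantor function c: [0,1] -> [0,1]. Each ternary digit of x
# contributes a binary digit of c(x); on a middle third c is constant.

def cantor(x, depth=40):
    y, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1/3:
            x = 3 * x                # left third: binary digit 0
        elif x > 2/3:
            y += scale               # right third: binary digit 1
            x = 3 * x - 2
        else:
            return y + scale         # middle third: c locally constant
        scale /= 2
    return y

# c climbs from 0 to 1 over [0, 1] ...
print(cantor(0.0), cantor(1.0))

# ... yet its derivative vanishes off the Cantor set, e.g. at x = 1/2,
# which lies in the first middle third where c is constant:
h = 1e-6
print((cantor(0.5 + h) - cantor(0.5 - h)) / (2 * h))
```

So integrating the (almost-everywhere) derivative recovers 0, not c(1) - c(0) = 1, which is the failure described above.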

The usual statements of part 2) have stronger hypotheses. Either one assumes the derivative g of f exists everywhere and is integrable, and then the mean value theorem applied to Riemann sums implies the integral of g equals f(x)-f(a).
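To spell out that Riemann-sum argument (standard, with the partition notation chosen here): for a partition a = x_0 < x_1 < \dots < x_n = b, the mean value theorem gives points t_i \in (x_{i-1}, x_i) with

f(b) - f(a) = \sum_{i=1}^{n} \left( f(x_i) - f(x_{i-1}) \right) = \sum_{i=1}^{n} g(t_i)(x_i - x_{i-1})

The right-hand side is a Riemann sum for g, so if g is integrable, every such sum is trapped between the lower and upper sums, and letting the mesh tend to 0 gives f(b) - f(a) = \int^{b}_{a}g(t) \ dt.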

Or else one assumes that f actually has a continuous derivative g everywhere, and then g is integrable and has integral equal to f(x)-f(a). The proof is that f and the integral of g have the same derivative. Then it follows from the MVT that they differ only by a constant.

Thus the simplest statement is that if we consider two classes of functions on [a,b], C = continuous functions and C^1 = differentiable functions with continuous derivative, then differentiation and integration are inverse operations between these two classes. I.e. if I:C-->C^1 is integration and D:C^1-->C is differentiation, then DIg = g and IDf = f - f(a). Thus I is a right inverse to D. In particular, D is surjective and I is injective. I.e. every continuous g has a C^1 antiderivative, in fact many, and the operation I selects one of these: the one with value zero at a.
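As a quick numeric sanity check of DIg = g (not a proof; the sample g = cos and the discretizations are arbitrary choices of mine): approximate Ig by a Riemann sum, then differentiate it by a central difference and compare with g.

```python
import math

a = 0.0  # base point of the integration operator I

def g(t):
    # a sample continuous function
    return math.cos(t)

def Ig(x, n=10000):
    # (Ig)(x) = ∫_a^x g(t) dt, approximated by a midpoint Riemann sum
    h = (x - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def DIg(x, eps=1e-4):
    # central-difference derivative of Ig at x
    return (Ig(x + eps) - Ig(x - eps)) / (2 * eps)

x = 1.3
print(DIg(x), g(x))  # the two values should agree closely
```

Note also that Ig(a) = 0 by construction, matching the claim that I selects the antiderivative vanishing at a.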
 