Proof for part 2 of fundamental theorem of calculus


Discussion Overview

The discussion centers around the validity of a proof for the second part of the Fundamental Theorem of Calculus (FTC). Participants explore the conditions under which the theorem holds, the existence of antiderivatives, and the implications of integrability on the relationship between differentiation and integration.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a proof involving the derivative of an integral and questions its validity, specifically whether it rigorously demonstrates the second part of the FTC.
  • Another participant asks for clarification on what is meant by the first and second parts of the FTC, indicating a need for precise definitions.
  • Some participants assert that the existence of an antiderivative is a necessary condition for the FTC, while others challenge the reasoning behind this assumption.
  • A participant notes that the FTC assumes knowledge of its first part, which is often considered a corollary, suggesting that this assumption supports the existence of an antiderivative.
  • One participant discusses the differences in statements of the FTC found in different textbooks, indicating variability in interpretations.
  • A detailed explanation is provided regarding the conditions under which the FTC holds, including the implications of continuity and differentiability of functions involved.
  • Examples are raised to illustrate cases where the derivative of an integral may not equal the original function, particularly in the context of functions that are continuous almost everywhere but not differentiable everywhere.

Areas of Agreement / Disagreement

Participants express differing views on the rigor of the proof presented and the assumptions regarding the existence of antiderivatives. There is no consensus on the validity of the proof or the necessary conditions for the FTC to hold.

Contextual Notes

Participants highlight that the proof's rigor may depend on the definitions and assumptions made regarding integrability and the existence of antiderivatives, which are not universally agreed upon in the discussion.

Bipolarity
The proof my book gives for the 2nd part of the FTC is a little hard for me to understand, but I was wondering if this particular proof (which is not from my book) is valid. I did the proof myself, I'm just wondering if it's valid.

[tex]\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)[/tex]

So suppose that the antiderivative of f(t) is F(t).

Then

[tex]\frac{d}{dx}\int^{x}_{0}f(t) \ dt = \frac{d}{dx}(F(x)-F(0)) = F'(x) = f(x)[/tex]

Is this a valid proof? If not, where am I wrong?

Thanks for your time.

BiP
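As a quick numerical sanity check of the displayed identity (not a proof), one can approximate the left side for a sample continuous integrand and compare it to f(x). This sketch uses the midpoint rule and a central difference; the integrand cos and the step sizes are arbitrary choices, not part of the original argument:

```python
import math

def integral(f, a, b, n=10_000):
    # Composite midpoint rule for the integral of f over [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def F(x):
    # F(x) = integral of f from 0 to x, with f = cos as a sample integrand
    return integral(math.cos, 0.0, x)

# Central-difference approximation of F'(x); for a continuous f it should
# agree with f(x) = cos(x) up to discretization error.
x, h = 1.0, 1e-4
dF = (F(x + h) - F(x - h)) / (2 * h)
print(abs(dF - math.cos(x)) < 1e-4)  # True
```

Of course this only illustrates the identity for one smooth integrand; as the discussion below shows, the hypotheses on f matter.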
 
Could you perhaps state what you mean by the first and second parts of the FTC?
 
If f(t) is integrable over I, then

[tex]\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)[/tex]

That is FTC Part II. My question is whether the proof I gave above can be considered rigorous.

BiP
 
Bipolarity said:
If f(t) is integrable over I, then

[tex]\frac{d}{dx}\int^{x}_{0}f(t) \ dt = f(x)[/tex]

That is FTC Part II. My question is whether the proof I gave above can be considered rigorous.

BiP

A question that could be asked is why an antiderivative F should exist.
 
f(x) is integrable over I, so why shouldn't an antiderivative exist?

BiP
 
Bipolarity said:
f(x) is integrable over I, so why shouldn't an antiderivative exist?

BiP

That's not really rigorous reasoning, is it? You have to prove that an antiderivative exists.

(in fact, that an antiderivative exists is EXACTLY what the FTC says)
 
I'm sorry I don't think I was clear enough. I am referring to part II of the FTC, not part I.
According to http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus:

Second part

This part is sometimes referred to as the Second Fundamental Theorem of Calculus[7] or the Newton–Leibniz Axiom.

Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b], f(x) = g'(x).

So the FTC Part II assumes that the antiderivative exists. Also, the FTC Part II assumes knowledge of the FTC Part I since it is often given as a corollary to the first part. So I think we can safely assume the antiderivative exists?

BiP
 
Bipolarity said:
I'm sorry I don't think I was clear enough. I am referring to part II of the FTC, not part I.
According to http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus:

Second part

This part is sometimes referred to as the Second Fundamental Theorem of Calculus[7] or the Newton–Leibniz Axiom.

Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative g on [a, b]. That is, f and g are functions such that for all x in [a, b], f(x) = g'(x).

So the FTC Part II assumes that the antiderivative exists. Also, the FTC Part II assumes knowledge of the FTC Part I since it is often given as a corollary to the first part. So I think we can safely assume the antiderivative exists?

BiP

On wikipedia, the second part says that if f has an antiderivative F, then

[tex]\int_a^bf(t)dt = F(b)-F(a)[/tex]

How exactly did you prove that in your OP?
 
Hmm I just noticed that my textbook gives slightly different statements of the FTC.
I use the one by Larson and Edwards. I will read wiki's proof. Thanks though!

BiP
 
The fundamental theorem of calculus says that the two operations of differentiation and integration are inverse to each other. This is two statements:
1) the derivative of the integral of g is g, and
2) the integral of the derivative of f is f, (up to a constant).

These have different hypotheses. The first one is true for any continuous g, but not for every integrable g. E.g. if g is a step function, then its integral is continuous but differentiable only away from the points where the steps jump. In general, an integrable function is continuous except on a thin set (a set of measure zero). Its integral is then not just continuous but Lipschitz continuous everywhere. It is differentiable wherever the integrand was continuous, i.e. off the measure-zero set. Hence on this measure-zero set it may not even make sense to ask whether the derivative of the integral equals the original function. Even where the integral is differentiable, its derivative may differ from the original function at points where that one is not continuous.
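The step-function case is easy to see numerically. This sketch (the jump location t = 1/2 and the step sizes are arbitrary choices) shows that the one-sided difference quotients of the integral disagree at the jump, so the integral is continuous there but not differentiable:

```python
def f(t):
    # Step function with a jump at t = 1/2
    return 0.0 if t < 0.5 else 1.0

def F(x, n=100_000):
    # Midpoint-rule approximation of the integral of f from 0 to x;
    # exactly, F(x) = max(0, x - 1/2), continuous but with a corner at 1/2.
    h = x / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

h = 1e-3
left = (F(0.5) - F(0.5 - h)) / h    # one-sided quotient from the left
right = (F(0.5 + h) - F(0.5)) / h   # one-sided quotient from the right
print(round(left, 2), round(right, 2))  # 0.0 1.0 -> F'(1/2) does not exist
```

Away from the jump the two quotients agree, matching the claim that the integral is differentiable wherever the integrand is continuous.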

E.g. Thomae's function (a modified Dirichlet function), equal to zero at all irrationals and to 1/q at a rational p/q in lowest terms, is integrable with integral zero. Hence the integral is differentiable everywhere, but its derivative equals the original function only at the irrationals.

The second part of the theorem is more delicate still. If f is Lipschitz continuous, then f is differentiable except on a set of measure zero, but it is not clear, at least to me, whether the derivative can be extended to an integrable function. If we assume also that the derivative of f has an integrable extension g, however, then I believe the integral of g does equal f(x)-f(a).

However, if f is only continuous with a derivative almost everywhere that is integrable, there is no reason for the integral of that derivative to be anywhere near the original function. E.g. the Cantor function is continuous and increases weakly monotonically from 0 to 1, but its derivative equals 0 almost everywhere, so the integral of its derivative is zero.
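The Cantor function can be evaluated directly from ternary digits. This sketch (the recursion depth and sample points are arbitrary choices) shows it climbing from 0 to 1 while being constant on a removed middle-third interval, where its derivative is 0:

```python
def cantor(x, depth=40):
    # Cantor ("devil's staircase") function on [0, 1] via ternary digits:
    # ternary digit 2 becomes binary digit 1; a first ternary digit 1
    # locks the value (the point sits under a removed middle third).
    if x >= 1.0:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            return value + scale
        value += scale * (digit // 2)
        scale /= 2
    return value

print(cantor(0.0), cantor(1.0))  # 0.0 1.0: the function rises from 0 to 1
print(cantor(0.4), cantor(0.6))  # 0.5 0.5: constant on the middle third (1/3, 2/3)
```

Since the function is locally constant off the Cantor set, its derivative is 0 almost everywhere, yet cantor(1) - cantor(0) = 1; so the integral of the derivative cannot recover the function.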

The usual statements of part 2) have stronger hypotheses. One either assumes that the derivative g of f exists everywhere and is integrable; then the mean value theorem applied to Riemann sums implies the integral of g equals f(x)-f(a).
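One way to fill in that mean value theorem step: for a partition [tex]a = x_0 < x_1 < \cdots < x_n = b[/tex], the MVT gives points [tex]t_i \in (x_{i-1}, x_i)[/tex] with

[tex]f(x_i) - f(x_{i-1}) = g(t_i)(x_i - x_{i-1})[/tex]

Summing over i telescopes the left side:

[tex]f(b) - f(a) = \sum_{i=1}^{n} g(t_i)(x_i - x_{i-1})[/tex]

The right side is a Riemann sum for g, so if g is integrable it converges to [tex]\int_a^b g(t) \ dt[/tex] as the partition is refined, while the left side does not change.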

Or else one assumes that f actually has a continuous derivative g everywhere, and then g is integrable and has integral equal to f(x)-f(a). The proof is that f and the integral of g have the same derivative. Then it follows from the MVT that they differ only by a constant.

Thus the simplest statement is that if we consider two classes of functions on [a,b], C = continuous functions and C^1 = differentiable functions with continuous derivative, then differentiation and integration are inverse operations on these two classes. I.e. if I:C-->C^1 is integration and D:C^1-->C is differentiation, then DIg = g and IDf = f - f(a). Thus I is a right inverse to D. In particular D is surjective and I is injective, i.e. every continuous g has a C^1 antiderivative, in fact many, and the operation I selects one of them, the one with value zero at a.
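Both round trips can be illustrated numerically. In this sketch the sample functions f = sin, g = cos and the step sizes are arbitrary choices; I and D are crude numerical stand-ins for the integration and differentiation operators:

```python
import math

def I(g, a, x, n=20_000):
    # Numerical integration operator: (I g)(x) = integral of g from a to x
    h = (x - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def D(F, x, h=1e-5):
    # Numerical differentiation operator (central difference)
    return (F(x + h) - F(x - h)) / (2 * h)

a, x = 0.0, 1.2
f = math.sin   # an f in C^1
g = math.cos   # its derivative g = f' in C

# I D f = f - f(a): integrating the derivative recovers f up to f(a)
print(abs(I(lambda t: D(f, t), a, x) - (f(x) - f(a))) < 1e-4)  # True

# D I g = g: differentiating the integral recovers g
print(abs(D(lambda y: I(g, a, y), x) - g(x)) < 1e-4)  # True
```

The asymmetry in the exact statement (IDf recovers f only up to the constant f(a)) is why I is a right inverse of D but not a two-sided one.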
 
