# Original definition of Riemann Integral and Darboux Sums

Hall
Given a function ##f##, an interval ##[a,b]##, and a tagged partition ##\dot P## of it, the Riemann sum of ##f## over ##\dot P## is defined as follows:
$$S (f, \dot P) = \sum_{k=1}^{n} f(t_k) (x_k - x_{k-1})$$

A function is integrable on ##[a,b]## if for every ##\varepsilon \gt 0## there exists a ##\delta_{\varepsilon} \gt 0## such that
$$|| \dot P|| \lt \delta_{\varepsilon} \implies | S(f, \dot P) - L| \lt \varepsilon,$$
and we write ##\int_{a}^{b} f = L##.

The problem is that we don't know ##L## beforehand, so how are we supposed to carry out the epsilon-delta process?

The Darboux sums ##U(f,P)## (using the supremum of ##f## on each subinterval) and ##L(f,P)## (using the infimum) allow us to define the upper and lower integrals as follows:
$$U(f) = \inf \{ U(f,P) : P \in \mathcal{P} \}$$
$$L(f) = \sup \{ L(f,P) : P \in \mathcal{P} \}$$

And a function is integrable if ##U(f) = L (f)##.

The Darboux sums and integral have two major advantages: the existence of the infimum and supremum is guaranteed by the Completeness Axiom, and the upper and lower sums can easily be calculated. We define the integral ##\int_{a}^{b} f = U(f) = L(f)##. Although we can establish the equivalence between Riemann's original definition and Darboux's definition, how do we, in practice, use Riemann's definition to prove the integrability of a function? And if Darboux's definition was more practical, why was there a need for Riemann's definition? What did Darboux's formulation lack?
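As a concrete sketch of the Darboux machinery (my own illustration, not part of the original question, and `darboux_sums` is a hypothetical helper name): for a monotone increasing function the supremum and infimum on each subinterval sit at the endpoints, so the upper and lower sums are easy to compute and one can watch them squeeze together.

```python
# Illustration (assumption: f monotone increasing, so endpoint values give
# the inf and sup of f on each subinterval of a uniform partition).

def darboux_sums(f, a, b, n):
    """Return (L(f,P), U(f,P)) for the uniform n-piece partition of [a, b]."""
    dx = (b - a) / n
    xs = [a + k * dx for k in range(n + 1)]
    lower = sum(f(xs[k - 1]) * dx for k in range(1, n + 1))  # inf at left end
    upper = sum(f(xs[k]) * dx for k in range(1, n + 1))      # sup at right end
    return lower, upper

for n in (10, 100, 1000):
    lo, up = darboux_sums(lambda x: x * x, 0.0, 1.0, n)
    print(n, lo, up, up - lo)   # both sums squeeze toward 1/3
```

For ##f(x) = x^2## on ##[0,1]## the gap ##U - L## equals ##(b-a)(f(b)-f(a))/n##, so it shrinks like ##1/n## while both sums approach ##1/3##.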


Homework Helper
2022 Award
What did Darboux's formulation lack?

Darboux integrability is equivalent to the assertion that for every $\epsilon > 0$ there exists a partition $P$ such that $U(f,P) - L(f,P) < \epsilon$. This makes it easier to prove integrability, but doesn't help find the value of the integral. However, once you have shown integrability, you know that if you can find a sequence of telescoping Riemann sums, then you have found the value of the integral.

As an example, Darboux integrability of a continuous function follows almost immediately from the fact that a function continuous on a closed, bounded interval is uniformly continuous. Thus given $\epsilon > 0$ you can always find a partition sufficiently fine that $\sup f - \inf f < \epsilon/(b-a)$ on each interval of the partition. I leave you to prove this theorem using only the Riemann definition.

Going the other way, can you prove the fundamental theorem using only the Darboux definition? This requires relating the values of a continuously differentiable function at the endpoints of an interval to the supremum and infimum of its derivative on that interval. I don't know of a basic result about that which isn't a consequence of the fundamental theorem (for example $(b-a)\inf f' \leq \int_a^b f'(x)\,dx = f(b) - f(a) \leq (b-a)\sup f'$), but if I use the Riemann definition then I can find a telescoping sum by use of the mean value theorem.
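The mean value theorem trick mentioned above can be made concrete with a small sketch (assumptions mine: $f(x) = x^2$ with antiderivative $F(x) = x^3/3$). The MVT supplies, in each subinterval, a tag $t_k$ with $f(t_k)(x_k - x_{k-1}) = F(x_k) - F(x_{k-1})$, so the Riemann sum telescopes to $F(b) - F(a)$ exactly, for any partition:

```python
import math

# f(x) = x^2, F(x) = x^3/3; the MVT tag satisfies
# t_k^2 = (x_k^2 + x_k*x_{k-1} + x_{k-1}^2)/3 on each subinterval.
def telescoping_sum(a, b, n):
    dx = (b - a) / n
    xs = [a + k * dx for k in range(n + 1)]
    total = 0.0
    for k in range(1, n + 1):
        x0, x1 = xs[k - 1], xs[k]
        t = math.sqrt((x1 * x1 + x1 * x0 + x0 * x0) / 3.0)  # MVT tag for x^2
        total += t * t * (x1 - x0)   # equals F(x1) - F(x0) exactly
    return total

print(telescoping_sum(0.0, 1.0, 7))   # 1/3 up to rounding, even with n = 7
```

The point is that no limit over finer partitions is needed once the tags are chosen by the MVT; even a coarse partition gives the exact value.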

Hall
but doesn't help find the value of the integral.
We can use the sequential definition, easily derivable from the epsilon-delta definition, to find the integral.
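A minimal sketch of this sequential route (example mine): once integrability is known, any sequence of Riemann sums with mesh tending to zero converges to the integral, so uniform partitions with left-endpoint tags are enough.

```python
import math

def left_riemann(f, a, b, n):
    """Riemann sum with uniform subintervals and left-endpoint tags."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(n))

for n in (8, 64, 512):
    print(n, left_riemann(math.sin, 0.0, math.pi, n))   # tends to 2
```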

I leave you to prove this theorem using only the Riemann definition.
I'll surely do it on my own.

Hall
One of the advantages of Riemann's definition that I have come up with is that it helps in proving the following powerful theorem:
If ##f_n \to f## uniformly, and ##f_n## is integrable for all ##n## on ##[a,b]##, then ##f## is also integrable on ##[a,b]##.
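A hedged numerical illustration of this theorem (the choice of ##f_n## is mine): ##f_n(x) = \sqrt{x^2 + 1/n}## converges uniformly to ##|x|## on ##[-1,1]## with uniform error at most ##\sqrt{1/n}##, and the integrals approach ##\int_{-1}^{1} |x|\,dx = 1##.

```python
import math

def midpoint_riemann(f, a, b, n):
    # midpoint tags on a uniform partition
    dx = (b - a) / n
    return sum(f(a + (k + 0.5) * dx) * dx for k in range(n))

for m in (10, 1000, 100000):
    f_m = lambda x, m=m: math.sqrt(x * x + 1.0 / m)
    print(m, midpoint_riemann(f_m, -1.0, 1.0, 2000))   # approaches 1
```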

Homework Helper
Gold Member
The problem is we don't know ##L## before-hand, so how are we suppose to carry out the epsilon-delta process?
That is true, but not essential to the definition. All the definition needs is the existence of a limit value, ##L##.
As a practical matter, it is very difficult to use either definition. For a small subset of problems, we can guess the limit. For a larger set of problems, we can apply the Fundamental Theorem of Calculus. For many important applications, the limit defines a special function as the integral. For many other problems, there are no closed-form solutions and no pre-defined special function.

Staff Emeritus
Gold Member
I don't think Darboux really makes things simpler. Even if I give you L, you have to show that any arbitrarily bad sequence of partitions can never get too far away from L, which sounds a lot like proving they all converge to the same number. You aren't actually told either L(f) or U(f) in the Darboux definition, and they are not easy to compute, so it's not like it magically hands you what the integral will be if it happens to exist.

Hall
I leave you to prove this theorem using only the Riemann definition.
I have tried, but without any extension this definition alone isn't proving to be helpful: for every ##\epsilon \gt 0## there exists a ##\delta \gt 0## such that
$$||\dot{P}|| \lt \delta \implies | S(f,\dot P) - L| \lt \epsilon.$$
We can extend it and come up with something like this. For any partition ##P## of an interval ##[a,b]##, define the lower and upper Riemann sums as
$$S_L (f,P) = \sum f(x_{k-1}) ~ (x_k - x_{k-1}), \qquad S_U (f,P) = \sum f(x_k) ~ (x_k - x_{k-1}).$$
It can easily be derived from the original definition that a function is integrable if and only if, for every ##\epsilon \gt 0##, there is a ##\delta## such that ##||P|| \lt \delta## implies ##| S_U - S_L | \lt \epsilon##.

Let's say we're given a continuous function ##f## on ##[a,b]##. We have to check its integrability.

Since ##f## is continuous on a closed interval, it is uniformly continuous on the interval.

Let ##P## be a partition of ##[a,b]##,
\begin{align*} | S_U (f,P) - S_L (f,P)| &= \Big| \sum [f(x_k) - f(x_{k-1})] ~ (x_k - x_{k-1}) \Big| \\ &= \big| b [ f(b) - f(x_{n-1}) ] + a [ f(x_1) - f(a) ] \big| \end{align*}
If ##||P|| \lt \delta##, then ##|a - x_1| \lt \delta## and ##|b - x_{n-1}| \lt \delta##, and thus by uniform continuity we have
$$| S_U (f,P) - S_L (f,P)| = \big| b [ f(b) - f(x_{n-1}) ] + a [ f(x_1) - f(a) ] \big| \lt b \epsilon ' + a \epsilon '.$$
Without loss of generality we can assume ##b \gt a##, and thus ##| S_U (f,P) - S_L (f,P)| \lt 2 b \epsilon '##. Taking ##\epsilon ' = \epsilon/2b## gives
$$| S_U (f,P) - S_L (f,P)| \lt \epsilon ~~\text{given that}~ ||P|| \lt \delta.$$

Thus, ##f## is integrable.

P.S. : I don't know why using align* shifts everything in alignment with the right margin.

Gold Member
@Hall This is a nice proof, but I don't see how it proves that a continuous function is Riemann integrable. It seems rather to show that if one computes the Riemann sums at the two endpoints of each interval in a partition, then their difference converges to zero.

You seem to want to prove that the upper and lower sums converge to each other, but there the function is evaluated at its maximum and minimum on each interval of the partition, not at the endpoints.

This said, I think your proof has the right idea and can be easily modified.

Gold Member
@Hall The Riemann sums can be thought of as the areas under a piecewise constant curve that approximates the graph of the function ##f##. Intuitively, as the partition is refined, this curve gets closer to ##f## much as inscribed polygons in a circle get closer to the circle as the number of sides is increased.

As the partition is refined the value of this curve at a point ##x## is the value of ##f## at an increasingly nearby point to ##x##. In the limit, if there is one, one imagines that the curve converges to the graph of ##f## and the area under the curve becomes the area under ##f##.

I think to prove this you can use only the continuity of ##f## rather than uniform continuity. Not sure.

It might be interesting to see what happens to this picture if ##f## is not continuous, for instance if ##f## has a jump discontinuity. Or if ##f## is discontinuous on the Cantor set but continuous on all of the middle thirds, e.g. if ##f## is one on each middle third and zero on the Cantor set. What if ##f## is discontinuous on a Cantor set of positive measure, e.g. the complement of all of the middle fifths rather than of the middle thirds?

Gold Member
This said, I think your proof has the right idea and can be easily modified.
I guess something along the lines of making the kth subinterval small enough that ##M_k-m_k \lt \frac {\epsilon}{2^k}##, so that the difference in sums goes to 0.

Homework Helper
Perhaps these general remarks will be of some interest:
It is true that deciding for a given function whether or not Riemann's integral exists, using only the limit definition, is not easy. For this reason, Riemann himself gave a necessary and sufficient condition for the integral to exist in the same paper where he first gave the definition of the integral. That paper is "On the representation of a function by a trigonometric series", his 1854 habilitation thesis, and the discussion of the integral is in paragraphs 4 and 5 under the heading "On the concept of a definite integral and the range of its validity". This second part, on the range of validity of the definition, seems to have been mostly ignored by later readers, except for Lebesgue, who published (50 years later) an equivalent condition. Lebesgue is mostly now credited for it, although any analysis student can easily deduce Lebesgue's criterion from that of Riemann.

The condition of Riemann is based on the notion of “oscillation” of a function at a point p, which is a measure of the discontinuity of f at p. Given a bounded function f defined near p, and an open interval I containing p, look at the smallest interval J which contains all values f takes on I. The oscillation of f at p is the limiting length of J as the length of I approaches zero. E.g. the function f(x) = x/|x| for x≠0 and f(0) = 0, has oscillation 2 at p=0, as does the function f(x) = sin(1/x) for x≠0 and f(0) = 0. A function f is continuous at p if and only if the oscillation of f at p is zero.
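A rough numerical sketch of oscillation at a point (the code and the helper name `oscillation_at` are mine, not Riemann's): sample f on shrinking symmetric intervals around p and take sup minus inf of the samples.

```python
import math

def oscillation_at(f, p, radii=(1e-1, 1e-3, 1e-5), samples=10001):
    """Estimate sup f - inf f on the shrinking intervals [p - r, p + r]."""
    out = []
    for r in radii:
        vals = [f(p + (2.0 * i / (samples - 1) - 1.0) * r) for i in range(samples)]
        out.append(max(vals) - min(vals))
    return out

f = lambda x: math.sin(1.0 / x) if x != 0 else 0.0
print(oscillation_at(f, 0.0))   # each estimate stays near 2 as r shrinks
```

This matches the example in the post: the oscillation of sin(1/x) at 0 is 2, so the estimates do not decay as the interval shrinks, whereas for a function continuous at p they would tend to 0.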

A set S has “content zero” if and only if, for every e>0, S can be covered by a finite union of intervals of total length less than e. (The intervals may be taken either closed or open, since if we cover a set by closed intervals of total length e/2, then doubling these intervals covers the set by open intervals of total length e.) Then Riemann’s criterion is this: A bounded function f on a bounded interval is integrable if and only if: for every d>0, the set S of points where f has oscillation at least d, has content zero.

Lebesgue’s criterion is this: a set S has measure zero if and only if for every e>0, the set S can be covered by an infinite sequence of intervals of total length less than e. Then a bounded function f on a bounded interval, is (Riemann) integrable if and only if, the set S where f is discontinuous, has measure zero.

Since a compact set of measure zero also has content zero, and a countable union of sets of content zero has measure zero, Lebesgue's criterion follows easily from Riemann's, and vice versa. (In fact Mike Spivak, in his little book Calculus on Manifolds, p.53, states the criterion in Lebesgue's form, but uses the Riemann approach involving the concept of oscillation, to prove it.)

It follows of course instantly that continuous functions, as well as functions with only a countable number of discontinuities, e.g. piecewise monotone functions, are Riemann Integrable on closed bounded intervals. The last case was already proved by Newton!

The connection between oscillation and (Darboux) integrability is clear since any two rectangles whose bases contain p in the interior, one above and one below the graph of f near p, must have heights differing at least by the oscillation. Thus the total length of the bases of rectangles containing points with positive oscillation (at least in their interior), must approach zero if f is to be integrable.

Of course it is still a good exercise to show that a given function satisfies Riemann's (or Darboux's) definition. Newton's result, that all monotone (hence also piecewise monotone) functions are integrable, is the easiest, and should probably be the one presented in most calculus classes. You might try it.

edit: For the first time, I have just now read some of Lebesgue's 1904 treatise "Lecons sur l'integration" concerning this topic (see e.g. pp. 23-27, Conditions of integrability).
https://quod.lib.umich.edu/u/umhistmath/acm0062.0001.001/28?rgn=full+text;view=pdf

His discussion is very thorough and apparently precise, at least to someone good at reading French (not me). He discusses in detail the concept of oscillation, and gives Riemann's criterion, but ascribes the version I gave above instead to duBois-Reymond. There is apparently a subtle distinction between oscillation at a point and ("mean") oscillation over an interval, which I blurred together in reading Riemann. So Lebesgue considers there to be at least 3 versions of the criterion, which all seem essentially identical to me, since they easily follow from one another. Indeed Lebesgue states explicitly that Riemann's own criterion fails to make clear the relation between integrability and sets of discontinuities, which I thought was quite easily deduced from Riemann's discussion.

Since Lebesgue is a master of the topic, I suggest that whatever he says is correct, and that I may have taken some things for granted unjustifiably. Still, Spivak's treatment makes all these relations clear and proceeds exactly as I have understood them from reading Riemann. So I might say that even if Riemann's statement does not say precisely what I thought it did, nonetheless after reading it, I understood exactly what Lebesgue claims is due to duBois-Reymond and himself. And I tend to give people credit for any ideas that arise immediately in my mind after reading their exposition, even if it slightly extends what they said. In particular, although Lebesgue and duBois-Reymond may have given slightly different criteria, they apparently drew almost entirely upon Riemann's ideas, and I myself was able to use those ideas to prove their statements without having read their treatments, so I give primary credit to Riemann. But probably I should also credit people who clarify ideas as well as those who generate them, since they too serve the rest of us well in our attempt to use the ideas.

A map defined in a closed interval is Riemann integrable if and only if it is bounded and the subset of its discontinuity points has measure zero.

A definition need not apply in any practical sense. If it renders the concept "well defined" that's good enough. We can prove if and only if theorems for the initial definition later. Any of the equivalent conditions can be regarded as definition for the initial concept. It's good to have qualitatively different conditions equivalent to the same thing - in different scenarios, one of those might be easier to work with.

Homework Helper
To follow up on nuuskur's cogent remarks, having a criterion for integrability is useful for actually calculating the value of the integral, which was one of the original challenges considered here.

Namely, using the definition alone would require one to show that the same value is achieved by all choices of approximating Riemann sums, which is somewhat overwhelming. However, once one knows the function is integrable by applying a suitable criterion, one knows that all choices of partitions and points of evaluation will give approximations to the same limit. Hence, given that the function satisfies the criterion for integrability, one can choose any one convenient sequence of approximations whose rectangles have base length approaching zero, and their limit must equal the integral.

Hence the normal procedure for proving that a given function has integral equal to S, using the definition plus the integrability criterion of Riemann/duBois-Reymond/Lebesgue, is to first prove that the set of discontinuities has measure zero, and then to show that the limit of some convenient sequence of approximating Riemann sums, whose rectangles have base length approaching zero, is equal to S. For this purpose one is then free, e.g., to use rectangles obtained by subdividing the given interval into equal subintervals, and one may choose the evaluation points at the endpoints of the subintervals. This choice often renders the limit more visible.
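This two-step procedure can be sketched for a function with a single jump (the example is mine): f = 0 on [0, 1/2) and f = 1 on [1/2, 1] has discontinuity set {1/2}, of measure zero, hence is integrable, and a convenient sequence of right-endpoint sums over equal subintervals then exhibits the value 1/2.

```python
def right_sum(f, a, b, n):
    """Riemann sum over n equal subintervals with right-endpoint tags."""
    dx = (b - a) / n
    return sum(f(a + k * dx) * dx for k in range(1, n + 1))

step = lambda x: 1.0 if x >= 0.5 else 0.0   # jump at 1/2; integral is 1/2
for n in (10, 100, 1000):
    print(n, right_sum(step, 0.0, 1.0, n))   # tends to 0.5
```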

I.e. then the logic is: first, all approximating families of Riemann sums have the same limit, and second, some one sequence of these approximations has limit S.

If one wants instead to use the fundamental theorem of calculus, notice that one will be limited further to working only with functions which satisfy the hypotheses of that theorem, usually continuous functions. This means the fundamental theorem is less useful, or not useful at all, for evaluating the integral of those functions which have discontinuities, even if those discontinuities have measure zero. I.e. there are many functions for which the integral does exist, but to which the fundamental theorem does not readily apply, and one must then use the previous procedure involving the definition.

Actually, one can try to extend the fundamental theorem to cover functions with some discontinuities, by generalizing the concept of "antiderivative", to include certain functions which are differentiable only almost everywhere, but one needs a suitable extra condition of strong ("absolute") continuity for these functions, and even then, finding one in a given case is not so easy. i.e. for functions with some mild discontinuities, one more easily finds the "antiderivative" by finding the integral, rather than the other way around. E.g. one can see that the antiderivative, or integral function, of a step function, is a (continuous) piecewise linear function.

In general it is NOT true that if f is Riemann integrable, and if F is continuous and differentiable wherever f is continuous, with F' = f at such points, that the integral of f over [a,b] equals F(b)-F(a). See e.g. the Cantor function. I.e., one can define a function f which equals zero except on the Cantor set, where it equals 1, hence f is integrable with integral zero, but for which the Cantor function F would qualify in this weak sense as an antiderivative, even though F(1)-F(0) = 1 ≠ 0. The problem is that the Cantor function is not "absolutely" continuous. The main point is that a continuous function with derivative zero almost everywhere, need not be constant, but it is constant if it is also absolutely continuous. (Compare post #9 of Lavinia.)
https://en.wikipedia.org/wiki/Cantor_function

Of course, an even greater limitation on using the fundamental theorem is that, for almost all continuous functions, one has no idea what the antiderivative is, except for a small class of cooked examples used in calculus books, carefully constructed from elementary functions like polynomials, trig and exponential functions, usually without dividing or composing them. Hence for almost all random continuous functions, one must use the definition, plus the integrability criterion, and be content to merely approximate the integral.

Notice also that whereas Darboux's definition has some conceptual and logical advantages in understanding and proving things about integrals, Riemann's definition is more useful in calculating them, since then one has no obligation to worry about whether one has an upper or lower sum, as any Riemann sum will do. This is of course an illustration of the point of nuuskur's last sentence.

Homework Helper
Let us examine the conditions given by Riemann, duBois-Reymond and Lebesgue for a function to be Riemann integrable. After reading Lebesgue, the differences seem to concern the notion of oscillation of a function at a point (see post #11), compared to its total oscillation on an interval. This distinction has seemed to me insignificant, probably because today we take for granted the Heine Borel property of compactness, to which Lebesgue himself seemingly made some contributions. (For example his name graces the concept of "Lebesgue number" of a cover in a metric space, which yields a proof of Heine Borel for closed bounded intervals.) For simplicity, assume that f is defined on the interval [0,1], and takes values also in that interval. The argument will apply, with slight variation, to any bounded function defined on a closed bounded interval.

To prove integrability say in the form of Darboux, we want to find upper and lower rectangles for f, i.e. rectangles that lie respectively above and below the graph of f, having the same bases along the x axis, and whose total areas can be made as close together as desired. Equivalently, by looking at the “difference rectangles” extending from the top of the lower rectangles to the top of the upper rectangles, we want to be able to enclose the graph of f entirely within a finite sequence of vertical rectangles with total area as small as desired.

The height of such a difference rectangle, with base lying over a subinterval I, is equal to the total oscillation of f on the subinterval I, i.e. the difference between the supremum and the infimum of f on I. If a is a point where the oscillation of f, as defined earlier, equals d, and if e>0 is any positive number, then by definition an interval can be found containing a, on which the total oscillation of f is less than d+e.

Now assume f is continuous on [0,1], in which case the oscillation at each point equals zero, and given e>0, choose about each point of [0,1] a subinterval on which the total oscillation is less than e. Since the interiors of these subintervals cover the closed bounded interval [0,1], a finite subfamily of them also does so, by the Heine Borel property of compactness. Now just take the full collection of endpoints of these subintervals as the desired partition of [0,1]. With this partition, any subinterval they define is either equal to or smaller than one of the subintervals in the cover. Hence the total oscillation on each subinterval is less than e, and the total area of the corresponding difference rectangles above [0,1] is less than e. Thus any continuous bounded function from [0,1] to [0,1] is Darboux-, hence also Riemann-, integrable. With a small variation, one can prove the same for any continuous (hence bounded) function on any closed bounded interval.

Now suppose that f may have discontinuities, but that the set of those discontinuities has measure zero, as defined earlier. Then given any e>0, the subset of points where f has oscillation ≥ e, also has measure zero, and is closed, hence compact, and thus can be covered by a finite sequence of (open) subintervals of total length less than e. On each of these subintervals, the total oscillation of f is only known to be at most 1, the global bound on f itself, so the rectangles we use to cover the graph of f above them may have height 1, but their bases have total length less than e. Thus the total area of this subcollection of difference rectangles is less than e.

Now consider the complement in [0,1] of the union of this finite collection of open subintervals, that complement being a finite union of closed subintervals of [0,1], at every point of which f has oscillation <e. Now treat this set as we treated the whole interval [0,1] in the continuous case. I.e. at every point of it, choose an interval on which the total oscillation of f is less than e. Then choose a finite subcover, and use as partition the full collection of endpoints of this subcollection, together with the endpoints of the earlier finite subcollection of intervals. With this partition, the total area of the difference rectangles should be less than 2e. I.e. the bases of the second collection of rectangles have total length near 1, but their heights are all less than e. With slight variation, this should prove (assuming I didn’t make a mistake) that any bounded function defined on a closed bounded interval, and whose set of discontinuities has measure zero, is Darboux (and hence Riemann) integrable.

(If you wish to clarify the dependence of this construction on the lengths of the bases of the rectangles, i.e. the "mesh" of the partition, one can use the concept of "Lebesgue number" to show that any partition of sufficiently small mesh will be subordinate to the partition chosen here.)

My apologies for the length of this, but at least for me it is now pretty clear why this condition does imply integrability. The converse also appears clear, since if there were some d>0 such that any finite cover of the points of oscillation ≥ d has total length ≥ e > 0, then the total area of any sequence of difference rectangles is at least de >0.

By the way, in reference to Lavinia's question in #9 as to whether one can prove integrability of a continuous function without using uniform continuity, the argument here seems to do so, but it is deceptive. I.e. this argument uses compactness, which is all that is needed to prove uniform continuity. I.e. essentially it is compactness that seems needed, and continuity plus compactness gives uniform continuity, so I would suggest that every argument uses something essentially equivalent to uniform continuity.

Homework Helper
Pardon me. I keep thinking about this until it seems clear in my mind, and then I try to make it appear clear in writing, but it expands. Here is the latest attempt:

Let us try to imagine how one might arrive at this Riemann integrability criterion. The first concept is that of a difference rectangle for a function on an interval. Since every Riemann sum lies between the Darboux upper and lower sums, and these sums are respectively the sup and inf of all the Riemann sums for a given partition, it is almost obvious that the Riemann and Darboux definitions are equivalent, so we think in terms of the Darboux one.

Then we want a criterion for when the upper and lower sums can be made arbitrarily near one another by making the partitions fine enough. By subtracting them this means a criterion for when the difference between these upper and lower sums can be made arbitrarily small. The difference rectangles measure exactly the difference between an upper sum and a lower sum. I.e. on a given subinterval, the difference rectangle for f is the rectangle extending from the top of the lower rectangle to the top of the upper rectangle for f on the given subinterval. I.e. over a given subinterval J, the difference rectangle of f has base length equal to that of J, and extends upward from the inf of f on J to the sup of f on J. Thus the difference rectangle, is the smallest rectangle, with horizontal base and vertical sides, that entirely contains the part of the graph of f over J.

One can already see that a step function (with finitely many steps) is always integrable, since it has only a finite number of points near which the function is not constant, and if those points are chosen as endpoints of subintervals of a partition, all difference rectangles have height zero and thus area zero.

Next in trying to see when any difference rectangle has small area, one considers its height. This gives the second, and crucial concept, of oscillation of f on an interval, (and the limiting concept of oscillation at a point). Given a subinterval J (contained in the domain interval of f), the oscillation of f on J equals the sup of f on J minus the inf of f on J, i.e. exactly the height of the difference rectangle over J. Note that if J is an open subinterval, the oscillation of f on the closure of J may be strictly larger than the oscillation of f on J.

It is already almost obvious that any monotone increasing function f on a closed bounded interval [a,b] is integrable. I.e. since f is monotone, its sup on a closed subinterval J is the value of f at the right endpoint of J, and its inf on J is the value of f at the left endpoint of J. Thus on [a,b], f is bounded below by f(a) and bounded above by f(b). Moreover, for any partition at all, the sum of the heights of all the difference rectangles equals f(b)-f(a). In particular if the subintervals all have length (b-a)/n, the total area of the difference rectangles equals (b-a)(f(b)-f(a))/n, which approaches zero as the number n of subintervals increases to infinity. This case alone, if expanded trivially to (bounded) piecewise monotone functions, covers essentially every function that occurs in a calculus course, since all polynomial, trig, exponential and log functions, and their compositions, are piecewise monotone.
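The computation above can be checked numerically (a sketch of mine, with f = exp as a sample increasing function): for n equal subintervals the heights of the difference rectangles telescope, so their total area is exactly (b-a)(f(b)-f(a))/n.

```python
import math

def diff_rect_area(f, a, b, n):
    """Total difference-rectangle area for increasing f, uniform n-partition."""
    dx = (b - a) / n
    xs = [a + k * dx for k in range(n + 1)]
    # each height is f(right) - f(left); the heights telescope to f(b) - f(a)
    return sum((f(xs[k]) - f(xs[k - 1])) * dx for k in range(1, n + 1))

n = 250
print(diff_rect_area(math.exp, 0.0, 1.0, n), (math.e - 1.0) / n)  # same value
```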

To link these ideas to the behavior of f at points, e.g. its continuity or lack of it, and to expand the analysis to more general functions, one defines oscillation at a point. Given a point p in the domain of f, the oscillation of f at p equals the limit of the oscillation of f over symmetric open subintervals J containing p, as their diameters approach zero. This limit exists since as the subintervals J shrink, the sup of f on J decreases and the inf of f on J increases, hence their difference is non-negative and decreases as the diameter shrinks, and such a bounded monotone quantity has a limit as the diameter tends to zero.
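This limiting process can be mimicked numerically (an illustrative sketch of my own; the sampling only estimates the sup and inf, though it happens to be exact for the simple jump function chosen here):

```python
def osc_at(f, p, r, samples=10000):
    """Estimate the oscillation of f over the symmetric interval (p - r, p + r)
    by sampling; letting r shrink approximates the oscillation of f at p."""
    vals = [f(p + r * (2 * k / samples - 1)) for k in range(samples + 1)]
    return max(vals) - min(vals)

# A unit jump at 0: the oscillation is 1 at the jump and 0 at every other point.
jump = lambda x: 0.0 if x < 0.0 else 1.0
print(osc_at(jump, 0.0, 1e-3))  # 1.0, no matter how small r is
print(osc_at(jump, 0.5, 1e-3))  # 0.0, since f is constant near 0.5
```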

Then, given any subinterval J, the oscillation of f on J is at least as great as the oscillation of f at any interior point of J, but if J is closed, the oscillation at an endpoint p of J may be greater than the oscillation on J, since only the one-sided oscillation at p figures into the oscillation on J. In particular, the difference rectangle defined over an open subinterval J may be strictly smaller than the difference rectangle defined over the closure of J, although they have the same base. And the difference rectangle defined over a closed subinterval may have height smaller than the oscillation at an endpoint. Moreover, given any point p in the domain interval, there is a closed subinterval J containing p in its interior such that the oscillation of f on J is arbitrarily near the oscillation of f at p.

Now let us see if the assumption that f is integrable on a closed bounded interval imposes a necessary condition on the size of the set of points where the oscillation is positive. If a function has infinitely many points with positive oscillation, then most of them must be interior points of subintervals of any given partition, and we will see that this gives us some control on the size of those subintervals.

Given any positive d>0, consider the set S(d) of all points p at which the oscillation of f is ≥ d. Now given any e>0, assume a partition can be chosen with the total area of its difference rectangles less than e. Next consider all subintervals of the partition that contain points of S(d). Either such a point is an endpoint of the subinterval, or it is an interior point, and in the latter case the difference rectangle over that subinterval has height at least d. Since the total area of the difference rectangles is less than e, the total area of the difference rectangles having points of S(d) in the interior of their base interval is also less than e, whence the total length of their bases is less than e/d. Since these base intervals contain all but a finite number of the points of S(d), it follows that the set S(d) cannot be very large, in a sense made precise next.
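In symbols: if ##B## denotes the total base length of the subintervals ##J_k## whose interiors meet ##S(d)##, then since each such difference rectangle has height at least ##d##,
$$d \cdot B \;\le\; \sum_k \operatorname{osc}(f, J_k)\,\ell(J_k) \;<\; e, \qquad \text{so} \quad B \;<\; \frac{e}{d},$$
where the sum runs over those subintervals and ##\ell(J_k)## denotes the length of ##J_k##.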

I.e. given any e>0, if f is integrable on [a,b], we can choose a partition such that the total area of the difference rectangles is less than say (de/2). Then we can cover the finitely many endpoints of the partition by new subintervals of total length less than e/2. Then since the subintervals having points of S(d) in their interior determine difference rectangles of height at least d, these subintervals have total length less than e/2. Since the remainder of the points of S(d) are also covered by intervals of total length less than e/2, we have covered the whole set S(d) by a finite family of (either open or closed) intervals of total length less than e.

This is essentially Riemann's (or Riemann and du Bois-Reymond's) general condition for Riemann integrability of a function f on a closed bounded interval. I.e. for f to be integrable on [a,b], it must be true that for every d>0 and every e>0, one can cover the set S(d) of points of [a,b] where the oscillation of f is at least d, by a finite collection of intervals of total length < e.

To deduce Lebesgue's criterion, note that the set S of points where f is not continuous is the countable union of the sets S(1/n) where f has oscillation ≥ 1/n, taken for all positive integers n. Hence for f to be Riemann integrable, it must be possible to cover the set S by a countable union of intervals of total length as small as desired: e.g. given e>0, cover each S(1/n) by finitely many intervals of total length less than e/2^n. In other words, the set of discontinuities of f must have measure zero.
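A classic example (not discussed in the post, added purely for illustration) where the sets S(d) are visibly small: Thomae's function, with ##f(p/q) = 1/q## for a rational ##p/q## in lowest terms and ##f = 0## at irrationals. Its oscillation at a rational ##p/q## is ##1/q## and at every irrational is ##0##, so ##S(d)## is the finite set of rationals in ##[0,1]## with denominator at most ##1/d##, and the function is Riemann integrable even though it is discontinuous at every rational.

```python
from fractions import Fraction

def thomae_Sd(d):
    """S(d) for Thomae's function on [0,1]: the rationals p/q (lowest terms)
    with oscillation 1/q >= d, i.e. with denominator q <= 1/d -- a finite set.
    Fraction reduces p/q automatically, and the set removes duplicates."""
    qmax = int(1 / d)
    return sorted({Fraction(p, q) for q in range(1, qmax + 1) for p in range(q + 1)})

print([str(x) for x in thomae_Sd(0.25)])  # ['0', '1/4', '1/3', '1/2', '2/3', '3/4', '1']
```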

So to arrive at this criterion, ask yourself how to measure integrability by means of areas of difference rectangles. Since these rectangles must have small total area, deduce that those rectangles whose height cannot be made small must have small total base length; hence there are not many points where the oscillation is positive.

Then with the criterion for integrability in hand, one tries to prove it is sufficient. The Heine-Borel property allows assumptions about oscillation at individual points to imply properties of oscillation on all subintervals of a partition. This approach to sufficiency is given in post #14. So it seems plausible to me that the necessary condition is more elementary to derive, and probably was found first. Then some cleverness was needed to prove the sufficiency, in the form perhaps of the Heine-Borel property of closed bounded intervals.
