# Riemann sum and anti-derivative

1. Apr 14, 2010

### stevmg

How do you mathematically equate a Riemann sum, as area under the curve, to an anti-derivative? How do you prove, theoretically, that the one is equivalent to the other?

Assuming the function is continuous between points a and b, the Riemann sums always converge, and thus the function is integrable.

An anti-derivative is a new function constructed so that the function at hand is its derivative. It is not always possible to express one in elementary terms (for example, y = e^(-x^2) has no elementary anti-derivative), yet that function is still integrable because it is continuous over the whole domain of x.

2. Apr 14, 2010

### stevmg

Here we are, so we can discuss the above subject.

3. Apr 14, 2010

### stevmg

The Riemann sum IS a numeric quantity giving the area under the curve between two input points, x_1 and x_2. This is defined in all calculus books. The area is computed by adding up the terms f(x)*delta_x between points x_1 and x_2 and taking the limit as the number of these rectangular "slivers" approaches infinity.

The Fundamental Theorem of Integral Calculus equates this operation, called a definite integral, with evaluating an algebraic function [one which differentiates back into f(x)], or "anti-derivative," which we will label F(x): the definite integral equals F(x_2) - F(x_1).
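As a quick numerical illustration of that equality (a sketch, not part of any proof), here is a comparison of a left-endpoint Riemann sum for f(x) = x^2 on [0, 1] against F(1) - F(0), where F(x) = x^3/3 is an antiderivative:

```python
# Compare a left-endpoint Riemann sum for f(x) = x^2 on [0, 1]
# with the antiderivative difference F(1) - F(0), where F(x) = x^3 / 3.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] using n slivers."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

f = lambda x: x ** 2
F = lambda x: x ** 3 / 3

approx = riemann_sum(f, 0.0, 1.0, 100_000)
exact = F(1.0) - F(0.0)
print(approx, exact)  # the sum approaches 1/3 as n grows
```

Increasing n shrinks the gap between the two numbers, which is exactly the convergence the Fundamental Theorem guarantees.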

This stuff appears in advanced calculus books and Wikipedia: http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
also known as the "Second Fundamental Theorem of Calculus"

My question is, how do you prove this to be true?

An intuitive approach would be to take f(x) and the area between its graph and the x-axis. The additional area f(x)*delta(x) would be the area of one sliver; call it delta(A). As delta(x) --> 0, delta(A) becomes d(A) and delta(x) becomes d(x).

Thus d(A) = f(x)d(x), so d(A)/d(x) = f(x); therefore A is an anti-derivative of f(x), and the area between x_1 and x_2 is F(x_2) - F(x_1).

Now, with regard to the function y = e^(-x^2): its integral is expressed through the error function. The error function is evaluated by a Taylor-series expansion and, as such, gives a numeric approximation. There is no elementary algebraic F(x) which differentiates back into e^(-x^2). One must find the "AUC" by numeric methods. Whether this is done by obtaining the ordinates of the normal probability density function at ever-decreasing intervals, multiplying by delta(x), and adding them up, or by using erf, it is still a numeric approximation. It is NOT an algebraic solution. With high-speed calculators or computers, this can be done easily.
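To see the two numeric routes agree, here is a sketch comparing a midpoint Riemann sum for e^(-t^2) on [0, 1] against the standard-library erf. It uses the defining relation erf(x) = (2/sqrt(pi)) * integral of e^(-t^2) from 0 to x, so the integral from 0 to 1 equals sqrt(pi)/2 * erf(1):

```python
import math

def midpoint_sum(f, a, b, n):
    """Midpoint-rule Riemann sum of f over [a, b] using n slivers."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda t: math.exp(-t * t)

numeric = midpoint_sum(f, 0.0, 1.0, 10_000)
# erf(x) = (2 / sqrt(pi)) * integral of e^(-t^2) from 0 to x,
# so the integral from 0 to 1 is sqrt(pi)/2 * erf(1).
via_erf = math.sqrt(math.pi) / 2 * math.erf(1.0)
print(numeric, via_erf)  # both near 0.7468
```

Both values are approximations of the same definite integral; neither comes from an elementary antiderivative, which is the point of the post above.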

So, let's leave that alone (the probability density function of the normal distribution) for now and just accept the fact that there ARE elementary functions for which no elementary anti-derivative can be found, so we must resort to numeric methods to evaluate their integrals.

This is like climbing a mountain from different sides. One side up the hill is the Riemann sum approach, while the other path from the opposite side is the anti-derivative approach - if such an expression exists for a given function. At the summit, they meet and are the same. My intuitive proof cited above is not rigorous.

Can you help me with a more rigorous proof that "ain't off the wall?"

Any takers?

Last edited: Apr 14, 2010
4. Apr 14, 2010

### HallsofIvy

Here's an outline.

Given a continuous function y= f(x) with f(x)> 0 for all x between a and b, define F(x) to be the area of the figure bounded above by the graph of y= f(x), below by y= 0, on the left by x= a, and on the right by the vertical line at x.

For any fixed $x_0$, $F(x_0)$, then, is defined as the area, as given above, with right boundary $x= x_0$. For any number h, then, $F(x_0+ h)$ is the area, as given above, with right boundary $x= x_0+ h$.

If h> 0, by fundamental properties of "area", $F(x_0+h)- F(x_0)$ is the area of the figure bounded above by y= f(x), below by y= 0, on the left by $x= x_0$, and on the right by $x= x_0+ h$. The "height" of that figure, at each x, is f(x). Since f(x) is continuous, there exists x* between $x_0$ and $x_0+ h$ such that $f(x^*)= (F(x_0+ h)- F(x_0))/h$ (the mean value theorem for integrals). Taking the limit as h goes to 0, x* is forced to go to $x_0$, so that

$$\lim_{h\to 0}\frac{F(x_0+h)- F(x_0)}{h}= \frac{dF}{dx}(x_0)= f(x_0)$$
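The limit above can be checked numerically (a sketch only; the area function is approximated here by a fine midpoint Riemann sum, and f = cos and x_0 = 0.5 are arbitrary choices):

```python
import math

def area(f, a, b, n=20_000):
    """Midpoint-rule approximation to the area under f between a and b."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = math.cos
x0 = 0.5

# The difference quotient (F(x0 + h) - F(x0)) / h should approach
# f(x0) = cos(0.5) as h shrinks.
for h in (0.1, 0.01, 0.001):
    quotient = (area(f, 0.0, x0 + h) - area(f, 0.0, x0)) / h
    print(h, quotient)
```

As h shrinks, the printed quotients settle toward cos(0.5), which is what HallsofIvy's argument predicts.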

5. Apr 14, 2010

### Martin Rattigan

If you don't want to involve intuitive geometric concepts such as the area under a graph, then any good undergraduate analysis book should contain what you want.

See for example the chapter on the Riemann-Stieltjes integral in Apostol's Mathematical Analysis, together with prerequisite material from the preceding chapters where necessary (it contains a section on reduction to a Riemann integral).

Reproduction of this sort of thing in a forum is a bit difficult.

6. Apr 14, 2010

### Martin Rattigan

It strikes me that HallsofIvy's closing quote was obviously written in days before the internet.

7. Apr 14, 2010

### stevmg

Hey, I was born decades before the internet.

Does the above prove that the limit of the Riemann sums between x = a and x = b is the same as F(b) - F(a), where F is the antiderivative?

8. Apr 14, 2010

### starthaus

This is taught in high school calculus classes. The proof is relatively simple; you only need to know basic calculus.

Last edited: Apr 14, 2010
9. Apr 14, 2010

### stevmg

Starthaus, you do not know how far back I go.

1) They didn't teach calculus in high school when I went. In fact, they didn't even teach Relativity in physics, and it had been known for forty-odd years.

2) In the first two semesters of calculus (which came after analytic geometry in college), they never went over that proof, not even the "intuitive" one I demonstrated. They would jump from differentiation into integration and assume, without proof, that integration was the process of finding an antiderivative, and they called that "integration." As a result, we thought "integrable" meant that you could find an antiderivative, not that the limit of the Riemann sums (which we never discussed) existed. In effect, we were taught this subject in a tautological way, and it took me years to "differentiate" (in plain English, not in the calculus sense) between the limit of the Riemann sum over an interval and an antiderivative evaluated at both endpoints, the difference being the definite integral. So, keep that in mind when you say all this is taught in high school; it was not when I went. I learned calculus in college and afterward, when certain inconsistencies in what I was taught popped up and I had to rethink it.

I know the proofs that are presented in Wikipedia quite well. I was hoping for something even simpler, but then one would get my ridiculous intuitive proof (which was in a Barron's "Calculus" book from the 1950s).

I think Isaac Newton and Leibniz were freakin' geniuses, and Newton wrote his Principia in Latin. Hell, I had enough trouble learning it in English. I guess I could learn it in German or French if I had to, but that's because I lived there for a few years.

10. Apr 14, 2010

### starthaus

That's too bad, I am glad I was born much later. :-)

11. Apr 15, 2010

### stevmg

Starthaus -

You don't have any say when or where you were born or who your parents are.

You gotta live with what you get. At least neither you nor I was born in an era when we had to run from a sabertooth tiger. Of course, now, we have Tea-Baggers to worry about.