
Riemann sum and anti-derivative

  1. Apr 14, 2010 #1
    How do you mathematically equate a Riemann sum, as the area under a curve, to an anti-derivative? How do you prove, theoretically, that the one is equivalent to the other?

    Assuming the function is continuous between points a and b, the limit of its Riemann sums always exists, and thus the function is integrable.

    An anti-derivative is a new function, found by algebraic manipulation, whose derivative is the function at hand. Such a closed-form anti-derivative may not always be obtainable, as for y = e^(-x^2), yet that function is still integrable because it is continuous over the whole domain of x.
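
    For a simple case where such an anti-derivative does exist (just my own illustration):

    [tex]\int 2x\,dx = x^2 + C, \qquad \frac{d}{dx}\left(x^2 + C\right) = 2x[/tex]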
     
  3. Apr 14, 2010 #2
    Here we are, so we can discuss the above subject.
     
  4. Apr 14, 2010 #3
    The Riemann sum, in the limit, IS a numeric function of the two input points x_1 and x_2: the area under the curve between them. This is defined in all calculus books. The area is computed by adding up the terms f(x)*delta_x between x_1 and x_2 and taking the limit as the number of these rectangular "slivers" approaches infinity (a small numeric sketch of this follows below).
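
    A quick numeric sketch of that limiting process (my own toy example in Python, with a made-up integrand f(x) = x^2 just to have something concrete):

    [code]
# Approximate the area under f between x1 and x2 by a Riemann sum with n slivers.

def riemann_sum(f, x1, x2, n):
    """Left-endpoint Riemann sum of f over [x1, x2] with n rectangles."""
    dx = (x2 - x1) / n
    return sum(f(x1 + i * dx) for i in range(n)) * dx

if __name__ == "__main__":
    square = lambda x: x ** 2  # sample integrand; the exact area on [0, 1] is 1/3
    for n in (10, 100, 1000, 10000):
        print(n, riemann_sum(square, 0.0, 1.0, n))
    # The printed values approach 1/3 as n grows -- that limit is the area.
    [/code]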

    The Fundamental Theorem of Integral Calculus equates this operation, which is called a definite integral, with the evaluation of an algebraic function [one which differentiates back into f(x)], or "anti-derivative," which we will label F(x): the area is obtained by calculating F(x_2) - F(x_1), as written out below.
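
    In symbols (just restating the theorem as it appears in the Wikipedia article linked below):

    [tex]\int_{x_1}^{x_2} f(x)\,dx = F(x_2) - F(x_1), \qquad \text{where } F'(x) = f(x)[/tex]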

    This stuff appears in advanced calculus books and Wikipedia: http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
    also known as the "Second Fundamental Theorem of Calculus"

    My question is, how do you prove this to be true?

    An intuitive approach would be to take f(x) and the area A between its graph and the x-axis, up to some point x. The additional area of a sliver of width delta(x) would be approximately f(x)*delta(x); call it delta(A). As delta(x) --> 0 we would write that as d(A), and delta(x) as d(x).

    Thus d(A) = f(x)d(x), so [d(A)/d(x)] = f(x); therefore A is an anti-derivative of f(x), and the area between x_1 and x_2 is F(x_2) - F(x_1). The same argument is set out in symbols below.
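
    Written out in symbols (this is still only the intuitive argument, not a rigorous proof):

    [tex]\Delta A \approx f(x)\,\Delta x \;\Rightarrow\; \frac{dA}{dx} = f(x) \;\Rightarrow\; A(x) = F(x) + C \;\Rightarrow\; A(x_2) - A(x_1) = F(x_2) - F(x_1)[/tex]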

    Now, with regard to the Gaussian y = e^(-x^2), its integral is defined as the error function. The error function is evaluated from a Taylor series expansion (or similar schemes) and, as such, its values are numeric approximations. There is no elementary algebraic F(x) which differentiates back into e^(-x^2). One must solve for the "AUC" (area under the curve) by numeric methods. Whether this is done by obtaining the ordinates of the normal probability density function at ever-decreasing intervals, multiplying each by delta(x) and adding them up, or by using erf, it is still a numeric approximation; it is NOT an algebraic solution. With high-speed calculators or computers, this can be done easily (a small example follows).
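
    Here is the sort of easy computation I mean (again just my own toy example in Python; math.erf is the standard-library error function):

    [code]
# Approximate the area under e^(-x^2) on [0, 1] with a midpoint Riemann sum,
# then compare it to the value obtained from the library's error function.
import math

def midpoint_sum(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n slivers."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

if __name__ == "__main__":
    gaussian = lambda x: math.exp(-x * x)
    approx = midpoint_sum(gaussian, 0.0, 1.0, 100000)
    # erf(x) = (2/sqrt(pi)) * integral from 0 to x of e^(-t^2) dt
    via_erf = 0.5 * math.sqrt(math.pi) * math.erf(1.0)
    print(approx, via_erf)  # both print ~0.746824; neither is a closed-form algebraic answer
    [/code]

    Either way, a number comes out, but no elementary formula does.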

    So, let's leave that alone (the probability density function of a normal distribution) for now and just accept the fact that there ARE algebraic functions for which no elementary anti-derivative exists, so we must resort to numeric methods to evaluate their integrals.

    This is like climbing a mountain from different sides. One side up the hill is the Riemann sum approach, while the other path from the opposite side is the anti-derivative approach - if such an expression exists for a given function. At the summit, they meet and are the same. My intuitive proof cited above is not rigorous.

    Can you help me with a more rigorous proof that "ain't off the wall?"

    Any takers?
     
    Last edited: Apr 14, 2010
  5. Apr 14, 2010 #4

    HallsofIvy

    Science Advisor

    Here's an outline.

    Given a continuous function y= f(x) with f(x)> 0 for all x between a and b, define F(x) to be the area of the figure bounded above by the graph of y= f(x), below by y= 0, on the left by x= a, and on the right by the vertical line at x.

    For any fixed [itex]x_0[/itex], [itex]F(x_0)[/itex], then, is defined as the area, as given above, with right boundary [itex]x= x_0[/itex]. For any number h, then, [itex]F(x_0+ h)[/itex] is the area, as given above, with right boundary [itex]x= x_0+ h[/itex].

    If h> 0, by fundamental properties of "area", [itex]F(x_0+h)- F(x_0)[/itex] is the area of the figure bounded above by y= f(x), below by y= 0, on the left by [itex]x= x_0[/itex], and on the right by [itex]x= x_0+ h[/itex]. The "height" of that figure, at each x, is f(x). Since f(x) is continuous, there exists x* between [itex]x_0[/itex] and [itex]x_0+ h[/itex] such that [itex]f(x^*)= \frac{F(x_0+ h)- F(x_0)}{h}[/itex]. Taking the limit as h goes to 0, x* is forced to go to [itex]x_0[/itex], so that

    [tex]\lim_{h\to 0}\frac{F(x_0+h)- F(x_0)}{h}= \frac{dF}{dx}(x_0)= f(x_0)[/tex]
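
    To finish the outline (this is the standard closing step in any textbook treatment, nothing new): F, defined as the area, has just been shown to be an anti-derivative of f. Any other anti-derivative G differs from F by a constant, so

    [tex]\int_a^b f(x)\,dx = F(b)- F(a) = G(b)- G(a)[/tex]

    which is precisely the statement that the limit of the Riemann sums over [a, b] equals any anti-derivative evaluated between the endpoints.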
     
  6. Apr 14, 2010 #5
    If you don't want to involve intuitive geometric concepts such as the area under a graph, then any good undergraduate analysis book should contain what you want.

    See for example the chapter on the Riemann-Stieltjes integral in Apostol's Mathematical Analysis, together with prerequisite material from the preceding chapters where necessary (it contains a section on reduction to a Riemann integral).

    Reproduction of this sort of thing in a forum is a bit difficult.
     
  7. Apr 14, 2010 #6
    It strikes me that HallsofIvy's closing quote was obviously written in days before the internet.
     
  8. Apr 14, 2010 #7
    Hey, I was born decades before the internet.

    Does the above prove that the limit of the Riemann sums between x = a and x = b is the same as the antiderivative F(x) evaluated between x = a and x = b, i.e. F(b) - F(a)?
     
  9. Apr 14, 2010 #8
    This is taught in high school calculus classes. The proof is relatively simple; you only need to know basic calculus.
     
    Last edited: Apr 14, 2010
  10. Apr 14, 2010 #9
    Starthaus, you do not know how far back I go.

    1) They didn't teach calculus in high school when I went. In fact, they didn't even teach Relativity in physics, and it had been known for 40-odd years.

    2) In the first two semesters of calculus (which came after analytic geometry in college), they never went over that proof, not even the "intuitive" proof that I demonstrated. They would jump from differentiation into integration and assume, without proof, that integration was the process of finding an antiderivative, and they called that "integration." As a result, we thought "integrable" meant that you could find an antiderivative, not that the limit of the Riemann sums (which we never discussed) existed. In effect, we were taught this subject in a tautological way, and it took me years to "differentiate" (in plain English, not in the calculus sense) between the limit of the Riemann sum over an interval and an antiderivative evaluated at both ends, whose difference is the definite integral. So keep that in mind when you say that all this is taught in high school: it was not when I went. I learned calculus in college and afterward, when certain inconsistencies in what I was taught popped up and I had to rethink it.

    I know the proofs that are presented in Wikipedia quite well. I was hoping for something even simpler, but then one would get my ridiculous intuitive proof (which was in a Barron's "Calculus" book from the 1950s.)

    I think Isaac Newton and Leibniz were freakin' geniuses, and Newton wrote his Principia Mathematica in Latin. Hell, I had enough trouble learning it in English. I guess I could learn it in German or French if I had to, but that's because I lived there for a few years.
     
  11. Apr 14, 2010 #10
    That's too bad, I am glad I was born much later. :-)
     
  12. Apr 15, 2010 #11
    Starthaus -

    You don't have any say in when or where you were born or who your parents are.

    You gotta live with what you get. At least neither you nor I was born in an era when we had to run from a sabertooth tiger. Of course, now we have Tea-Baggers to worry about.
     