Why do we use anti-derivatives to find the values of definite integrals?

  1. Jan 24, 2013 #1
    It seems like we calculate integration by doing the reverse of derivation. Differentiation is basically just using short-cuts for differentiation from first principles (e.g. the power rule). If integration from first principles is the Riemann sum, then why don't we use short-cuts for the Riemann sum to do integration? Why do we instead use reverse derivation?

    Bear in mind, my knowledge of Riemann sums is extremely weak.
  3. Jan 24, 2013 #2


    User Avatar
    Homework Helper

    I find that the point of the Riemann sum is to prove that a function is integrable. It's nice that it can also give us the value of the integral on a particular interval as n → ∞.

    Antiderivatives and methods of finding them merely speed up the long process of integration, which we would otherwise have to do with a Riemann sum every time (may I note how time-consuming that is).
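    To see just how laborious that is, here is a quick sketch of my own in Python (the function and interval are arbitrary, not from anyone's post): even a simple integral needs thousands of rectangle evaluations to get a few digits, while the antiderivative ##x^3/3## gives the exact answer at once.

```python
# Midpoint Riemann sum: doing a definite integral "the long way".
# Illustrative sketch only; the example function and interval are arbitrary.

def riemann_midpoint(f, a, b, n):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Integrate x^2 on [0, 2]; the antiderivative x^3/3 gives 8/3 immediately.
approx = riemann_midpoint(lambda x: x * x, 0.0, 2.0, 100_000)
print(approx)  # close to 8/3
```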
  4. Jan 24, 2013 #3
    Why can't we just say that anti-derivatives are the 'short-cut' for Riemann summing?
  5. Jan 24, 2013 #4


    Staff: Mentor

    We say differentiation, not derivation.
    No, it's not. Differentiation is the action of finding the derivative of some function by any means. Differentiation includes doing so by the limit definition (first principle) or by such shortcuts as the power rule, product rule, quotient rule, etc.
    There's some confusion here between indefinite integrals and definite integrals. For definite integrals, one way of evaluating them is by using a Riemann sum.

    For indefinite integrals, the idea is to find a function whose derivative is the function in the indefinite integral - i.e., finding the antiderivative of the function in the integral.

    What would the Riemann sum look like for this (indefinite) integral?
    $$ \int e^x~dx$$
  6. Jan 26, 2013 #5
    I think I didn't ask my question properly, so I've made a mindmap to make my question more clear.

    Attached Files:

  7. Jan 26, 2013 #6


    Staff: Mentor

    Your mind map is not very useful, IMO, at least beyond the 2nd row.

    Under Differentiation you have the limit definition of the derivative and the various rules that are specific applications of the limit definition (constant multiple rule, sum and difference rules, product rule, quotient rule, etc.). In other words, these techniques would be leaves that come off the limit definition.

    Riemann sums are the basic way of evaluating definite integrals. They have nothing to do with indefinite integrals. To evaluate an indefinite integral, you are essentially using the differentiation rules backwards.

    Every time we develop a new differentiation formula, we get an integral formula for free. For example, there is the sum rule: d/dx(f(x) + g(x)) = f'(x) + g'(x), which implies that
    ## \int ( f'(x) + g'(x) )dx = f(x) + g(x) + \text{an arbitrary constant}##

    This formula is usually presented as the sum rule for integration:
    ## \int ( f(x) + g(x) )dx = F(x) + G(x) + \text{an arbitrary constant}##, where F'(x) = f(x) and G'(x) = g(x). IOW, F and G are antiderivatives of f and g, respectively.

    The technique of substitution in integrals is nothing more than the chain rule for differentiation, working backwards.
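    For a concrete instance (an illustration of my own, not from the thread): differentiating ##\sin(x^2)## with the chain rule gives ##2x\cos(x^2)##, so running that backwards,
    $$\int 2x\cos(x^2)\,dx = \int \cos u\,du = \sin u + C = \sin(x^2) + C, \qquad u = x^2,\ du = 2x\,dx.$$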

    The Fundamental Theorem of Calculus isn't part of differentiation - what it does is show
    1. that differentiation and antidifferentiation are essentially inverse processes
    $$ \frac{d}{dx} \int_a^x f(t)~dt = f(x)$$
    2. how to evaluate a definite integral using the antiderivative of the integrand.
    $$\int_a^b f(t)dt = F(b) - F(a)$$
    where F is an antiderivative of f (i.e., F' = f).
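    Both parts can be checked numerically. Here is an illustrative Python sketch of my own (the choice ##f(t) = \cos t## is arbitrary): part 2 via a midpoint sum against ##\sin(1) - \sin(0)##, and part 1 by differencing the area function ##A(x) = \int_0^x \cos t\,dt##.

```python
# Numerical sanity check of both parts of the FTC, with f(t) = cos(t).
# Illustrative sketch only; accuracy depends on n and h below.
import math

def riemann(f, a, b, n=200_000):
    """Midpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Part 2: the integral of cos on [0, 1] equals sin(1) - sin(0).
print(riemann(math.cos, 0.0, 1.0), math.sin(1.0))

# Part 1: the derivative of A(x) = integral of cos from 0 to x,
# estimated by a central difference, equals cos(x) at x = 0.5.
h = 1e-4
A = lambda x: riemann(math.cos, 0.0, x)
print((A(0.5 + h) - A(0.5 - h)) / (2 * h), math.cos(0.5))
```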
  8. Jan 26, 2013 #7


    User Avatar
    Gold Member

    I liked your Mind Map.

    Can I ask what you used to draw it?
  9. Jan 26, 2013 #8
    Mark44, I understand everything in your post. I think my issue is that I'm having trouble communicating my question due to my poor English and because I might have used the wrong terms by accident.

    I'll try an alternate method. I'll make a list of assumptions, and maybe you can tell me if any are incorrect. Thank you, by the way, for taking the time to reply.

    1. Differentiation by using the limit definition is essentially taking the limit of Δy/Δx where both Δx and Δy approach zero
    2. A rigorous proof of the product rule involves using the limit definition. Therefore, the product rule is a shortcut of differentiation by first principles.
    3. Therefore, you can differentiate a function either by: 1) limit definition 2) shortcuts for the limit definition such as the product rule.
    4. Integration by using the limit of the Riemann sum is essentially multiplying an infinite sum times an infinitesimally small value
    5. Integration using the limit of the Riemann sum will always give you the definite integral.
    6. As far as integration using the limit of the Riemann sum is concerned, the indefinite integral is unnecessary.
    7. Finding the definite integral using the limit of the Riemann sum is limited and not very powerful.
    8. A better method is to find the indefinite integral (or anti-derivative) and use the indefinite integral to calculate the definite integral using the fundamental theorem of calculus.
    9. Therefore, you can find the definite integral of a function either by: 1) limit of the Riemann sum 2) finding the anti-derivative and then using fundamental theorem of calculus to find the definite integral using the anti-derivative.
    10. Anti-derivative is reverse differentiation.
    11. Both differentiation and integration can be done using first principles. But differentiation has a shortcut for its first principles while integration's "shortcut" relies on reversing differentiation (with the use of anti-derivative).
    12. The reason why integration does not have a shortcut is because multiplying an infinite sum times an infinitesimally small value is not as easy (or is not as predictable?) as taking the limit of Δy/Δx where both Δx and Δy approach zero.

    http://www.text2mindmap.com/ :P
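    To make assumptions 1-3 concrete, here is a small Python sketch (my own illustration; the functions are arbitrary): the product rule and the limit definition give the same number, the rule just skips taking a limit every time.

```python
# Assumptions 1-3 in code: the product rule agrees with the limit definition.
# Illustrative sketch; h is a small finite stand-in for the limit.

def deriv_first_principles(f, x, h=1e-6):
    # the limit of dy/dx, approximated with a small but finite h
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2        # f'(x) = 2x
g = lambda x: x**3        # g'(x) = 3x^2
prod = lambda x: f(x) * g(x)

x = 1.5
by_limit = deriv_first_principles(prod, x)
by_product_rule = 2 * x * g(x) + f(x) * 3 * x**2   # f'g + fg'
print(by_limit, by_product_rule)
```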
  10. Jan 26, 2013 #9
    I think your separation of the two is superficial. We could just as easily (actually it would probably be much harder, but certainly possible) define the Riemann integral where the upper limit is a variable, and the [itex]\Delta x_i[/itex] is a function of this upper limit. I suspect that this would be terribly difficult to work with, but from this we could define a derivative as an anti-integral. Then we would find some rules that work with integrals so that people didn't have to go back to square one every time. But then somebody out there would be asking why we don't do it the other way around.

    I think you might have been getting at this with one of your mindmap comments, but the Riemann integral is not usually described as multiplying an infinite sum by an infinitesimal amount (although I'm pretty sure it can be - I don't find it particularly helpful). I think it is much better thought of as an infinite sum of a function times an infinitesimal amount. I'm not saying you were wrong, but I've found a lot of physics students having trouble by thinking about it the way you stated it.
  11. Jan 26, 2013 #10


    Staff: Mentor

    Your English is very good, and better than that of some native speakers I have seen. I agree that you might have used some terms incorrectly, but that comes with the territory when you are learning something new.
    1. That's (above) pretty much it if your partition is divided into subintervals of the same size. It's possible to divide the partition into variable-length subintervals, with smaller subintervals where the function you are integrating increases or decreases rapidly, and larger subintervals where the function is relatively flat.

      In this case you would be adding up an infinite number of terms, where each term is the product of a function value and the width of the subinterval.
      I would say that it is laborious, but very powerful. Differentiation is straightforward, but there are many functions that are very difficult or impossible to integrate analytically to get a "nice" answer. In cases like this the alternative is to use numerical methods to carry out the integration, and these techniques are variations of the Riemann sum technique you've learned.
      This is better only if the function you're integrating has a "nice" antiderivative, and there's no guarantee that this is true.
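      A concrete illustration of my own (in Python): ##e^{-x^2}## has no elementary antiderivative, so the antiderivative shortcut is unavailable, yet a midpoint Riemann sum still evaluates the definite integral; the standard library's math.erf supplies a reference value.

```python
# e^{-x^2} has no elementary antiderivative, so the FTC shortcut fails here,
# but a Riemann-sum-style numerical method still works fine. Sketch only.
import math

def riemann_midpoint(f, a, b, n=100_000):
    """Midpoint Riemann sum of f on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

approx = riemann_midpoint(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)  # known closed form via erf
print(approx, exact)
```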
  12. Jan 26, 2013 #11
    I think this is where the correct theory comes to an end.
    $$\text{This integral:}~~\int F'(x)\,dx=F(x)+C~~\text{begins the wrong theory.}$$
    Nobody has proved it, and it doesn't make sense.

    $$\text{It would be correct to go on such way:}~~F(x)=\int \limits_{x_0}^x f(t)~dt,~~F(x_0)=0;~~~~~C=\int\limits_0^C~dt.$$

    $$\text{If}~~f(t)=\frac{1}{t}:~~~F(x)=\int\limits_1^x f(t)\,dt=\ln(x)=\ln\left(\frac{x}{1}\right);~~~\int \limits_{t_1}^{t_2}\frac{dt}{t}=\ln\left(\frac{t_2}{t_1}\right);$$

    $$F(b)-F(a)=\int\limits_{a}^{b}\frac{dx}{x}=\ln\left(\frac{b}{a}\right);~~~~\int \frac{dx}{x}=\ln|x|+C~~\text{is wrong!}$$


    $$y=F(x),~~ u=F(x)+C,~~u(x,p)=F(x)+F(p)\Big|_{p=F^{-1}(C)};$$

    $$f(x)=\frac {\partial u(x,p)}{\partial x}=\frac{\partial (F(x)+C)}{\partial x};$$
    $$\int\limits_{x_0}^x f(t)\,dt=F(x)+C.$$
    "The conditions imposed on functions become a source of difficulties, which can be avoided only by means of new research into the principles of integral calculus."

    Thomas Joannes Stieltjes
    Last edited: Jan 26, 2013
  13. Jan 27, 2013 #12
    [itex]\int f(x)dx[/itex] means the antiderivative, so Mark is correct. It doesn't have to be proved, it is a definition.
  14. Jan 28, 2013 #13
    $$ \frac{d}{dx} \int_a^x f(t)~dt = f(x);$$

    $$ dx \cdot\frac{d}{dx} \int_a^x f(t)~dt = f(x)\cdot dx;$$

    $$\int d \int_a^x f(t)~dt =\int f(x)~dx;$$

    $$ \int_a^x f(t)~dt =\int f(x)~dx;$$

    $$\int f(x)~dx= F(x)-F(a)!$$
  15. Jan 28, 2013 #14
    This doesn't work, not the least of the reasons being that you are treating ##\frac{d}{dx}## as a fraction.
  16. Jan 28, 2013 #15
    ##\displaystyle\frac{d}{dx}=\lim_{\Delta x\to 0}\frac{\Delta}{\Delta x}##
  17. Jan 28, 2013 #16


    User Avatar
    Science Advisor

    The right-hand side is not mathematically well-defined, and furthermore that's not the definition of d/dx. As Vorde said, you can't treat d/dx as a fraction.
  18. Jan 28, 2013 #17
    And just to add on to this, ##\frac{d}{dx}## is just notation for a 'derivative operator', i.e. ##\frac{d}{dx}## is saying "take the derivative of this:".

    ##\frac{dy}{dx}##, which is not a derivative operator but in fact a derivative itself, can't even be considered a fraction rigorously, though for simplicity's sake you can often 'fake' it being a fraction as long as you are careful.
  19. Jan 28, 2013 #18


    Staff: Mentor

    And the right side isn't defined, either, as Δ by itself doesn't mean anything in this context.
  20. Jan 28, 2013 #19
    In your own (incorrect anyway) proof, you used the fact that [itex]\int f(x) dx=F(x)[/itex] between the third and fourth lines. You weren't correct in getting to that line, but even if you were, you assume the exact fact that you are trying to prove wrong. Here's my proof...

    Let the operator [itex]\int dx[/itex] denote the operation that sends a function [itex]f[/itex] to the equivalence class of antiderivatives [itex]F+C[/itex], where [itex]F[/itex] is any antiderivative of [itex]f[/itex] and [itex]C[/itex] is an arbitrary constant.

    $$\int f(x)\,dx=F(x)+C$$
    because that is the definition! There's no math involved. The fundamental theorem makes this notation meaningful, but it is merely notation that was chosen because it makes sense.