
Integrating Backwards

  1. Jun 2, 2010 #1

    I have a question: if I have a function that is continuous on the interval [a, b], where a <= b, why would integrating over this interval backwards (i.e. taking the definite integral from b to a) be the negative of taking the definite integral from a to b? Can someone explain this from the definition? Thanks.
  2. Jun 2, 2010 #2
    I don't think you can actually see this from the usual definition. Integrating in the opposite direction is usually defined to be the negative; I have heard that this convention is what allows us to prove the fundamental theorem of calculus.

    Intuitively, it makes sense if you think of your partition points going in the opposite direction, so that each delta x in the Riemann sum is the negative of what it would have been. There is a formal theory behind this intuition, involving orientations on manifolds, but I am not totally familiar with it.
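    Here is a rough numerical sketch of that intuition (the function f(x) = x^2, the interval [0, 2], and the number of steps are arbitrary choices on my part): a Riemann sum built with delta x = (end - start)/n flips its sign when the endpoints are exchanged.
    [code]
# Rough sketch: a left Riemann sum evaluated forwards and then "backwards",
# so that Delta x changes sign.  f(x) = x^2 on [0, 2] is an arbitrary example.

def riemann_sum(f, start, end, n=100000):
    # Left Riemann sum from `start` to `end`; dx = (end - start)/n is
    # negative whenever start > end, i.e. when we integrate backwards.
    dx = (end - start) / n
    return sum(f(start + i * dx) for i in range(n)) * dx

f = lambda x: x ** 2
print(riemann_sum(f, 0.0, 2.0))   # roughly  8/3 =  2.666...
print(riemann_sum(f, 2.0, 0.0))   # roughly -8/3 = -2.666..., same size, opposite sign
    [/code]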
  3. Jun 2, 2010 #3
    I'm not sure how far you're into properties of integrals. For example, try proving that [tex]\int_a^b f(x)dx = -\int_b^a f(x)dx[/tex].
    [tex]\int_a^b f(x)dx = F(b) - F(a)[/tex] by the Fundamental Theorem of Calculus
    [tex] - \int_a^b f(x)dx = - F(b) + F(a)[/tex]
    [tex] - \int_a^b f(x)dx = F(a) - F(b)[/tex]
    [tex] - \int_a^b f(x)dx = \int_b^a f(x)dx[/tex]
    [tex] \int_a^b f(x)dx = -\int_b^a f(x)dx[/tex]

    Sorry, I haven't thought of a simpler way to explain it, but this is assuming that the fundamental theorem is true to begin with, which may be circular reasoning.
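
    As a concrete check of the sign (taking f(x) = 2x on [0, 1] purely as an illustration, so F(x) = x^2):
    [tex]\int_0^1 2x\,dx = F(1) - F(0) = 1, \qquad \int_1^0 2x\,dx = F(0) - F(1) = -1[/tex]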
  4. Jun 2, 2010 #4
    I believe that an integral is defined as follows:

    [tex]\int_{a}^{b} f(x)\, dx = \lim_{n \rightarrow \infty} \sum_{i=1}^{n} f(x_i)\, \Delta x[/tex]

    Where [tex] \Delta x = x_{i+1} - x_i[/tex].

    Since we are going backwards, a and b switch places and [tex] \Delta x < 0 [/tex], hence the negative sign.
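
    To make the sign explicit for an equal-width partition (just an illustration of the same point), traversing from b down to a gives
    [tex]\Delta x = \frac{a - b}{n} = -\,\frac{b - a}{n} < 0 \quad \text{whenever } a < b.[/tex]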
  5. Jun 2, 2010 #5


    Or, use this: for any numbers a, b, c, [itex]\int_a^c f(x)dx= \int_a^b f(x)dx+ \int_b^c f(x)dx[/itex], which can be proven by writing each of the integrals as limits of Riemann sums.

    Taking c = a, that says [itex]\int_a^a f(x)dx= 0= \int_a^b f(x)dx+ \int_b^a f(x)dx[/itex] and so [itex]\int_a^b f(x) dx= -\int_b^a f(x) dx[/itex].
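
    For instance, with f(x) = 1 on [0, 1] (a trivial example, just to see the bookkeeping):
    [tex]\int_0^0 1\,dx = 0 = \int_0^1 1\,dx + \int_1^0 1\,dx = 1 + \int_1^0 1\,dx, \quad\text{so}\quad \int_1^0 1\,dx = -1.[/tex]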

    (Effectively, my, L'Hopital's, and Anonymous217's responses are all variations on the same thing: this result is implied by the definition of the integral in terms of Riemann sums.)
  6. Jun 3, 2010 #6
    Okay, so the key is the delta x in the Riemann sum. The thing is, my textbook (Stewart) defined the integral as such:

    It first set a <= x <= b and then set delta x = (b-a)/n, where n is the number of subintervals. In this case, I don't think I'm allowed to swap a and b as in L'Hopital's explanation, because the interval is defined with a smaller than b, so I don't see how going backwards would permit me to change the definition.