
Unknown integration tricks in a book

  1. Dec 7, 2007 #1

    I have a physics book (Serway & Jewett, Physics for Scientists and Engineers, 6th Ed, Thomson 2004), and on p. 1326 there is an example in which the integration variable is changed from x to -x, so that 'dx' becomes '-dx', but I have never seen this in a calculus textbook (I have Protter and Protter and also Stewart). They also reverse the order of the limits of integration, which changes the sign in front of the integral to positive. I have never seen this in a calculus textbook either. Can anyone enlighten me, or point me to a book or website where I might find an explanation?
  3. Dec 7, 2007 #2
    You mean
    [tex]\int_a^b dx=-\int_b^a dx[/tex] ?

    It should be covered in Stewart; I can't remember the specific page, but I know it's in there.

    When they changed the variable of integration, did they multiply by a minus sign inside and outside the integral, or did they just replace x by -x? Replacing x by -x turns dx into -dx, so the integral needs to be multiplied by a minus sign on the outside, which can then be canceled by reversing the order of the limits.
  4. Dec 7, 2007 #3
    Yes, that is the second of the two tricks. It seems vaguely familiar, but I could not find an example of it in any of my calculus books. I will check Stewart again.

  5. Dec 7, 2007 #4
    I found it in Stewart and my memory of this one is returning. Perhaps I can find the other trick in this book as well.
  6. Dec 7, 2007 #5
    where's the trick? flipping the limits of integration?
  7. Dec 7, 2007 #6


    If F is an antiderivative of a function f, then we have [itex]F(x)=\int f(x)dx[/itex]. Then, if we put in the limits, we have [itex]\int^a_b f(x)dx=F(a)-F(b)[/itex]. If we reverse the limits, then [itex]\int^b_a f(x) dx=F(b)-F(a)=-(F(a)-F(b))=-\int^a_b f(x)dx[/itex].
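    A quick numeric sanity check of that identity (my own sketch, not from the thread, using f(x) = x^2 with antiderivative F(x) = x^3/3):

```python
# Check int_a^b f(x) dx = F(b) - F(a), and that swapping the
# limits flips the sign, using f(x) = x**2 with antiderivative
# F(x) = x**3 / 3 on [1, 2].
def F(x):
    return x**3 / 3

a, b = 1.0, 2.0
forward = F(b) - F(a)   # int_a^b x^2 dx, exactly 7/3
reverse = F(a) - F(b)   # int_b^a x^2 dx, exactly -7/3
assert abs(forward + reverse) < 1e-12
```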
  8. Dec 7, 2007 #7
    Yes, I did not remember the "trick" and could not find it in a book, but I have found it now.
    I think I found the other trick too, but the book just glosses over it: the 'dx' in an integral is changed to '-dx'. I don't think it is as big a deal as I thought it was; it is rather like the substitutions involved in integration by parts. It just looked like a big deal when I first saw it.
  9. Dec 7, 2007 #8
    Thanks for explaining it.
  10. Dec 7, 2007 #9


    I can't really comment without seeing the exact example that you're talking about. It's probably nothing more sinister than noting that (-1)(-1)=1.
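    For what it's worth, the x to -x substitution can be checked numerically; here is a sketch with an example of my own (not the book's), where substituting u = -x turns dx into -du and swaps the limits, and the two minus signs cancel:

```python
import math

# Substituting u = -x turns dx into -du and swaps the limits, so
# int_{-b}^{-a} f(x) dx = int_a^b f(-u) du: the two minus signs
# cancel, i.e. (-1)(-1) = 1. Checked here with f = exp.
def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 2.0
lhs = riemann(math.exp, -b, -a)               # int_{-2}^{-1} e^x dx
rhs = riemann(lambda u: math.exp(-u), a, b)   # int_1^2 e^{-u} du
assert abs(lhs - rhs) < 1e-9
```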

    You're welcome.
  11. Dec 7, 2007 #10
    Could be worth pointing out: integration is typically defined on [a, b], where a <= b. Flipping the limits is notation, not a theorem. Given a < b, you define [tex]\int_b^a f(x)dx = - \int_a^b f(x)dx[/tex], whereas the right side is given by the definition of the integral. One finds this convention simplifies notation a lot, especially in the substitution theorem: if f is continuous and g is differentiable on [a,b], then
    [tex]\int_a^b f(g(x))g'(x)dx = \int_{g(a)}^{g(b)} f(s)\,ds.[/tex]
    Without the above convention, the right-hand side would technically not make sense when g(a) > g(b).
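    A numeric check of that substitution theorem with a decreasing g, so that g(a) > g(b) and the right-hand side genuinely needs the flipped-limits convention (my own example, f(s) = s^2 and g(x) = 1 - x):

```python
# Check int_a^b f(g(x)) g'(x) dx = int_{g(a)}^{g(b)} f(s) ds with a
# decreasing g, so g(a) > g(b) and the right-hand side needs the
# flipped-limits convention.
def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum; if a > b, flip the limits and negate."""
    if a > b:
        return -riemann(f, b, a, n)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda s: s**2
g = lambda x: 1.0 - x       # decreasing: g(0) = 1 > g(1) = 0
gprime = lambda x: -1.0     # g'(x)
a, b = 0.0, 1.0
lhs = riemann(lambda x: f(g(x)) * gprime(x), a, b)
rhs = riemann(f, g(a), g(b))   # limits run from 1 down to 0
assert abs(lhs - rhs) < 1e-9   # both are -1/3
```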
  12. Dec 8, 2007 #11

    Gib Z


    That is actually not true. It's not just notation; it's a derivable result from the Riemann definition of the integral. Courant does it on p. 81 of Volume 1.
  13. Dec 8, 2007 #12
    Thanks, the library has that and I will have a look.
  14. Dec 8, 2007 #13

    Gib Z


    It's not actually a very complicated observation; just look up the Riemann definition of the integral. For [itex]\int^b_a f(x) dx[/itex], the sum runs from a to b: it takes certain function values in between, each multiplied by an incremental change. If the bounds are reversed, the sum runs from b to a. The order of summation does not change the result, and the function takes the same values, just in reversed order, but each incremental change is negative instead of positive.
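    That argument can be illustrated directly with a small sum (my own sketch; midpoint sampling so the two orientations visit the same set of points):

```python
# Riemann sum with the bounds reversed: the midpoint sample points
# are the same set, visited in the opposite order, but each
# increment h = (b - a)/n is negative, so the whole sum flips sign.
def riemann_sum(f, a, b, n=1000):
    h = (b - a) / n                           # negative when b < a
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

f = lambda x: x**2
forward = riemann_sum(f, 0.0, 1.0)    # approximately  1/3
backward = riemann_sum(f, 1.0, 0.0)   # approximately -1/3
assert abs(forward + backward) < 1e-9
```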
  15. Dec 8, 2007 #14
    I got the book, and it has an interesting explanation of integration by parts. I had not seen it explained this way before, and it would have helped if I had.
  16. Dec 8, 2007 #15
    Correction noted: it can be derived, but you have to define the notion of an upper/lower sum, say [tex]U_b^a(f,P)[/tex].

    It amounts to defining the integral from b to a, when a < b, by taking partitions
    P = {b = x_0 > x_1 > ... > x_n = a}, i.e., reversing the order of the index i in a typical partition of [a,b]. You get [tex]U_b^a(f,P) = -U_a^b(f,P)[/tex] and [tex]L_b^a(f,P) = -L_a^b(f,P)[/tex]. The issue that comes up is that when P' is a refinement of P, these sums go in the opposite direction from the original definition: [tex]U_b^a(f,P) \leq U_b^a(f,P') \leq L_b^a(f,P') \leq L_b^a(f,P)[/tex].

    So when f is integrable on [a,b], you get [tex]\int_b^a f(x)dx = \sup_P U_b^a(f,P) = \inf_P L_b^a(f,P) = -\sup_P L_a^b(f,P) = -\int_a^b f(x) dx[/tex], etc.

    You can look one example up in the wiki calculus book (you have to scroll midway through): http://en.wikibooks.org/wiki/Calculus/Integration.

    The calculus book above uses a "1/n mesh" in its definition of the integral, which works for continuous functions but not for all Riemann-integrable functions.
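    A small numeric illustration of those sign flips (my own sketch; sampled min/max stand in for the inf/sup over each subinterval):

```python
# Darboux sums on a partition listed from a up to b, and on the same
# partition listed from b down to a: with signed increments
# (right - left), each sum simply flips sign, as claimed above.
def darboux(f, pts, samples=200):
    """Approximate (lower, upper) sums for the partition pts."""
    lo = hi = 0.0
    for left, right in zip(pts, pts[1:]):
        x0, x1 = min(left, right), max(left, right)
        vals = [f(x0 + k * (x1 - x0) / samples) for k in range(samples + 1)]
        lo += min(vals) * (right - left)   # increment keeps its sign
        hi += max(vals) * (right - left)
    return lo, hi

f = lambda x: x**2
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
L_ab, U_ab = darboux(f, pts)          # usual orientation, a up to b
L_ba, U_ba = darboux(f, pts[::-1])    # reversed, b down to a
assert abs(L_ba + L_ab) < 1e-12 and abs(U_ba + U_ab) < 1e-12
```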
    Last edited: Dec 8, 2007
  17. Dec 8, 2007 #16



    Integral = Maple = Solution

    That's my trick. :smile:
  18. Dec 8, 2007 #17



    I am so happy that some of you are reading Courant. You will be very glad you did. Notice that already you are having the experience of being able to answer others' questions.
  19. Dec 8, 2007 #18
    So, Courant or Apostol?
  20. Dec 9, 2007 #19
    Of course it becomes a complicated observation when the Riemann integral from b to a isn't even defined for b > a. And I believe the reason most sources (such as Rudin, though I would include the wiki book as such a source too) don't define "negative Riemann sums" is that doing so is redundant.

    The only tricky thing about the question is that the notion of a "negative Riemann sum" is not usually defined, let alone a "Riemann integral" from b to a. As you mentioned, it evidently is defined rigorously in Courant's book.

    If you do sit down and define these terms rigorously, you find that you do have to prove the above claim; it is not just an observation. Once the terms are defined, it amounts to proving that f is "integrable from b to a" if and only if f is "integrable from a to b", in which case [itex]\int_b^a f(x)dx = - \int_a^b f(x)dx[/itex]. Not a hard proof, but impossible without defining the relevant terms.

    I would just add that, with all of the above in mind, there seems to be no loss of generality, and no measurable loss of rigor, in simply defining [tex]\int_b^a f(x)dx = - \int_a^b f(x)dx[/tex].

    For example, in Rudin's Real and Complex Analysis, the Lebesgue integral of a complex function f is defined as [tex]\int u^+ - \int u^- + i\left(\int v^+ - \int v^-\right)[/tex], without worrying about starting from scratch with a definition for complex-valued functions.

    It is a pity we spent so much of the discussion on the philosophy of flipping the limits (hardly rocket science) when discussing actual "tricks" might have been more productive. Anyway, just thoughts.
    Last edited: Dec 9, 2007