Unknown integration tricks in a book

1. Dec 7, 2007

Pollywoggy

unknown integration "tricks" in a book

I have a physics book (Serway & Jewett, Physics for Scientists and Engineers, 6th Ed, Thomson 2004), and on p. 1326 there is an example in which the integration variable is changed from x to -x, as in 'dx' to '-dx', but I have never seen this in a calculus textbook (I have Protter and Protter and also Stewart). They also reverse the order of the limits of integration, which changes the sign in front of the integral to positive. I have never seen this in a calculus textbook either. Can anyone enlighten me, or point me to a book or website where I might find an explanation?

2. Dec 7, 2007

bob1182006

You mean
$$\int_a^b f(x)\,dx=-\int_b^a f(x)\,dx$$ ?

It should be covered in Stewart; I can't remember the specific page, but I know it's covered.

When they changed the variable of integration, did they multiply inside and outside by a minus sign, or did they just replace x by -x? Replacing x by -x needs to be compensated by a minus sign outside the integral, which can then be canceled by reversing the order of the limits of integration.

3. Dec 7, 2007

Pollywoggy

Yes, that is the second of the two tricks; it seems vaguely familiar, but I could not find an example of it in any of my calculus books. I will check Stewart again.

thanks

4. Dec 7, 2007

Pollywoggy

I found it in Stewart and my memory of this one is returning. Perhaps I can find the other trick in this book as well.

5. Dec 7, 2007

ice109

where's the trick? flipping the limits of integration?

6. Dec 7, 2007

cristo

Staff Emeritus
If F is an antiderivative of a function f, then we have $F(x)=\int f(x)\,dx$. Then, if we put in the limits, we have $\int^a_b f(x)\,dx=F(a)-F(b)$. If we reverse the limits, then $\int^b_a f(x)\,dx=F(b)-F(a)=-(F(a)-F(b))=-\int^a_b f(x)\,dx$.
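To make the sign flip concrete, here is a small numerical check (a sketch of my own, not from the thread; f(x) = x² and the limits a = 2, b = 0 are arbitrary choices):

```python
def F(x):
    """An antiderivative of f(x) = x**2."""
    return x**3 / 3

a, b = 2.0, 0.0

forward = F(a) - F(b)   # plays the role of the integral from b to a
reverse = F(b) - F(a)   # limits reversed: the integral from a to b

assert forward == -reverse   # reversing the limits flips the sign
```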

7. Dec 7, 2007

Pollywoggy

Yes, I did not remember the "trick" and could not find it in a book, but I have found it now.
I think I found the other trick too, though the book just glosses over it: the 'dx' in an integral is changed to '-dx'. I don't think it is as big a deal as I thought it was, sort of like the substitutions involved in integration by parts. It just looked like a big deal when I first saw it.
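As an illustration of the dx to -dx step (a sketch of my own, not the book's example; f = exp and the interval are arbitrary choices): setting u = -x turns dx into -du, and the extra minus sign is absorbed by swapping the limits.

```python
import math

def midpoint_sum(f, a, b, n=100_000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Substituting u = -x in the left integral gives dx = -du and flips the
# limits, giving the right integral:
#   integral of e^x over [-1, 0]  =  integral of e^(-u) over [0, 1]
lhs = midpoint_sum(math.exp, -1.0, 0.0)
rhs = midpoint_sum(lambda u: math.exp(-u), 0.0, 1.0)

exact = 1 - math.exp(-1)
assert abs(lhs - exact) < 1e-6
assert abs(lhs - rhs) < 1e-9
```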

8. Dec 7, 2007

Pollywoggy

Thanks for explaining it.

9. Dec 7, 2007

cristo

Staff Emeritus
I can't really comment without seeing the exact example that you're talking about. It's probably nothing more sinister than noting that (-1)(-1)=1.

You're welcome.

10. Dec 7, 2007

It could be worth pointing out that integration is typically defined on [a, b], where a <= b. Flipping the limits is notation, not a theorem: given a < b, you define $$\int_b^a f(x)\,dx = - \int_a^b f(x)\,dx,$$ where the right-hand side is the one given by the definition of the integral. One finds this convention simplifies notation a lot, especially in the theorem that if f is continuous and g is differentiable on [a, b], then
$$\int_a^b f(g(x))g'(x)\,dx = \int_{g(a)}^{g(b)} f(s)\,ds.$$
Without the above convention, the right-hand side would technically not make sense for g(a) > g(b).
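A quick numerical sketch of why the convention is convenient (my own example; g(x) = -x and f(s) = s² are arbitrary choices): with a decreasing g, the substitution rule hands you an integral whose upper limit is below its lower limit, and the reversed-limits convention makes both sides agree.

```python
def riemann(f, a, b, n=100_000):
    """Midpoint Riemann sum over [a, b]; for a > b, apply the convention
    that the integral from a to b is minus the integral from b to a."""
    if a > b:
        return -riemann(f, b, a, n)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda s: s**2
g = lambda x: -x          # decreasing, so g(0) = 0 > g(1) = -1
g_prime = lambda x: -1.0

lhs = riemann(lambda x: f(g(x)) * g_prime(x), 0.0, 1.0)
rhs = riemann(f, g(0.0), g(1.0))   # "from 0 to -1": needs the convention

assert abs(lhs - (-1 / 3)) < 1e-6  # both sides equal -1/3
assert abs(lhs - rhs) < 1e-6
```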

11. Dec 8, 2007

Gib Z

That is actually not true: it's not just notation, it's a derivable result from the Riemann definition of the integral. Courant does it on p. 81 of Volume 1.

12. Dec 8, 2007

Pollywoggy

Thanks, the library has that and I will have a look.

13. Dec 8, 2007

Gib Z

It's not actually a very complicated observation; just look up the Riemann definition of the integral. Now think: for $\int^b_a f(x)\,dx$, the sum runs from a to b, it takes certain function values in between, multiplied by the incremental changes. If the bounds are reversed, then the sum runs from b to a, and the order of summation does not change the result. The function takes the same values, but in the reversed order, and the incremental changes are negative instead of positive.
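This argument can be checked directly (a sketch of my own; the function and partition are arbitrary choices): build a Riemann sum from an explicit list of partition points, and run the partition from b down to a so that every increment is negative.

```python
def riemann_sum(f, points):
    """Riemann sum over an ordered list of partition points, sampling f
    at the midpoint of each subinterval; increments carry their sign."""
    return sum(f((points[i] + points[i + 1]) / 2) * (points[i + 1] - points[i])
               for i in range(len(points) - 1))

n = 1000
forward = [i / n for i in range(n + 1)]   # 0 = x0 < x1 < ... < xn = 1
backward = list(reversed(forward))        # 1 = x0 > x1 > ... > xn = 0

f = lambda x: 3 * x**2   # its integral over [0, 1] is exactly 1

s_fwd = riemann_sum(f, forward)
s_bwd = riemann_sum(f, backward)

assert abs(s_fwd - 1) < 1e-6      # forward sum approximates the integral
assert abs(s_fwd + s_bwd) < 1e-9  # reversed bounds negate the sum
```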

14. Dec 8, 2007

Pollywoggy

I got the book, and it has an interesting explanation of integration by parts. I had not seen it explained this way before, and it would have helped if I had.

15. Dec 8, 2007

Correction noted: it can be derived, but you have to define the notion of an upper/lower sum, say $$U_b^a(f,P)$$...

It amounts to defining the integral from b to a, when a < b, by taking partitions
P = {b = x_0 > x_1 > ... > x_n = a}; i.e., you reverse the order of the index i in a typical partition of [a, b]. You get $$U_b^a(f,P) = -U_a^b(f,P)$$ and $$L_b^a(f,P) = -L_a^b(f,P)$$. The issue that comes up is that when P' is a refinement of P, these sums go in the opposite direction from the original definition: $$U_b^a(f,P) \leq U_b^a(f,P') \leq L_b^a(f,P') \leq L_b^a(f,P).$$

So when f is integrable on [a, b], you get $$\int_b^a f(x)\,dx = \sup_P U_b^a(f,P) = \inf_P L_b^a(f,P) = -\sup_P L_a^b(f,P) = -\int_a^b f(x)\,dx,$$ etc.
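The identities and the reversed refinement inequality above can be verified numerically for a monotone example (my own sketch; f(x) = x² on [0, 1] is an arbitrary choice, and monotonicity makes the sup and inf on each subinterval easy to compute exactly):

```python
# For increasing f, sup f on a subinterval is f at the larger endpoint and
# inf f is f at the smaller one.  Partition points may be listed ascending
# (a to b) or descending (b to a); the increments x_{i+1} - x_i carry the sign.

def U(f, points):
    """Upper Darboux sum with signed increments (f assumed increasing)."""
    return sum(f(max(points[i], points[i + 1])) * (points[i + 1] - points[i])
               for i in range(len(points) - 1))

def L(f, points):
    """Lower Darboux sum with signed increments (f assumed increasing)."""
    return sum(f(min(points[i], points[i + 1])) * (points[i + 1] - points[i])
               for i in range(len(points) - 1))

f = lambda x: x**2

coarse = [i / 10 for i in range(11)]    # 0 = x0 < ... < x10 = 1
fine = [i / 100 for i in range(101)]    # a refinement of `coarse`
rcoarse = list(reversed(coarse))        # 1 = x0 > ... > x10 = 0
rfine = list(reversed(fine))

# U_b^a(f,P) = -U_a^b(f,P) and L_b^a(f,P) = -L_a^b(f,P):
assert abs(U(f, rcoarse) + U(f, coarse)) < 1e-12
assert abs(L(f, rcoarse) + L(f, coarse)) < 1e-12

# Refinement pushes the reversed sums in the opposite direction:
# U_b^a(f,P) <= U_b^a(f,P') <= L_b^a(f,P') <= L_b^a(f,P)
assert U(f, rcoarse) <= U(f, rfine) <= L(f, rfine) <= L(f, rcoarse)
```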

You can look up an example in the wiki calculus book (you have to scroll midway through): http://en.wikibooks.org/wiki/Calculus/Integration.

The calc book above uses a "1/n mesh" in its definition of the integral, which works for continuous functions but not for all Riemann-integrable functions.

Last edited: Dec 8, 2007
16. Dec 8, 2007

JasonRox

Integral = Maple = Solution

That's my trick.

17. Dec 8, 2007

mathwonk

I am so happy that some of you are reading Courant; you will be very glad you did. Notice that you are already having the experience of being able to answer others' questions.

18. Dec 8, 2007

ice109

so courant or apostol?

19. Dec 9, 2007

Of course it's a complicated observation when the Riemann integral from b to a is not even defined for b > a. And I believe the reason most sources (such as Rudin, though I would also include the wiki as such a source) don't define "negative Riemann sums" is that it's redundant.

The only thing tricky about the question is that the notion of a "negative Riemann sum" is not defined, let alone a "Riemann integral" from b to a. As you mentioned, evidently it is defined rigorously in Courant's book.

If you do sit down and define them rigorously, what you find is that you do have to "prove" the above claim; it's not just an observation. Once you have defined the terms rigorously, it amounts to proving that f is "integrable from b to a" if and only if f is "integrable from a to b", in which case $\int_b^a f(x)\,dx = - \int_a^b f(x)\,dx$. Not a hard proof, albeit impossible without defining the relevant terms.

I would just like to add that, with all the above discussion in mind, there seems to me to be no loss of generality, and no measurable loss of rigor, in just defining $$\int_b^a f(x)\,dx = - \int_a^b f(x)\,dx$$.

For example, in Rudin's Real and Complex Analysis, the Lebesgue integral of a complex function f is defined as $$\int u^+ - \int u^- + i\left(\int v^+ - \int v^-\right),$$ etc., without worrying about starting from scratch with a definition of a complex integral.

It's a pity we discussed the philosophy of flipping the limits (not rocket science) at such length, when discussing "tricks" might have been more productive. Anyway, just thoughts.

Last edited: Dec 9, 2007