
The 'integration by change of variable' theorem

  1. Apr 13, 2005 #1

    quasar987

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    In my analysis textbook, the 'integration by change of variable' theorem reads

    "Theorem: Consider f a continuous fonction on [a,b], g a continuous function on [c,d] such that g' is continuous on [c,d]. If g([c,d]) is a subset of [a,b] and if g(c) = a and g(d) = b, then

    [tex]\int_a^b f(x)dx = \int_c^d f(g(t))g'(t)dt[/tex]


    But reading the proof, I nowhere see the need for g([c,d]) being a subset of [a,b]. Is this really necessary? As long as g(c) = a and g(d) = b, it should be alright, no?
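
    One way to probe this numerically (my own sketch, not from the textbook): pick a g with g(0) = 0 and g(1) = 1 that wanders outside [0,1], together with an f that happens to be continuous on all of g's range. The two sides still agree, which suggests the hypothesis that really matters is that f be continuous on g([c,d]):

    ```python
    import math

    def simpson(h, lo, hi, n=2000):
        """Composite Simpson's rule for h on [lo, hi] (n must be even)."""
        s = h(lo) + h(hi)
        dx = (hi - lo) / n
        for i in range(1, n):
            s += (4 if i % 2 else 2) * h(lo + i * dx)
        return s * dx / 3

    f = lambda x: x * x                            # continuous on all of R
    g = lambda t: t + math.sin(2 * math.pi * t)    # g(0)=0, g(1)=1, but g leaves [0,1]
    gp = lambda t: 1 + 2 * math.pi * math.cos(2 * math.pi * t)

    lhs = simpson(f, 0.0, 1.0)                               # ∫_0^1 f(x) dx = 1/3
    rhs = simpson(lambda t: f(g(t)) * gp(t), 0.0, 1.0)       # ∫_0^1 f(g(t)) g'(t) dt
    print(lhs, rhs)   # both ≈ 0.3333...
    ```

    Here g(1/4) > 1 and g(3/4) < 0, so g([0,1]) is not a subset of [0,1], yet the formula holds because f is defined and continuous everywhere g goes.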
     
  3. Apr 13, 2005 #2

    HallsofIvy

    User Avatar
    Staff Emeritus
    Science Advisor

    No. Suppose a is 0, b = 1, and g(x) is the function g(x) = 8x(1 - x) = 8x - 8x^2. To make it simple, suppose f is the constant function f(x) = 1.

    Then g'(x) = 8 - 16x and the theorem, without that requirement, would assert that
    [tex]\int_0^1 dx = 1 = \int_0^1 (8 - 16x)dx = 0![/tex]
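
    A one-line check of the arithmetic in the example above (my own sketch):

    ```python
    # Check the example: g(x) = 8x(1 - x), f(x) = 1.
    # Note d/dx [8x - 8x^2] = 8 - 16x = g'(x), i.e. g is an antiderivative of g',
    # so ∫_0^1 (8 - 16x) dx = g(1) - g(0).
    g = lambda x: 8 * x * (1 - x)
    lhs = 1.0                 # ∫_0^1 1 dx
    rhs = g(1.0) - g(0.0)     # ∫_0^1 (8 - 16x) dx
    print(lhs, rhs)           # 1.0 0.0
    print(g(1.0))             # 0.0 — note g(1) ≠ 1, the point raised in the next post
    ```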
     
  4. Apr 13, 2005 #3

    quasar987

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    But you use [0,1] as your interval [c,d]. And while g(0) = 0, g(1) does not equal 1. It's 0 too.
     
    Last edited: Apr 13, 2005
  5. Apr 15, 2005 #4
    *bump*

    Yes, I'm still curious about this also. It looks like the definition of a line integral in complex analysis:

    If

    [tex]\int_{\gamma}f(z)dz=\int_a^bf(g(t))g'(t)dt[/tex]

    ...where g(t) is the function for the parametrization of the curve,

    right? And in this case, we know that the parametrization does not matter, as long as you start and end at the correct points.

    (Also, forgive my LaTeX!)
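
    A numerical illustration of the parametrization-independence claim (my own sketch, not from the thread): trace the same curve, the quarter unit circle from 1 to i, with two different parametrizations. The integrand conj(z) is deliberately not holomorphic, so the curve itself matters, but the parametrization still does not:

    ```python
    import cmath, math

    def simpson(h, lo, hi, n=2000):
        """Composite Simpson's rule (works for complex-valued h)."""
        s = h(lo) + h(hi)
        dx = (hi - lo) / n
        for i in range(1, n):
            s += (4 if i % 2 else 2) * h(lo + i * dx)
        return s * dx / 3

    f = lambda z: z.conjugate()    # not holomorphic: the curve matters

    # Quarter unit circle from 1 to i, traced two different ways:
    g1 = lambda t: cmath.exp(1j * math.pi * t / 2)         # uniform speed
    g1p = lambda t: 1j * (math.pi / 2) * g1(t)
    g2 = lambda s: cmath.exp(1j * math.pi * s * s / 2)     # nonuniform speed
    g2p = lambda s: 1j * math.pi * s * g2(s)

    I1 = simpson(lambda t: f(g1(t)) * g1p(t), 0.0, 1.0)
    I2 = simpson(lambda s: f(g2(s)) * g2p(s), 0.0, 1.0)
    print(I1, I2)   # both ≈ (pi/2)j
    ```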
     
  6. Apr 15, 2005 #5

    quasar987

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

  7. Apr 15, 2005 #6
    It makes sense that it shouldn't matter. Think of a very simple case where g on [c,d] has a single maximal point at c < m < d, and say g(c) < g(d) < g(m). So there is a c_1, c < c_1 < m, such that g(c_1) = g(d). Since this is a simple case, we'll say g([c,c_1]) is a subset of [g(c),g(d)]. So
    [tex]\int_{g(c)}^{g(d)} f(x) dx = \int_c^{c_1} f(g(t))g'(t)dt[/tex] and
    [tex]\int_{c_1}^m f(g(t))g'(t)dt = \int_{g(c_1)}^{g(m)} f(x) dx = -\int_{g(m)}^{g(c_1)} f(x) dx = -\int_{m}^d f(g(t))g'(t)dt[/tex]
    So the sum cancels out nicely.

    Remember this is a simplified case and in no way am I saying this is a proof for the general case. You can have infinite oscillations so that you are dealing with an infinite sum with no guarantee (from me at least) that it will converge where you want it to.
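
    The cancellation in this simple case can be checked numerically (my own example, with names c_1 and m as above): take f(x) = x^2 and g(t) = sin(5πt/6) on [0,1], so g(0) = 0, g(1) = 1/2, with a single peak g(0.6) = 1 overshooting [0, 1/2]. Then c_1 = 0.2 (where g first reaches 1/2) and m = 0.6:

    ```python
    import math

    def simpson(h, lo, hi, n=2000):
        """Composite Simpson's rule for h on [lo, hi] (n must be even)."""
        s = h(lo) + h(hi)
        dx = (hi - lo) / n
        for i in range(1, n):
            s += (4 if i % 2 else 2) * h(lo + i * dx)
        return s * dx / 3

    f = lambda x: x * x
    g = lambda t: math.sin(5 * math.pi * t / 6)    # g(0)=0, g(1)=0.5, peak g(0.6)=1
    gp = lambda t: (5 * math.pi / 6) * math.cos(5 * math.pi * t / 6)
    integrand = lambda t: f(g(t)) * gp(t)

    c1, m = 0.2, 0.6     # g(c1) = g(1) = 0.5; g(m) = 1 is the overshoot peak

    up = simpson(integrand, c1, m)      # climb from 0.5 up to 1
    down = simpson(integrand, m, 1.0)   # come back down from 1 to 0.5
    print(up + down)                    # ≈ 0: the overshoot cancels

    total = simpson(integrand, 0.0, 1.0)
    print(total, simpson(f, 0.0, 0.5))  # both ≈ 0.5**3 / 3
    ```

    The piece of the substitution integral spent above g(1) = 1/2 is traversed once up and once down, and the two contributions cancel, exactly as the argument above predicts.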
     