
Question with Spivak's proof of L'Hospital's Rule

  1. Jun 23, 2012 #1
    1. The problem statement, all variables and given/known data
    There is a part of Spivak's proof of L'Hospital's Rule that I don't really see justified. It is Theorem 11-9 on p.204 of Calculus, 4th edition.

    I won't reproduce the statement of L'Hospital's Rule here, except to say that it is stated for the case [itex]\lim_{x\rightarrow a}f(x)=0[/itex] and [itex]\lim_{x\rightarrow a}g(x)=0[/itex] only (i.e. indeterminate form 0/0).

    Spivak first explains two assumptions implicit in the hypothesis that [itex]\lim_{x\rightarrow a}\frac{f'(x)}{g'(x)}[/itex] exists:
    1. there is an interval [itex](a-\delta, a+\delta)[/itex] on which [itex]f'(x)[/itex] and [itex]g'(x)[/itex] exist for all x in the interval, except possibly at [itex]a[/itex].
    2. [itex]g'(x)\neq 0[/itex] in this interval, again except possibly at [itex]a[/itex].

    He notes that [itex]f[/itex] and [itex]g[/itex] are not assumed to be defined at [itex]a[/itex].

    ***He then defines [itex]f(a)=g(a)=0[/itex], making [itex]f[/itex] and [itex]g[/itex] continuous at [itex]a[/itex].***

    Having done so, he uses the differentiability of [itex]f[/itex] and [itex]g[/itex] on [itex](a, a+\delta)[/itex] to find [itex]x[/itex] so that [itex]f[/itex] and [itex]g[/itex] are continuous on [itex][a,x][/itex] and differentiable on [itex](a,x)[/itex]. The conditions satisfied, he applies the Mean Value Theorem and the Cauchy Mean Value Theorem to [itex]f[/itex] and [itex]g[/itex], and the rest of the proof is fairly straightforward.
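
    (For concreteness, here is roughly how that step plays out; this is the standard argument and may not match Spivak's wording exactly. For [itex]x[/itex] in [itex](a, a+\delta)[/itex], the Mean Value Theorem applied to [itex]g[/itex] on [itex][a,x][/itex] gives [itex]g(x)-g(a)=g'(c)(x-a)\neq 0[/itex] for some [itex]c\in(a,x)[/itex], since [itex]g'\neq 0[/itex] there; hence [itex]g(x)\neq g(a)=0[/itex]. The Cauchy Mean Value Theorem then gives a point [itex]\xi_x\in(a,x)[/itex] with
    [tex]\frac{f(x)}{g(x)}=\frac{f(x)-f(a)}{g(x)-g(a)}=\frac{f'(\xi_x)}{g'(\xi_x)},[/tex]
    and since [itex]a<\xi_x<x[/itex], letting [itex]x\rightarrow a^+[/itex] forces [itex]\xi_x\rightarrow a[/itex], so [itex]f(x)/g(x)[/itex] tends to [itex]\lim_{x\rightarrow a}f'(x)/g'(x)[/itex]; the interval [itex](a-\delta,a)[/itex] is handled symmetrically.)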

    My question, regarding the section marked by ***: why is he allowed to define [itex]f(a)=g(a)=0[/itex]? The theorem should hold even if [itex]f[/itex] and [itex]g[/itex] are not continuous at [itex]a[/itex]; Rudin gives such a proof in Principles, though I'm having trouble following it at the moment. Spivak appears to have proved a weaker result for no good reason, unless this drastically simplifies the proof and the full result is significantly harder to follow; but even then he should at least mention that he has only proved a special case.
     
  3. Jun 23, 2012 #2
    I don't have the book but maybe he's proving a special case? Do you see him proving a general case anytime later?
     
  4. Jun 23, 2012 #3
    Hi a7d07c8114!! :smile:

    I believe you are talking of the proof here

    It definitely is a special case, because it requires 'a' to be real as well as the continuity argument to hold. But it does hold for many common functions, as the article suggests, e.g. for polynomials, exponential functions, etc. The case where f and g are not defined at a (or are not zero there) can be dealt with by defining two new functions,

    [tex]p(x) = \begin{cases}
    f(x) & \text{if } x \neq a \\
    0 & \text{if } x = a
    \end{cases}[/tex]

    And similarly for g(x). Note that both these new functions are continuous. You can then continue with MVT and limit properties to complete the proof.
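
    (To spell out why: [itex]\lim_{x\rightarrow a}p(x)=\lim_{x\rightarrow a}f(x)=0=p(a)[/itex] by the 0/0 hypothesis, so [itex]p[/itex] is continuous at [itex]a[/itex]; at every other point of the interval it agrees with [itex]f[/itex], which is differentiable and hence continuous there. The same goes for [itex]q[/itex].)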

    A stronger version by Taylor is in the wikipedia link above.

    It's disheartening to see this isn't included in his book. I have the third edition, though.
     
  5. Jun 23, 2012 #4
    Thanks to both for the replies, and thanks to Infinitum for the link.

    Infinitum: The proof for continuous differentiability in the link is pretty different from Spivak's proof. Spivak starts with a different set of hypotheses, namely that [itex]\lim_{x\rightarrow c}f'(x)/g'(x)[/itex] exists and that f and g are continuous at c, but not quite continuous differentiability. He doesn't use the difference-quotient definition of the derivative, but instead applies the Cauchy Mean Value Theorem on a suitable interval. Which is strange, I suppose: if he's going to use the Cauchy Mean Value Theorem anyway, why not just do the proof in general? Maybe to avoid frightening his freshman calculus class? Though Taylor's proof doesn't seem like a colossal leap from what he's done in the book so far.

    I haven't found the general proof in the book either, though he does provide some other statements and forms of the theorem as problems at the end of the chapter (like the case of infinite limits, among others). It's a shame really - if he wanted to show the special case of continuously differentiable functions, I would have much preferred the version in the Wikipedia article. (Using Cauchy's Mean Value Theorem for this seems like a waste of the theorem's power to me - killing a mosquito with a sledgehammer, so to speak.) And he could at least mention that he only proved a special case.

    That does answer my question though. I'll take the general proof from Taylor, and do some catch-up with Rudin so I can see his general proof in Principles as well - or, if it turns out to be the same proof, then Taylor's proof in a tenth as many words.

    Thanks all! Knew I could count on you.
     
  6. Jun 25, 2012 #5
    Hey, I think Spivak's proof might hold for the theorem as it's stated. It doesn't actually matter what f(a) and g(a) are in the limit, so it suffices to prove the f(a)=g(a)=0 case to prove the theorem for all values of f(a) and g(a).
     
  7. Jun 25, 2012 #6
    Spivak's proof isn't wrong. But it only deals with the case where f(a) and g(a) are equal to zero. A function can have the limit 0 at a point without actually attaining the value zero at that point. This needs to be taken into account, and it is easily handled by defining the two new functions I gave above.
     
  8. Jun 25, 2012 #7
    But isn't this precisely why he can assume, without loss of generality, that those functions attain 0 at a? Say you prove it for:

    [tex]p(x) = \begin{cases}
    f(x) & \text{if } x \neq a \\
    0 & \text{if } x = a
    \end{cases}[/tex]

    and a similar function q(x) for g(x). Then in the definition of the limit you can just replace [itex]p(x)/q(x)[/itex] by [itex]f(x)/g(x)[/itex], since they're equivalent as far as the limit is concerned.
     
    Last edited: Jun 25, 2012
  9. Jun 25, 2012 #8
    If the limits were any other number then maybe, but since the limits here are 0 we actually get continuity - a pretty big loss of generality if you ask me. (Don't actually ask me though, I'm clearly not the expert :P)

    Spivak's proof did basically what you did with p(x) and q(x), but I'm not sure why this would lead to a general proof. He specifically calls on the continuity of p(x) and q(x) so he can invoke the Mean Value Theorem and the Cauchy Mean Value Theorem. Which is fine as far as p and q are concerned, but I'm unsure why any conclusions depending on the continuity of p and q at a would translate to f and g.

    It seems that you understand where Spivak is going with this - please explain! (and possibly provide your own proof using this idea)
     
  10. Jun 25, 2012 #9

    micromass


    Axiomer is right, it isn't a special case. If we want to calculate [itex]\lim_{x\rightarrow 0}\frac{f(x)}{g(x)}[/itex], then the values of f(0) and g(0) are totally irrelevant.

    Let's formalize that: Let [itex]f,g:\mathbb{R}\rightarrow \mathbb{R}[/itex] be functions. Define F(x)=f(x) and G(x)=g(x) whenever x is nonzero and define F(0)=G(0)=0. Then
    [tex]\lim_{x\rightarrow 0} \frac{f(x)}{g(x)}=\lim_{x\rightarrow 0}\frac{F(x)}{G(x)}[/tex]

    Try to prove this statement. It's an easy epsilon-delta argument.
     
  11. Jun 25, 2012 #10
    Ah. Yes, that does hold. So what the two of you are saying is that Spivak is finding [itex]\lim_{x\rightarrow a}\frac{F(x)}{G(x)}[/itex] (for suitably defined F and G), and this equality is why the definition results in no loss of generality.
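
    For the record, the epsilon-delta argument I have in mind (correct me if I've botched it): suppose [itex]\lim_{x\rightarrow 0}F(x)/G(x)=L[/itex]. Given [itex]\varepsilon>0[/itex], there is a [itex]\delta>0[/itex] such that
    [tex]0<|x|<\delta \;\Rightarrow\; \left|\frac{F(x)}{G(x)}-L\right|<\varepsilon.[/tex]
    But [itex]F(x)/G(x)=f(x)/g(x)[/itex] for every [itex]x\neq 0[/itex], so the same [itex]\delta[/itex] works for [itex]f/g[/itex], and [itex]\lim_{x\rightarrow 0}f(x)/g(x)=L[/itex] as well; the argument runs the same way in the other direction. The point [itex]x=0[/itex] itself never enters, thanks to the [itex]0<|x|[/itex] condition, which is exactly why [itex]F(0)[/itex] and [itex]G(0)[/itex] are irrelevant.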

    That clears up everything! I wish Spivak had included this bit about the epsilon-delta argument - his exposition in this proof doesn't seem to be quite up to the standard of the rest of the book. For instance, he insists on redefining f and g at a instead of just defining new functions F and G to do the job, and never really justifies why he's allowed to (except to say that f and g are not necessarily defined at a). I suppose the confusion probably stemmed from there.

    Many thanks to both Axiomer and micromass.
     
  12. Jun 25, 2012 #11
    This is what I proposed to do in my post above. It cleared things up better for me that way :biggrin: It didn't hit me that he could assume f(0)=g(0)=0 to prove the fact, though. It makes it look like a special case.

    Indeed. The proof wasn't adequately worded, I believe. He made the assumptions directly, without giving the reasoning behind them, but probably that's left to us :smile:
     
    Last edited: Jun 26, 2012