# Question with Spivak's proof of L'Hospital's Rule

1. Jun 23, 2012

### a7d07c8114

1. The problem statement, all variables and given/known data
There is a part of Spivak's proof of L'Hospital's Rule that I don't really see justified. It is Theorem 11-9 on p.204 of Calculus, 4th edition.

I won't reproduce the statement of L'Hospital's Rule here, except to say that it is stated for the case $\lim_{x\rightarrow a}f(x)=0$ and $\lim_{x\rightarrow a}g(x)=0$ only (i.e. indeterminate form 0/0).

Spivak first explains two assumptions implicit in the hypothesis that $\lim_{x\rightarrow a}\frac{f'(x)}{g'(x)}$ exists:
1. there is an interval $(a-\delta, a+\delta)$ on which $f'(x)$ and $g'(x)$ exist for all x in the interval, except possibly at $a$.
2. $g'(x)\neq 0$ in this interval, again except possibly at $a$.

He notes that $f$ and $g$ are not assumed to be defined at $a$.

***He then defines $f(a)=g(a)=0$, making $f$ and $g$ continuous at $a$.***

Having done so, he uses the differentiability of $f$ and $g$ on $(a, a+\delta)$ to find $x$ so that $f$ and $g$ are continuous on $[a,x]$ and differentiable on $(a,x)$. The conditions satisfied, he applies the Mean Value Theorem and the Cauchy Mean Value Theorem to $f$ and $g$, and the rest of the proof is fairly straightforward.
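If I'm reading the Cauchy Mean Value Theorem step correctly (this is my paraphrase, not Spivak's exact wording), it runs: with $f(a)=g(a)=0$, for each $x$ in $(a, a+\delta)$ there is some $\alpha_x$ in $(a, x)$ with

$$\frac{f(x)}{g(x)} = \frac{f(x)-f(a)}{g(x)-g(a)} = \frac{f'(\alpha_x)}{g'(\alpha_x)},$$

and since $a < \alpha_x < x$, we have $\alpha_x \rightarrow a$ as $x \rightarrow a$, so the limit of $f(x)/g(x)$ equals the limit of $f'(x)/g'(x)$. (The ordinary Mean Value Theorem enters to guarantee $g(x) \neq 0$: if $g(x) = g(a) = 0$ for some $x$ in the interval, $g'$ would have to vanish somewhere in $(a,x)$, contradicting assumption 2.)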

My question, regarding the section marked by ***: why is he allowed to define $f(a)=g(a)=0$? The theorem should hold even if $f$ and $g$ are not continuous at $a$; Rudin provides such a proof in Principles, though I'm having trouble following it as of now. Spivak appears to have proved a weaker result for no good reason, unless this drastically simplifies the proof, and the full result is significantly harder to follow; but then he should at least mention the oversight.

2. Jun 23, 2012

### dimension10

I don't have the book but maybe he's proving a special case? Do you see him proving a general case anytime later?

3. Jun 23, 2012

### Infinitum

Hi a7d07c8114!!

I believe you are talking about the proof here

It definitely is a special case, because it requires $a$ to be real as well as the continuity argument to hold. But it does hold for many general functions, as suggested by the article, e.g. for polynomials, exponential functions, etc. The particular case can be dealt with by defining two new functions:

$$p(x) = \begin{cases} f(x) & \text{if } x \neq a \\ 0 & \text{if } x = a \end{cases}$$

And similarly for g(x). Note that both these new functions are continuous. You can then continue with MVT and limit properties to complete the proof.
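To spell out why $p$ is continuous at $a$ (the only point at issue): the hypothesis gives $\lim_{x\rightarrow a}f(x) = 0$, so

$$\lim_{x\rightarrow a} p(x) = \lim_{x\rightarrow a} f(x) = 0 = p(a).$$

Away from $a$, $p$ agrees with $f$, which is differentiable, hence continuous, on the rest of the interval.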

A stronger version, due to Taylor, is in the Wikipedia link above.

It's disheartening to see this isn't included in his book. I have the third edition, though.

4. Jun 23, 2012

### a7d07c8114

Thanks to both for the replies, and thanks to Infinitum for the link.

Infinitum: The proof for continuous differentiability in the link is quite different from Spivak's. Spivak starts from a different set of hypotheses, namely that $\lim_{x\rightarrow c}f'(x)/g'(x)$ exists and that $f$ and $g$ are continuous at $c$, not continuous differentiability. He doesn't use the difference-quotient definition of the derivative, but instead applies the Cauchy Mean Value Theorem on a suitable interval. Which is strange, I suppose: if he's going to use the Cauchy Mean Value Theorem, why not just do the proof in general? Maybe to avoid frightening his freshman calculus class? Though Taylor's proof doesn't seem like a colossal leap from what he's done in the book so far.

I haven't found the general proof in the book either, though he does provide some other statements and forms of the theorem as problems at the end of the chapter (like the case of infinite limits, among others). It's a shame really - if he wanted to show the special case of continuously differentiable functions, I would have much preferred the version in the Wikipedia article. (Using Cauchy's Mean Value Theorem for this seems like a waste of the theorem's power to me - killing a mosquito with a sledgehammer, so to speak.) And he could at least mention that he only proved a special case.

That does answer my question though. I'll take the general proof from Taylor, and do some catch-up with Rudin so I can see his general proof in Principles as well - or, if it turns out to be the same proof, then Taylor's proof in a tenth as many words.

Thanks all! Knew I could count on you.

5. Jun 25, 2012

### Axiomer

Hey, I think Spivak's proof might hold for the theorem as it's stated. It doesn't actually matter what f(a) and g(a) are in the limit, so it suffices to prove the f(a)=g(a)=0 case to prove the theorem for all values of f(a) and g(a).

6. Jun 25, 2012

### Infinitum

Spivak's proof isn't wrong. But it only deals with the case when f(a) and g(a) are equal to zero. A function can have the limit 0 at a point, without actually attaining zero value at that point. This needs to be taken into consideration, which can easily be rectified by defining the two new functions that I stated above.

7. Jun 25, 2012

### Axiomer

But isn't this precisely why he can assume, without loss of generality, that those functions attain 0 at a? Say you prove it for:

$$p(x) = \begin{cases} f(x) & \text{if } x \neq a \\ 0 & \text{if } x = a \end{cases}$$

and a similar function q(x) for g(x). Then in the definition of the limit you can just replace $p(x)/q(x)$ by $f(x)/g(x)$, since they're equivalent as far as the limit is concerned.

Last edited: Jun 25, 2012
8. Jun 25, 2012

### a7d07c8114

If the limits were any other number then maybe, but since the limits here are 0 we actually get continuity, a pretty big loss in generality if you ask me. (Don't actually ask me though, I'm clearly not the expert :P)

Spivak's proof did basically what you did with p(x) and q(x), but I'm not sure why this would lead to a general proof. He specifically calls on the continuity of p(x) and q(x) so he can invoke the Mean Value Theorem and the Cauchy Mean Value Theorem. Which is fine as far as p and q are concerned, but I'm unsure why any conclusions depending on the continuity of p and q at a would translate to f and g.

It seems that you understand where Spivak is going with this - please explain! (and possibly provide your own proof using this idea)

9. Jun 25, 2012

### micromass

Staff Emeritus
Axiomer is right, it isn't a special case. If we want to calculate $\lim_{x\rightarrow 0}\frac{f(x)}{g(x)}$, then the values of f(0) and g(0) are totally irrelevant.

Let's formalize that: Let $f,g:\mathbb{R}\rightarrow \mathbb{R}$ be functions. Define F(x)=f(x) and G(x)=g(x) whenever x is nonzero, and define F(0)=G(0)=0. Then
$$\lim_{x\rightarrow 0} \frac{f(x)}{g(x)}=\lim_{x\rightarrow 0}\frac{F(x)}{G(x)}$$

Try to prove this statement. It's an easy epsilon-delta argument.

10. Jun 25, 2012

### a7d07c8114

Ah. Yes, that does hold. So what the two of you are saying is that Spivak is finding $\lim_{x\rightarrow a}\frac{F(x)}{G(x)}$ (for suitably defined F and G), and this equality is why the definition results in no loss of generality.
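Writing out the epsilon-delta argument for my own benefit (hopefully I have it right): suppose $\lim_{x\rightarrow 0}F(x)/G(x) = L$. Given $\varepsilon > 0$, choose $\delta > 0$ so that

$$0 < |x| < \delta \implies \left|\frac{F(x)}{G(x)} - L\right| < \varepsilon.$$

But $F(x) = f(x)$ and $G(x) = g(x)$ for every such $x$, so the same $\delta$ gives $\left|\frac{f(x)}{g(x)} - L\right| < \varepsilon$, i.e. $\lim_{x\rightarrow 0}f(x)/g(x) = L$. Swapping the roles of the two pairs gives the converse, so the two limits stand or fall together.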

That clears up everything! I wish Spivak had included this bit about the epsilon-delta argument; his exposition in this proof doesn't seem quite up to the standard of the rest of the book. For instance, he insists on redefining f and g at a instead of just defining new functions F and G to do the job, and never really justifies why he's allowed to (except to say that f and g are not necessarily defined at a). I suppose my confusion stemmed from there.

Many thanks to both Axiomer and micromass.

11. Jun 25, 2012

### Infinitum

This is what I proposed to do in my post above. It cleared things up better for me that way. It didn't hit me that he can assume f(0)=g(0)=0 to prove the general fact, though; it makes it look like a special case.

Indeed. The proof wasn't adequately worded, I believe. He made the assumptions directly, without giving their reasoning, but probably that's left to us.

Last edited: Jun 26, 2012